Git
External Links
- @[https://git-scm.com/book/en/v2]
- @[https://learnxinyminutes.com/docs/git/]

Related:
See UStore: Distributed Storage with rich semantics!!!
@[https://arxiv.org/pdf/1702.02799.pdf]
Who-is-who
  (Necessarily incomplete but still quite pertinent list of core people and companies)
- Linus Torvalds:  
  L.T. initiated the project to fix problems with distributed
  development of the Linux Kernel.
- Junio C. Hamano:  lead git maintainer (+8700 commits)
 @[https://git-blame.blogspot.com/]

Full Journey

Setup Server&Clients
- Non-normative ssh access to Git server
 ──────────────────────────────────────────┬──────────────────────────────────────────────────────────
 ºSTEP 1:º                                 │ ºSTEP 2:º
 SSH Server                                │ remote client/s
 ──────────────────────────────────────────┼──────────────────────────────────────────────────────────
  #!/bin/bash                              │   GIT_SSH_COMMAND="ssh "   # ← ENV.VAR To tune SSH *1
                                           │   GIT_SSH_COMMAND="$GIT_SSH_COMMAND Oº-oPort=1234º"
  if [[ $EUID != 0 ]] ; then               │   GIT_SSH_COMMAND="$GIT_SSH_COMMAND Gº-i ~/.ssh/key07.keyº"
    echo "exec as root/sudo"               │   GIT_SSH_COMMAND="$GIT_SSH_COMMAND ... "
    exit 1                                 │
  fi                                       │   GIT_URL="myRemoteSSHServer"
  TEAM=team01                              │ BºGIT_URL="${GIT_URL}:/var/lib/my_git_team"º
  addgroup ${TEAM}                         │ GºGIT_URL="${GIT_URL}/ourFirstProject"º
  for USER in lyarzas earizonb ; do        │                                                         
     grep "^${USER}:" /etc/passwd          │  ºgit cloneº GºmyUser1º@${GIT_URL}
     if [[ $? != 0 ]]; then                │       ^^^^^
       useradd ${USER} \                   │       create working copy of bare/non-bare repository
          --shell=/usr/bin/git-shell \     │
          --groups ${TEAM} \               │ºMake branch appear on shell prompt:º(☜strongly recommended)
          --password ${SECRET}             │(Must be done just once)
     fi                                    │ ModifyºPS1 promptº(Editing $HOME/.bashrc) to look like:
     # Add to group                        │ PS1="\h[\$(git branch 2>/dev/null | grep ^\* | sed 's/\*/branch:/')]@\$(pwd |rev| awk -F / '{print \$1,\$2}' | rev | sed s_\ _/_) \$ "
     usermod -a -G ${TEAM} ${USER}         │          └─────────────    ºshow git branchº   ───────────────────┘   └────────────── show current and parent dir. only ────────┘
  done                                     │          $(command ...): bash syntax that executes command ...
                                           │                          and replaces standard output dynamically
BºBASE_GIT_DIR=/var/lib/${TEAM}º           │                          in PS1
GºPROJECT_NAME=project01º                  │  host1 $                           ← PROMPT BEFORE:                          
  mkdir -p ${BASE_GIT_DIR}/${PROJECT_NAME} │  host01[branch: master]@dir1/dir2  ← PROMPT AFTER:
  pushd .                                  │
  cd ${BASE_GIT_DIR}/${PROJECT_NAME} ;     │
  git ºinit --bareº                        │
  popd                                     │
  FIND="find ${BASE_GIT_DIR}"              │
  find ${BASE_GIT_DIR}         \           │
   -exec chown -R root:${TEAM} {} \;       ← Fix group
  find ${BASE_GIT_DIR} -type d \           │
   -exec chmod g+rwx {}           \;       ← Fix permissions
  find ${BASE_GIT_DIR} -type f \           │
   -exec chmod g+rw  {}           \;       ← Fix permissions 
 ──────────────────────────────────────────┴──────────────────────────────────────────────────────────
*1:@[https://stackoverflow.com/questions/5767850/git-on-custom-ssh-port/50854760#50854760]
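
TIP: with Git ≥ 2.10 the custom SSH command can also be persisted per
repository, instead of exporting GIT_SSH_COMMAND in every shell
(a minimal sketch, paths/users as in the table above):

  $ git clone myUser1@myRemoteSSHServer:/var/lib/my_git_team/ourFirstProject
  $ cd ourFirstProject
  $ git config core.sshCommand "ssh -oPort=1234 -i ~/.ssh/key07.key"
      # ← fetch/pull/push now reuse this SSH command automatically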


Common flows
OºFLOW 1:º(simplest one: no one else pushed changes before our push)
local ─→ git status ─→ git add . ─→ºgit commitº──────────────────────────────────────────────────────────→ºgit push origin featureXº
edit           ^             ^              ^                                                                     ^
               │             │              │                                                                     │
               │         add file/s         │                                                           push to remote  repository
               │         to next commit     │                                                          (usually origin) and branch
               │                            │                                                          (featureX, master,...)      
           display changes               commit
           pending to commit             new version


OºFlow 2:ºsomeone else pushed changes before our push but there are no conflicts (each user edited different files)

local ─→ git status ─→ git add . ─→ºgit commitº─→ git pull ──────────────────────────────────────────────→ºgit push origin featureXº
edit                                               ^
                                                   │
                                         - git will abort and warn that changes have been pushed
                                           to the remote repository+branch if we try to skip this step.
                                         - Otherwise an automatic merge is done with our local
                                           changes and any other users' remote changes.

OºFlow 3:ºsomeone else pushed changes before our push, and there are conflicts (users edited one or more common files)

local ─→ git status ─→ git add . ─→ºgit commitº─→ git pull  ─→ "fix conflicts" ─→ git add ─→ git commit ─→ºgit push origin featureXº
edit                                                                  ^               ^
                                                                      │               │
                                                                      │         Tell git that
                                                                      │         conflicts were
                                                                      │         resolved
                                                                      │
                                                               manually edit
                                                               conflicting changes
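
Sketch of the "fix conflicts" step (file name is just an example):
$ git pull origin featureX
→ CONFLICT (content): Merge conflict in src/app.c
$ $EDITOR src/app.c        ← keep the desired side(s) between the markers:
                             <<<<<<< HEAD
                             ... our local change ...
                             =======
                             ... their remote change ...
                             >>>>>>> (remote commit)
$ git add src/app.c        ← tell git the conflict is resolved
$ git commit               ← concludes the merge
$ git push origin featureX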

OºFlow 4:ºAmend local commit
local → git status ─→ git add . ─→ºgit commitº─→ git commit --amend ─→ ... ─→ git commit ───────────────→ºgit push origin featureXº
edit  
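
Example:
$ git commit --amend -m "better message"  ← replace last commit's message
$ git commit --amend --no-edit            ← fold newly staged changes into the
                                            last commit, keeping its message
RºWARN:º only amend commits NOT pushed yet: amending rewrites history.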


OºGit-Flowº Meta-Flow using widely accepted branches rules to treat with
            common issues when releasing software and managing versions
            REF: @[https://nvie.com/posts/a-successful-git-branching-model/]
 ┌────────────────┬───────────────────────────────────
 │ Standardized   │ Intended use
 │ Branch names   │                                
 ├────────────────┼───────────────────────────────────
 │feature/...     │ merged back into main body of code
 │                │ when the developer/s are confident
 │                │ with the code quality.
 │                │ If asked to switch to another task just
 │                │ commit changes to this feature/... branch
 │                │ to continue later on.
 ├────────────────┼───────────────────────────────────
 │develop         │ Release Staging Area:
 │                │ Merge here feature/... completed features
 │                │ NOT yet been released.
 ├────────────────┼───────────────────────────────────
 │release         │ stable (release tagged branch)
 ├────────────────┼───────────────────────────────────
 │hotfix branches │ branches from a tagged release.
 │                │ Fix quickly, merge to release
 │                │ and tag in release with new minor version.
 │                │ Ideally never used since our released
 │                │ software has no bugs ;D 
 └────────────────┴───────────────────────────────────
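
Sketch of a feature round-trip under this model (branch names are examples):
$ git checkout develop
$ git checkout -b feature/login         ← start feature from develop
  ... edit/commit ...
$ git checkout develop
$ git merge --no-ff feature/login       ← --no-ff keeps the feature's commits
                                          grouped in history
$ git branch -d feature/login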

branching
Change branch (checkout)
$ git checkout -b newBranch       ← alt 1, -b: creates new local branch
$ git checkout    existingBranch  ← alt 2,   : switch to existing local branch 
$ git branch -av                  ←  List (-a)ll existing branches
$ git branch -d branchToDelete    ← -d: Delete branch

$ git checkout --track origin/featureX   ← Create new local branch "featureX" tracking the remote one
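
Example (remote/branch names are placeholders):
$ git fetch origin                       ← make sure remote refs are up to date
$ git checkout --track origin/featureX
$ git branch -vv                         ← verify: lists local branches and the
                                           remote branch each one tracks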

View Change History
$ git log -n 10           ← -n 10. See only 10 last commits.
$ git log -p path_to_file ← See log for file with line change details (-p: Patch applied)

Tags
$ git tag                 ← List tags
→ ...
→ v2.1.0-rc.2
→ v2.1.1
→ v2.1.2
→ ...
$ git tag -a v1.4 -m "..." ← Create annotated tag (recommended)
                                    ^^^^^^^^^
                                - stored as full objects in Git database.
                                - checksummed; contain the tagger name,
                                  email, and date; have a tagging message (-m).
                                - can be signed and verified with GPG.

$ git tag v1.4-lw          ← Create lightweight tag
                                    ^^^^^^^^^^^    
                                  - "alias" for a commit checksum stored in a file
                                  - No other info is kept.

$ git tag -a v1.2 9fceb02  ← Tag some commit in history

ºSharing Tagsº
WARN: git push command doesn’t transfer tags to remote servers. 

$ git push origin v1.5    ← Share/push tag to remote repo
$ git push origin --tags  ← Share/push all the tags
$ git tag -d v1.4-lw      ← Delete local tag (remote tags will persist)
$ git push origin --delete v1.4-lw    ← Delete remote tag. Alt 1
$ git push origin :refs/tags/v1.4-lw  ← Delete remote tag. Alt 2
                  ^^^^^^^^^^^^^^^^^^
                  null value before the colon is
                  being pushed to the remote tag name,
                  effectively deleting it.
$ git checkout v1.4-lw          ← Move back to (DETACHED) commit
$ git show-ref --tags    ← Map tag to commit
→ ...
→ 75509731d28ddbbb6f6cbec6e6b50aeaa413df69 refs/tags/v2.1.0-rc.2
→ 8fc0a3af313d9372fc9b8d3e5dc57b804df0588e refs/tags/v2.1.1
→ 3e1f5b0d4d081db7b40f9817c060ee7220a51633 refs/tags/v2.1.2
→ ...

Comparing diffs
TODO:


Filter-branch
- rewrite history in Git using several of the built-in filters that the command provides.   
  - rev-list command, which provides a way to list out revisions based on a range or criteria.
  - filter-branch: split a subdirectory into a separate repository
  - how to use filter-branch to delete a file from all versions in a repository 
    and change the email address on versions in Git history
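
Hedged sketches of the use cases above (file name and e-mails are hypothetical):

$ git filter-branch --index-filter \
    'git rm --cached --ignore-unmatch passwords.txt' \
    --prune-empty -- --all               ← delete a file from all revisions

$ git filter-branch --env-filter '
    if [ "$GIT_AUTHOR_EMAIL" = "old@example.com" ]; then
        GIT_AUTHOR_EMAIL="new@example.com"
    fi' -- --all                         ← rewrite author e-mail in history

$ git filter-branch --subdirectory-filter dir1 -- --all
                                         ← keep only dir1's history: basis for
                                           splitting it into a separate repo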
gitbase
Revert Changes
Debug Changes
grep/bisect/blame
REF: @[https://git-scm.com/book/en/v2/Appendix-C:-Git-Commands-Debugging]
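
git bisect sketch (tag and script names are examples):
$ git bisect start
$ git bisect bad  HEAD       ← current version is known to be broken
$ git bisect good v2.1.0     ← last version known to work
  (git checks out the midpoint: test it, mark it good/bad, repeat)
$ git bisect good            ← ... until git prints the first bad commit
$ git bisect reset           ← return to the original HEAD
$ git bisect run ./test.sh   ← alt: automate; exit 0 marks good,
                               non-zero (except 125 = skip) marks bad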
merge vs rebase vs cherry-pick
REF:
@[https://stackoverflow.com/questions/9339429/what-does-cherry-picking-a-commit-with-git-mean]
@[https://git-scm.com/docs/git-cherry-pick]


       A → B → C → D → E → HEAD branch01
               │ 
               └─→ H → I → J      branch02
       ───────────────────────────────────────────────────────────────────────
ºMERGE   º"mix" full list of commits

        A → B → C → D → E → HEAD branch01        $ git checkout branch01
                │             ↑                  $ git merge    branch02
                └─→ H → I → J ┘  branch02
       ───────────────────────────────────────────────────────────────────────
ºREBASE:º"Append" full list of commits to head

    A → B → C → D → E → → H → I → J →        HEAD branch01

                                             $ git checkout branch02
                                             $ git rebase   branch01  ← replay H,I,J on top of E
                                             $ git checkout branch01
                                             $ git merge    branch02  ← fast-forward branch01 to J

──────────────────────────────────────────────────────────────────────────────
ºCHERRY-PICK:º"Pick unique-commit" from branch and apply to another branch

    A → B → C → D →ºEº ← HEAD  branch01   
            │                                $ git checkout branch02
            └─→ H → I → J →ºEº  branch02     $ git cherry-pick -xºbranch01º
                                                               └┬┘ 
                                              - Useful if the "source" branch is
                                                public. -x appends a standardized
                                                "(cherry picked from commit ...)"
                                                line to the message, letting
                                                co-workers keep track of the
                                                origin of the commit and avoiding
                                                merge conflicts in the future
                                              - Notes attached to the commit do NOT
                                                follow the cherry-pick. Use
                                                $ git notes copy "from" "to"
Notes
@[http://alblue.bandlem.com/2011/11/git-tip-of-week-git-notes.html]
pretty branch print
@[https://stackoverflow.com/questions/1057564/pretty-git-branch-graphs]

$ git log --all --decorate --oneline --graph

$ git log --graph --abbrev-commit --decorate --date=relative --all
Gitea (Gogs)
painless self-hosted Git service
- Fork of gogs, since it was unmaintained.
Quick-clone
$ git clone --depth=1 ${URL_to_Git_repo}
            ^^^^^^^^^
            "fast clone"
            Create shallow clone with
            history truncated to the
            specified number of commits. 
            Implies --single-branch.
            To clone submodules shallowly,
            use also --shallow-submodules.


Quick-tag-clone

$ git clone --depth=1   --branch '1.3.2'   --single-branch ${URL_to_Git_repo}
                        ^^^^^^^^^^^^^^^^   ^^^^^^^^^^^^^^^
                        point to a branch   Clone only history
                        or tag              leading to the tip
                        (defaults to HEAD)  of a single branch




Git LFS (Large Files extension)
- Git Large File Storage (LFS) replaces large files such as audio samples,
  videos, datasets, and graphics with text pointers inside Git, while storing 
  the file contents on a remote server like GitHub.com or GitHub Enterprise
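
Typical usage (pattern and file names are examples):
$ git lfs install            ← once per machine: sets up the Git LFS hooks
$ git lfs track "*.mp4"      ← adds the pattern to .gitattributes
$ git add .gitattributes video01.mp4
$ git commit -m "add video via LFS"
$ git push origin master     ← pointer file goes to Git, content to the LFS store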
4 secrets encryption tools
@[https://www.linuxtoday.com/security/4-secrets-management-tools-for-git-encryption-190219145031.html]
Encrypt Git repos
@[https://www.atareao.es/como/cifrado-de-repositorios-git/]
Garbage
Collector
-  Git occasionally does garbage collection as part of its normal operation, 
by invoking git gc --auto. The pre-auto-gc hook is invoked just before the 
garbage collection takes place, and can be used to notify you that this is 
happening, or to abort the collection if now isn’t a good time.
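
Hedged pre-auto-gc hook sketch (the "work hours" policy is just an example):

#!/bin/sh
# .git/hooks/pre-auto-gc  (must be executable)
hour=$(date +%H)
if [ "$hour" -ge 9 ] && [ "$hour" -lt 18 ]; then
  echo "pre-auto-gc: postponing automatic gc until after work hours" >&2
  exit 1    # non-zero exit aborts the collection
fi
exit 0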
Scalable
Git VFS
@[https://github.com/Microsoft/VFSForGit]
@[https://vfsforgit.org/]
- Microsoft project to enable managing massive Git 
  repositories possible. (hundreds of Gigabytes).
GPG signed
commits
@[https://dev.to/sdmg15/gpg-signing-your-git-commits-3epc]
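
Quick recipe (the key ID is hypothetical):
$ gpg --list-secret-keys --keyid-format=long  ← locate the signing key ID
$ git config --global user.signingkey 3AA5C34371567BD2
$ git commit -S -m "signed commit"            ← sign a single commit
$ git config --global commit.gpgsign true     ← sign all commits by default
$ git log --show-signature -1                 ← verify the signature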
Git Hooks
Client Hooks
@[https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks]

Client-Side Hooks
- not copied when you clone a repository:
  to enforce a policy, do it on the server side
- committing-workflow hooks:
  -ºpre-commitºhook:
    - First script to be executed.
    - used to inspect the snapshot that's about to be committed.
      - Check you’ve NOT forgotten something
      - make sure tests run
      - Exiting non-zero from this hook aborts the commit
        (can be bypassed with the git commit --no-verify flag).
        See the minimal sketch after this list.
  -ºprepare-commit-msgºhook:
    - Params:
      - commit_message_path (template for final commit message)
      - type of commit
      - commit SHA-1 (if this is an amended commit)
    - run before the commit message editor is fired up 
      but after the default message is created.
    - It lets you edit the default message before the
      commit author sees it.
    - Used for non-normal-commits with auto-generated messages
      - templated commit messages
      - merge commits
      - squashed commits
      - amended commits
  -ºcommit-msgºhook:
    - Params:
      - commit_message_path (written by the developer)
  -ºpost-commitºhook:
    - (you can easily get the last commit by running git log -1 HEAD)
    - Generally, this script is used for notification or something similar.

-ºemail-workflowº hooks:
  - invoked by ºgit amº
                ^^^^^^
                Apply a series of patches from a mailbox
                prepared by git format-patch

  -ºapplypatch-msgº: 
    - Params:
      - temp_file_path containing the proposed commit message.
  -ºpre-applypatchº:
    - confusingly, it is run after the patch is 
      applied but before a commit is made.
    - can be used to inspect the snapshot before making the commit:
      run tests, inspect the working tree, etc.
  -ºpost-applypatchº:
    - runs after the commit is made.
    - Useful to notify a group or the author of the patch
      you pulled in that you’ve done so. 

- Others:
  -ºpre-rebaseºhook:
    - runs before you rebase anything
    - Can be used to disallow rebasing any commits
      that have already been pushed.
  -ºpost-rewriteºhook:
    - Params:
      - command_that_triggered_the_rewrite: 
        - It receives a list of rewrites on stdin.
    - run by commands that replace commits
      such as 'git commit --amend' and 'git rebase'
      (though not by git filter-branch).
    - This hook has many of the same uses as the
      post-checkout and post-merge hooks.
  -ºpost-checkoutºhook:
    - Runs after successful checkout
    - you can use it to set up your working directory
      properly for your project environment.
      This may mean moving in large binary files that 
      you don't want source controlled, auto-generating
      documentation, or something along those lines.
  -ºpost-mergeºhook:
    - runs after a successful merge command.
    - You can use it to restore data in the working tree
      that Git can't track, such as permissions data.
      It can likewise validate the presence of files 
      external to Git control that you may want copied 
      in when the working tree changes.
  -ºpre-pushºhook:
    - runs during git push, after the remote refs
      have been updated but before any objects have
      been transferred.
    - It receives the name and location of the remote
      as parameters, and a list of to-be-updated refs
      through stdin.
    - You can use it to validate a set of ref updates before
      a push occurs (a non-zero exit code will abort the push).
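
Minimal pre-commit hook sketch (the grep pattern is just an example):

#!/bin/sh
# .git/hooks/pre-commit  (must be executable)
if git diff --cached | grep -qE '^\+.*console\.log'; then
  echo "pre-commit: remove console.log leftovers before committing" >&2
  exit 1   # non-zero aborts the commit (bypass: git commit --no-verify)
fi
exit 0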
Server-Side Hooks
(system administrator only)
- Useful to enforce nearly any kind of policy for your project.

- can exit non-zero at any time to reject the push 
  as well as print an error message back to the client; 

- you can set up a push policy that's as complex as you wish.

ºpre-receiveº hook:
 - first script to run 
 - takes a list of references that are being pushed from stdin;
   if it exits non-zero, none of them are accepted. 
 - You can use this hook to do things like make sure none of the updated 
   references are non-fast-forwards, or to do access control for all the refs 
   and files they’re modifying with the push.

ºupdateº
 - very similar to the pre-receive script, except that 
  ºit's run once for each branch the pusher is trying to updateº.
 - If the pusher is trying to push to multiple branches, pre-receive runs only once,
   whereas update runs once per branch they're pushing to.
 - Instead of reading from stdin, this script takes three arguments:
   - the name of the reference (branch),
   - the SHA-1 that reference pointed to before the push, 
   - the SHA-1 the user is trying to push.
 - If the update script exits non-zero, only that reference is rejected; 
   other references can still be updated. (See the sketch after this list.)

ºpost-receiveº
 - runs after the entire process is completed 
 - can be used to update other services or notify users.
 - It takes the same stdin data as the pre-receive hook.
 - Examples include emailing a list, notifying a CI server,
   or updating a ticket-tracking system 
   You can even parse the commit messages to see if any
   tickets need to be opened, modified, or closed.
 - This script can't stop the push process, but the client 
   doesn't disconnect until it has completed, so be careful
   if you try to do anything that may take a long time.
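
Minimal 'update' hook sketch: protect 'master' from non-fast-forward pushes
(branch name is just an example; the all-zero $oldrev of a newly created
branch is omitted for brevity):

#!/bin/sh
# hooks/update (server side). Args: refname oldrev newrev
refname="$1" ; oldrev="$2" ; newrev="$3"
if [ "$refname" = "refs/heads/master" ]; then
  if [ "$(git merge-base "$oldrev" "$newrev")" != "$oldrev" ]; then
    echo "update: non-fast-forward push to master rejected" >&2
    exit 1   # only this ref is rejected; other refs can still be updated
  fi
fi
exit 0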
Advanced
revert/rerere
Submodules
Subtrees
- TODO: how subtrees differ from submodules
- how to use the subtree to create a new project from split content
Interactive rebase
- how to use rebase functionality to alter commits in various ways.
- how to squash multiple commits down into one. 
Supporting files
- Git attributes file and how it can be used to identify binary files,
  specify line endings for file types, implement custom filters, and 
  have Git ignore specific file paths during merging.
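
Hedged .gitattributes sketch covering some of those uses (patterns are examples):

$ cat > .gitattributes <<'EOF'
# mark as binary: no text diff, no end-of-line conversion
*.png   binary
# normalize line endings per file type
*.sh    text eol=lf
*.bat   text eol=crlf
# use the built-in C/C++ diff hunk-header driver
*.c     diff=cpp
EOF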
Cregit token level blame
@[https://www.linux.com/blog/2018/11/cregit-token-level-blame-information-linux-kernel]
cregit: Token-Level Blame Information for the Linux Kernel
git blame tracks lines, not tokens; cregit blames at token level (inside a line)
Implementations
JGit (client)
@[https://wiki.eclipse.org/JGit/User_Guide]
- Eclipse Distribution License - v 1.0
- lightweight, pure Java library implementing the Git version control system
  - repository access routines
  - network protocols
  - core version control algorithms

- suitable for embedding in any Java application
Gerrit (by Google) 
@[https://www.gerritcodereview.com/index.html]
Gerrit is a Git Server that provides:
- Code Review:
  - One dev. writes code, another one is asked to review it.
    (Goal is cooperation, not fault-finding)
  @[https://docs.google.com/presentation/d/1C73UgQdzZDw0gzpaEqIC6SPujZJhqamyqO1XOHjH-uk/]
  - UI for seeing changes.
  - Voting panel.


- Access Control on the Git Repositories.
- Extensibility through Java plugins.
@[https://www.gerritcodereview.com/plugins.html]


Gerrit does NOT provide:
- Code Browsing
- Code Search
- Project Wiki
- Issue Tracking
- Continuous Build
- Code Analyzers
- Style Checkers
GitPython
@[https://gitpython.readthedocs.io/en/stable/tutorial.html]
Non-Classified
git-pw
@[http://jk.ozlabs.org/projects/patchwork/]
@[https://www.collabora.com/news-and-blog/blog/2019/04/18/quick-hack-git-pw/]
- git-pw requires patchwork v2.0, since it uses the 
  new REST API and other improvements, such as understanding
  the difference between patches, series and cover letters,
  to know exactly what to try and apply.

- python-based tool that integrates git and patchwork.

  $ pip install --user git-pw

CONFIG:
  $ git config pw.server https://patchwork.kernel.org/api/1.1
  $ git config pw.token YOUR_USER_TOKEN_HERE

ºDaily work exampleº
finding and applying series
- Alternative 1: Manually
  - We could use patchwork web UI search engine for it.
    - Go to "linux-rockchip" project 
    - click on _"Show patches with" to access the filter menu.
    - filter by submitter. 

- Alternative 2: git-pw (REST API wrapper)
  - $ git-pw --project linux-rockchip series list "dynamically"
    → ID    Date         Name              Version   Submitter
    → 95139 a day ago    Add support ...   3         Gaël PORTAY
    → 93875 3 days ago   Add support ...   2         Gaël PORTAY
    → 3039  8 months ago Add support ...   1         Enric Balletbo i Serra


  - Get some more info:
    $ git-pw series show 95139
    → Property    Value
    → ID          95139
    → Date        2019-03-21T23:14:35
    → Name        Add support for drm/rockchip to dynamically control the DDR frequency.
    → URL         https://patchwork.kernel.org/project/linux-rockchip/list/?series=95139
    → Submitter   Gaël PORTAY
    → Project     Rockchip SoC list
    → Version     3
    → Received    5 of 5
    → Complete    True
    → Cover       10864561 [v3,0/5] Add support ....
    → Patches     10864575 [v3,1/5] devfreq: rockchip-dfi: Move GRF definitions to a common place.
    →     10864579 [v3,2/5] : devfreq: rk3399_dmc: Add rockchip, pmu phandle.
    →     10864589 [v3,3/5] devfreq: rk3399_dmc: Pass ODT and auto power down parameters to TF-A.
    →     10864591 [v3,4/5] arm64: dts: rk3399: Add dfi and dmc nodes.
    →     10864585 [v3,5/5] arm64: dts: rockchip: Enable dmc and dfi nodes on gru.


  - Applying the entire series (or at least trying to):
    $ git-pw series apply 95139
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    fetch all the patches in the series, and apply them in the right order.
GIT Commit Standard Emojis
@[https://gist.github.com/parmentf/035de27d6ed1dce0b36a]
ºCommit type               Emoji                    Graphº
 Initial commit           :tada:                      🎉 
 Version tag              :bookmark:                  🔖 
 New feature              :sparkles:                  ✨ 
 Bugfix                   :bug:                       🐛 
 Metadata                 :card_index:                📇 
 Documentation            :books:                     📚 
 Documenting src          :bulb:                      💡 
 Performance              :racehorse:                 🐎 
 Cosmetic                 :lipstick:                  💄 
 Tests                    :rotating_light:            🚨 
 Adding a test            :white_check_mark:          ✅ 
 Make a test pass        :heavy_check_mark:           ✔️  
 General update           :zap:                       ⚡️ 
 Improve format           :art:                       🎨 
 /structure                                              
 Refactor code            :hammer:                    🔨 
 Removing stuff           :fire:                      🔥 
 CI                       :green_heart:               💚 
 Security                 :lock:                      🔒 
 Upgrading deps.         :arrow_up:                   ⬆️  
 Downgrad. deps.         :arrow_down:                 ⬇️  
 Lint                     :shirt:                     👕 
 Translation              :alien:                     👽 
 Text                     :pencil:                    📝 
 Critical hotfix          :ambulance:                 🚑 
 Deploying stuff          :rocket:                    🚀 
 Work in progress         :construction:              🚧 
 Adding CI build system   :construction_worker:       👷 
 Analytics|tracking code  :chart_with_upwards_trend:  📈 
 Removing a dependency    :heavy_minus_sign:          ➖ 
 Adding a dependency      :heavy_plus_sign:           ➕ 
 Docker                   :whale:                     🐳 
 Configuration files      :wrench:                    🔧 
 Package.json in JS       :package:                   📦 
 Merging branches         :twisted_rightwards_arrows: 🔀 
 Bad code / need improv.  :hankey:                    💩 
 Reverting changes        :rewind:                    ⏪ 
 Breaking changes         :boom:                      💥 
 Code review changes      :ok_hand:                   👌 
 Accessibility            :wheelchair:                ♿️ 
 Move/rename repository  :truck:                      🚚 
GitHub: Custom Bug/Feature-request templates

$ cat .github/ISSUE_TEMPLATE/bug_report.md
 | ---
 | name: Bug report
 | about: Create a report to help us improve
 | title: ''
 | labels: ''
 | assignees: ''
 | 
 | ---
 | 
 | **Describe the bug**
 | A clear and concise description of what the bug is.
 | 
 | **To Reproduce**
 | Steps to reproduce the behavior:
 | 1. Go to '...'
 | 2. Click on '....'
 | 3. Scroll down to '....'
 | 4. See error
 | 
 | **Expected behavior**
 | A clear and concise description of what you expected to happen.
 | 
 | ...
  
$ cat .github/ISSUE_TEMPLATE/feature_request.md
  | ---
  | name: Feature request
  | about: Suggest an idea for this project
  | title: ''
  | labels: ''
  | assignees: ''
  | 
  | ---
  | 
  | **Is your feature request related to a problem? Please describe.**
  | A clear and concise description of what the problem is.... 
  | 
  | **Describe the solution you'd like**
  | A clear and concise description of what you want to happen.
  | 
  | **Describe alternatives you've considered**
  | A clear and concise description of any alternative solutions or features you've considered.
  | 
  | **Additional context**
  | Add any other context or screenshots about the feature request here.
Shell
Reference Script
Source: @[https://github.com/earizon/utility_shell_scripts/blob/master/scriptTemplate.sh]

#!/bin/bash

OUTPUT="$(basename $0).log"
exec 3>&1   # Copy current STDOUT to &3
exec 4>&2   # Copy current STDERR to &4
echo "Redirecting STDOUT/STDERR to $OUTPUT"
# exec 1>$OUTPUT 2>&1
# REF: https://unix.stackexchange.com/questions/145651/using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time
exec &> >(tee -a "$OUTPUT") # Redirect STDOUT/STDERR to file
exec 2>&1  
echo "This will be logged to the file and to the screen"


GLOBAL_EXIT_STATUS=0
WD=$(pwd)

LOCK="/tmp/exampleLock"
function funCleanUp() {
  set +e
  echo "Cleaning resource and exiting"
  rm -f $LOCK  
}
trap funCleanUp EXIT   # ← Clean any resource on exit

if [ ! ${STOP_ON_ERR_MSG} ] ; then
  STOP_ON_ERR_MSG=true
fi
ERR_MSG=""
function funThrow {
    if [[ $STOP_ON_ERR_MSG != false ]] ; then
      echo "ERR_MSG DETECTED: Aborting now due to " 
      echo -e ${ERR_MSG} 
      if [[ $1 != "" ]]; then
          GLOBAL_EXIT_STATUS=$1 ; 
      elif [[ $GLOBAL_EXIT_STATUS == 0 ]]; then
          GLOBAL_EXIT_STATUS=1 ;
      fi
      exit $GLOBAL_EXIT_STATUS
    else
      echo "ERR_MSG DETECTED: "
      echo -e ${ERR_MSG}
      echo "WARN: CONTINUING WITH ERR_MSGS "

      GLOBAL_EXIT_STATUS=1 ;
    fi
    ERR_MSG=""
}

while [  $#  -gt 0 ]; do  # $#  number of arguments
  case "$1" in
    -l|--list)
      echo "list arg"
      shift 1  # ºconsume argº        ←   $# = $#-1
      ;;
    -p|--port)
      export PORT="${2}:"
      shift 2  #  consume arg+value   ←   $# = $#-2
      ;;
    -h|--host)
      export HOST="${2}:"
      shift 2  #  consume arg+value   ←   $# = $#-2 
      ;;
    *)
      echo "non-recognised option '$1'"
      shift 1  #  consume arg         ←   $# = $#-1 
  esac
done
set -e # exit on ERR_MSG

function preChecks() {
  # Check that ENV.VARs and parsed arguments are in place
  if [[ ! ${HOME} ]] ; then ERR_MSG="HOME ENV.VAR NOT DEFINED" ; funThrow 41 ; fi
  if [[ ! ${PORT} ]] ; then ERR_MSG="PORT ENV.VAR NOT DEFINED" ; funThrow 42 ; fi
  if [[ ! ${HOST} ]] ; then ERR_MSG="HOST ENV.VAR NOT DEFINED" ; funThrow 43 ; fi
  set -u # From here on, ANY UNDEFINED VARIABLE IS CONSIDERED AN ERROR.
}

function funSTEP1 {
  echo "STEP 1: $HOME, PORT:$PORT, HOST: $HOST"
}
function funSTEP2 { # throw ERR_MSG
  ERR_MSG="My favourite ERROR@funSTEP2"
  funThrow 2
}


cd $WD ; preChecks
cd $WD ; funSTEP1
cd $WD ; funSTEP2

echo "Exiting with status:$GLOBAL_EXIT_STATUS"
exit $GLOBAL_EXIT_STATUS
Init Vars
complete Shell parameter expansion list available at:
- @[http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html]
var1=$1 # init var $1 with first param
var1=$# # init var $1 with number of params
var1=$! # init var with PID of last executed command.
var1=${parameter:-word} # init $var1 with $parameter value or 'word'(or word expansion) if parameter unset or null
var1=${parameter:=word} # init $var1 with $parameter value or 'word'(or word expansion) if parameter unset or null. Also asign word to $parameter
var1=${parameter:?word} # If parameter is null/unset word (or word expansion) is written to the STDERR and exits.
var1=${parameter:+word} # If parameter is null or unset, nothing is substituted, otherwise the expansion of word is substituted.
var1=${parameter:offset}        # Substring expansion: expands to up to
var1=${parameter:offset:length} # 'length' characters of the value of
                                # $parameter starting at 'offset'.
                                # If parameter is '@', an indexed array
                                # subscripted by '@' or '*', or an associative
                                # array name, the results differ (see manual).
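
Quick demo of the expansions above (values are examples):

parameter="abcdefgh"
echo "${parameter:2}"     # → cdefgh   (from offset 2 to the end)
echo "${parameter:2:3}"   # → cde      (3 chars from offset 2)
echo "${parameter: -3}"   # → fgh      (note the space before a negative offset)
unset opt
echo "${opt:-default}"    # → default  ($opt stays unset)
echo "${opt:=default}"    # → default  (and $opt is now assigned)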

Parse arguments
#Oº$#º number of arguments
while [Oº$#º -gt 0 ]; do
  echo $1
  case "$1" in
    -l|--list)
      echo "list arg"
      shift 1  # ºconsume arg         ← Oº$# = $#-1º
      ;;
    -p|--port)
      export PORT="${2}:"
      echo "port: $PORT"
      shift 2  # ºconsume arg+valueº  ← Oº$# = $#-2º
      ;;
    *)
      echo "non-recognised option"
      shift 1  # ºconsume argº        ← Oº$# = $#-1º
  esac
done
Temporal Files
TMP_FIL=$(mktemp)  
TMP_DIR=$(mktemp --directory)

Barrier synchronization
UUID:[9737647d-58dc-4999-8db4-4cd3c2682edd] 
Wait for background jobs to complete example:
(
  ( sleep 3 ; echo "job 1 ended" ) &
  ( sleep 1 ; echo "job 2 ended" ) &
  ( sleep 1 ; echo "job 3 ended" ) &
  ( sleep 9 ; echo "job 4 ended" ) &
  wait            # alt.1: Wait for ALL background jobs to complete
# wait %1 %2 %3   # alt.2: Wait for jobs 1,2,3. Do not wait for job 4
  echo "All subjobs ended"
) &
bash REPL loop
REPL stands for Read-eval-print loop: More info at:
@[https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop]
    # Define the list of a menu item
   ºselectºOºlanguageººinº C# Java PHP Python Bash Exit
   ºdoº
      #Print the selected value
      if [[ Oº$languageº == "Exit" ]] ; then
        exit 0
      fi
      echo "Selected language is $language"
   ºdoneº
trap: Exit script cleanly
@[https://www.putorius.net/using-trap-to-exit-bash-scripts-cleanly.html]
Bash-it
@[https://www.tecmint.com/bash-it-control-shell-scripts-aliases-in-linux/]
- bundle of community Bash commands and scripts for Bash 3.2+,
  which comes with autocompletion, aliases, custom functions, ....
- It offers a useful framework for developing, maintaining and
  using shell scripts and custom commands for your daily work.
Bash 4+ Maps
#!/usr/bin/env bash

declare -A animals                             # STEP !: declare associative array "animals"
animals=( ["key1"]="value1" ["key2"]="value2") # Init with some elements


Then use them just like normal arrays. Use animals['key']='value' to set value, "${animals[@]}" to expand the values, and "${!animals[@]}" (notice the !) to expand the keys. Don't forget to quote them:

echo "${animals[moo]}"
for sound in "${!animals[@]}"; do echo "$sound - ${animals[$sound]}"; done

Bash 3
test
(man test summary from GNU coreutils)

test

  EXPRESSION  # ← EXPRESSION true/false sets the exit status.
[ EXPRESSION ]

-n STRING                  # STRING length > 0
                           # (or just STRING)
-z STRING                  # STRING length == 0
STRING1 = STRING2          # String equality
STRING1 != STRING2         # String in-equality


INTEGER1 -eq INTEGER2      # ==
INTEGER1 -ge INTEGER2      # >=
INTEGER1 -gt INTEGER2      # >
INTEGER1 -le INTEGER2      # <=
INTEGER1 -lt INTEGER2      # <
INTEGER1 -ne INTEGER2      # !=
^^^^^^^^
BºNOTE:º INTEGER can be -l STRING (length of STRING)

ºFILE TEST/COMPARISIONº
RºWARN:º Except -h/-L, all FILE-related tests dereference symbolic links.
-e FILE                    #ºFILE existsº
-f FILE                    # FILE exists and is aºregular fileº
-h FILE                    # FILE exists and is aºsymbolic linkº (same as -L)
-L FILE                    #                                     (same as -h)
-S FILE                    # FILE exists and is aºsocketº
-p FILE                    #ºFILE exists and is a named pipeº
-s FILE                    # FILE exists and has aºsize greater than zeroº


-r FILE                    # FILE exists andºread  permissionºis granted
-w FILE                    # FILE exists andºwrite permissionºis granted
-x FILE                    # FILE exists andºexec  permissionºis granted

FILE1  -ef FILE2           # ← same device and inode numbers
FILE1 -nt FILE2            # FILE1 is newer (modification date) than FILE2
FILE1 -ot FILE2            # FILE1 is older (modification date) than FILE2
-b FILE                    # FILE exists and is block special
-c FILE                    # FILE exists and is character special
-d FILE                    #ºFILE exists and is a directoryº
-k FILE                    # FILE exists and has its sticky bit set


-g FILE                    # FILE exists and is set-group-ID
-G FILE                    # FILE exists and is owned by the effective group ID
-O FILE                    # FILE exists and is owned by the effective user ID
-t FD                      # file descriptor FD is opened on a terminal
-u FILE                    # FILE exists and its set-user-ID bit is set

ºBOOLEAN OPERATORSº
RºWARNº: inherently ambiguous. Use:
EXPRESSION1 -a EXPRESSION2 # AND # 'test EXPR1 && test EXPR2' is preferred
EXPRESSION1 -o EXPRESSION2 # OR  # 'test EXPR1 || test EXPR2' is preferred


RºWARN,WARN,WARNº: your shell may have its own version of test and/or '[',
                   which usually supersedes the version described here.
                   Use /usr/bin/test to force non-shell usage.

Full documentation at: @[https://www.gnu.org/software/coreutils/]


GitOps
Single Src of Truth
@[https://www.weave.works/blog/gitops-operations-by-pull-request]
GitOps is implemented by using the Git distributed version control system 
(DVCS) as aºsingle source of truthºfor declarative infrastructure and 
applications. Every developer within a team can issue pull requests against a 
Git repository, and when merged, a "diff and sync" tool detects a difference 
between the intended and actual state of the system. Tooling can then be 
triggered to update and synchronise the infrastructure to the intended state.
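
Hedged sketch of such a "diff and sync" loop, assuming a Kubernetes target,
a kubectl-based toolchain and a config repo cloned at /srv/deploy (all
assumptions, not part of the GitOps definition itself):

#!/bin/bash
set -e
git -C /srv/deploy pull --ff-only            # refresh intended state from Git
if ! kubectl diff -f /srv/deploy/manifests/ >/dev/null; then
  # kubectl diff exits non-zero when actual state differs from intended one
  kubectl apply -f /srv/deploy/manifests/    # synchronise cluster to Git
fi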
Jenkins
External Links
@[https://jenkins.io/doc/]
@[https://jenkins.io/doc/book/]
@[https://jenkins.io/user-handbook.pdf]
@[https://github.com/sahilsk/awesome-jenkins]

- @[https://jenkins.io/doc/book/using/using-credentials/]         Using credentials
- @[https://jenkins.io/doc/book/pipeline/running-pipelines]       Running Pipelines
- @[https://jenkins.io/doc/book/pipeline/multibranch]             Branches and Pull Requests
- @[https://jenkins.io/doc/book/pipeline/docker]                  Using Docker with Pipeline
- @[https://jenkins.io/doc/book/pipeline/shared-libraries]        Extending with Shared Libraries
- @[https://jenkins.io/doc/book/pipeline/development]             Pipeline Development Tools
- @[https://jenkins.io/doc/book/pipeline/syntax]                  Pipeline Syntax
- @[https://jenkins.io/doc/book/pipeline/pipeline-best-practices] Pipeline Best Practices
- @[https://jenkins.io/doc/book/pipeline/scaling-pipeline]        Scaling Pipelines
- @[https://jenkins.io/doc/book/blueocean]                        Blue Ocean
- @[https://jenkins.io/doc/book/blueocean/getting-started]        Getting started with Blue Ocean
- @[https://jenkins.io/doc/book/blueocean/creating-pipelines]     Creating a Pipeline
- @[https://jenkins.io/doc/book/blueocean/dashboard]              Dashboard
- @[https://jenkins.io/doc/book/blueocean/activity]               Activity View
- @[https://jenkins.io/doc/book/blueocean/pipeline-run-details]   Pipeline Run Details View
- @[https://jenkins.io/doc/book/blueocean/pipeline-editor]        Pipeline Editor
- @[https://jenkins.io/doc/book/managing]                         Managing Jenkins
- @[https://jenkins.io/doc/book/managing/system-configuration]    Configuring the System
- @[https://jenkins.io/doc/book/managing/security]                Managing Security
- @[https://jenkins.io/doc/book/managing/tools]                   Managing Tools
- @[https://jenkins.io/doc/book/managing/plugins]                 Managing Plugins
- @[https://jenkins.io/doc/book/managing/cli]                     Jenkins CLI
- @[https://jenkins.io/doc/book/managing/script-console]          Script Console
- @[https://jenkins.io/doc/book/managing/nodes]                   Managing Nodes
- @[https://jenkins.io/doc/book/managing/script-approval]         In-process Script Approval
- @[https://jenkins.io/doc/book/managing/users]                   Managing Users
- @[https://jenkins.io/doc/book/system-administration]            System Administration
- @[https://jenkins.io/doc/book/system-administration/backing-up] Backing-up/Restoring Jenkins
- @[https://jenkins.io/doc/book/system-administration/monitoring] Monitoring Jenkins
- @[https://jenkins.io/doc/book/system-administration/security]   Securing Jenkins
- @[https://jenkins.io/doc/book/system-administration/with-chef]  Managing Jenkins with Chef
- @[https://jenkins.io/doc/book/system-administration/with-puppet]Managing Jenkins with Puppet

Pipeline injected ENV.VARS
- full list of ENV.VARs:
  ${BASE_JENKINS_URL}/pipeline-syntax/globals#env

$env.BUILD_ID       :
$env.BUILD_NUMBER

$env.BUILD_TAG      : String of "jenkins-${JOB_NAME}-${BUILD_NUMBER}".
                      Useful to subclassify resource/jar/etc output artifacts

$env.BUILD_URL      : where the results of this build can be found
                      Ex.: http://buildserver/jenkins/job/MyJobName/17/

$env.EXECUTOR_NUMBER: Unique number ID for current executor in same machine

$env.JAVA_HOME      : JAVA_HOME configured for a given job

$env.JENKINS_URL    :
$env.JOB_NAME       : Name of the project of this build
$env.NODE_NAME      : 'master', 'slave01',...
$env.WORKSPACE      : absolute path for workspace
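
Sketch of a job shell step using these injected vars ('target/' dir and the
archive name are hypothetical):

#!/bin/sh
tar czf "output-${BUILD_TAG}.tar.gz" target/   # BUILD_TAG is unique per job+build
echo "Console log will be available at: ${BUILD_URL}console"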
Dockerized Jenkins
    docker run \
      --rm \
      -u root \
      -p 8080:8080 \
      -v jenkins-data:/var/jenkins_home \             ← if the 'jenkins-data' Docker volume
      \                                                  doesn't exist it will be created
      \
      -v /var/run/docker.sock:/var/run/docker.sock \  ← Jenkins need control of Docker to
      \                                                 launch new Docker instances during
      \                                                 the build process
      -v "$HOME":/home \
      --name jenkins01 \                              ← Allows "entering" the container with:
      jenkinsci/blueocean                               $ docker exec -it jenkins01 bash
Export/import jobs

@[https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI]

********************************
* ALT 1: Using jenkins-cli.jar:*
********************************
  ºPRE-REQUISITES:º
   - jenkins-cli.jar version must match Server version
   - jnlp ports need to be open

  JENKINS_CLI="java -jar ${JENKINS_HOME}/war/WEB-INF/jenkins-cli.jar -s ${SERVER_URL}"
  ${JENKINS_CLI}    get-job job01 > job01.xml
  ${JENKINS_CLI} create-job job01 < job01.xml
                                    ^^^^^^^^^
                                - Can be stored in git,...
  RºWARN:º
  - REF: @[https://stackoverflow.com/questions/8424228/export-import-jobs-in-jenkins]
    There are issues with bare naked ampersands in the XML such as
    when you have & in Groovy code.


****************
* ALT 2: CURL  *
****************
SERVER_URL = "http://..."                      # ← Without Authentication.
SERVER_URL = "http://${USER}:${API_TOKEN}@..." # ← With    Authentication.

$ curl -s ${SERVER_URL}/job/JOBNAME/config.xml > job01.xml #ºExportº
                           º^^^^^^^^^^^^^^^^^^º

$ curl -X POST "${SERVER_URL}/createItem?name=JOBNAME" \        #ºImportº
       --header "Content-Type: application/xml" --data-binary @job01.xml

*********************
* ALT 3: Filesystem *
* (backup)          *
*********************
tar cjf _var_lib_jenkins_jobs.tar.bz2 /var/lib/jenkins/jobs
Pipelines
Jenkinsfile
REF: @[https://jenkins.io/doc/book/pipeline/jenkinsfile/]
     @[https://jenkins.io/doc/pipeline/steps/] 
     Reference of (hundreds of) Plugins compatible with Pipeline 
Commented Declarative Syntax Example
    pipeline {
       environment {
           T1 = 'development' ←······ Env.var with global visibility
           CC = """${sh(      ←······ Env.var set from shell STDOUT.
              ºreturnStdout:ºtrue, ←· a trailing newline is appended;
               script: 'echo "clang"'  appending .trim() removes it.
               )}"""

        AWS_ACCESS_KEY_ID     =  ←··· Secret management
         ºcredentialsº('aws-key-id')← Protected by Jenkins.
        AWS_SECRET_ACCESS_KEY =
         ºcredentialsº('...')

       }

       parameters {  ←··············· allows modifyingºat runtimeº;
         string(name: 'Greeting',     read back as ${params.Greeting}
                defaultValue: 'Hello',
                description: 'Hi!')
       }

      agent any    ←················· allocate anºexecutor and workspaceº
                                      It ensures that the src. repo. is imported to
                                      the workspace for following stages
      stages {
        stage('Build') { ←··········· transpile/compile/package/... using
                                      (make, maven, gradle, ...) plugin

            environment {    ←······· Env.var with local stage visibility,
                                      also available to invoked shell scripts.
              msg1 = "Building..."
              EXIT = """←············ Init to returned status code from shell
              ${sh(                   execution.
               ºreturnStatus:ºtrue,
                script: 'exit 1'
              )}"""
            }
            steps {
            echo "º${msg1}º:..."←···· shell like interpolation for double-coutes
                sh 'printenv'   ←···· msg1 and EXIT available here
                sshagent (
                  credentials: ['key1']   ←····┬─ ssh with help of agent
                )                              │  (ssh-agent plugin needed)
                {                              │
                   sh 'ssh user@remoteIP' ←····┘
                }

            }
        }
        stage('Test') {
          steps {
              echo 'Testing..'
          }
        }
        stage('Deploy') {
          when {
            expression {
              currentBuild.result == null
           || currentBuild.result == 'SUCCESS'
            }
          }
          steps {
              sh 'make publish'
          }
        }
      }
      post { ←······················· BºHandling errorsº
          always {
              junit '**/target/*.xml'
          }
         ºfailureº{
              mail to:
                 team@example.com,
              subject: '...'
          }
          unstable { ...  }
          success  { ...  }
          changed  { ...  }
      }
    }

──────────────   ────────────   ────────────────────────────
    INPUT      →  PROCESSING  →  OUTPUT
──────────────   ────────────   ────────────────────────────
Jenkinsfile       Jenkins       -ºarchived built artifactsº
                                -ºtest resultsº
                                -ºfull console outputº


For complex secrets (SSH keys, binary secrets,...)
use the related Snippet Generators:
  GENERATOR             PARAMS
- SSH User Private Key  - Key File Variable
                        - Passphrase Variable
                        - Username Variable
────────────────────────────────────────────────────────────
- Credentials           SSH priv/pub keys stored in Jenkins.
────────────────────────────────────────────────────────────
- (PKCS#12) Certificate - Keystore Variable
                          Jenkins temporarily assigns it to the
                          secure location of the certificate's
                          keystore
                        - Password Variable (Opt)
                        - Alias Variable (Opt)
                        - Credentials: Cert.credentials stored
                          in Jenkins. The value of this field
                          is the credential ID, which Jenkins
                          writes out to the generated snippet.
────────────────────────────────────────────────────────────
- Docker client cert    - Handle Docker Host Cert.Auth.



Multiagent
Useful for multi-target builds/tests/...
pipeline {
 ºagent noneº
  stages {
    stage('clone') {
      // REF: @[https://jenkins.io/doc/pipeline/steps/workflow-scm-step/]
      checkout Gºscmº ←··········· checkout code from scm ("git clone ...")
                           Gºscmº: special var. telling to use the same
                                   repository/revision used to checkout
                                   (git clone) the Jenkinsfile
      checkout poll: false,
               scm: [
                 $class: 'GitSCM',
                 branches: [[name: 'dev']],
                 doGenerateSubmoduleConfigurations: false,
                 extensions: [],
                 submoduleCfg: [],
                 userRemoteConfigs: [
                   [url: 'https://github.com/user01/project01.git',
                    credentialsId: 'UserGit01']
                 ]
               ]
    }
    stage('Build') {
     ºagent anyº
      steps {
        ...
      Oºstashºincludes: '**/target/*.jar', name:º'app'º
    }                                             ^
   }                                              │
   stage('Linux') {               ┌───────────────┘
    ºagent { label 'linux' }º     │
     steps {                      │
      Oºunstashºº'app'º←·········· copy named stash
        sh '...'                  Jenkins master → Current WorkSp.
      }                           · Note:Oºstashº = something put away for future use
      post { ...  }               ·      (In practice: Named cache of generated artifacts
    }                             ·       during same pipeline for reuse
    stage('Test on Windows') {    ·       in further steps). Once the pipeline is
     ºagent { label 'windows' }º  ·       finished, it is removed.
      steps {                     ·
        unstashº'app'º←············
        bat '...'
      }
      post { ...  }
    }
  }
}

Groovy Syntax Tips
git  key1: 'value1', key2: 'value2'   // ← short form
git([key1: 'value1', key2: 'value2']) // ← long form

sh          'echo hello'     // ← short form. Valid syntax for single param
sh([script: 'echo hello'])   // ← long form.


Parallel execution
stage('Test') {

 ºparallelº ←····················· Execute linux in parallel
 ºlinux:º{                         with windows
    node('linux') {
      try {
        unstash 'app' ←············· Copy
        sh 'make check'
      }
      finally {
        junit '**/target/*.xml'
      }
    }
  },
 ºwindows:º{
    node('windows') {
      /* .. snip .. */
    }
  }
}

git checkout summary
    checkout([
      $class    : 'GitSCM',
      poll      : false,
      branches  : [[name: commit]],
      extensions: [
        [$class: 'RelativeTargetDirectory', relativeTargetDir: reponame],
┌──→    [$class: 'CloneOption', reference: "/var/cache/${reponame}"]
│     ],
│     submoduleCfg: [],
│     userRemoteConfigs: [
│       [credentialsId: 'jenkins-git-credentials', url: repo_url]
│     ],
│     doGenerateSubmoduleConfigurations: false,
│   ])
└─CloneOption Class:
  - shallow (boolean) : do NOT download history              (Save time/disk)
  - noTags  (boolean) : do NOT download tags                 (Save time/disk)
                        (use only what specified in refspec)
  - depth (int)       : Set shallow clone depth              (Save time/disk)
  - reference(String) : local folder with existing repository
                        used by Git during clone operations.
  - timeout  (int)    : timeout for clone/fetch ops.
  - honorRefspec(bool): initial clone using given refspec   (Save time/disk)


End-to-End Multibranch Pl.
@[https://jenkins.io/doc/tutorials/build-a-multibranch-pipeline-project/]

PREREQUISITES
-ºGitº
- Docker

┌──────────────┬────────────┬────────────┐
│ INPUT        → JENKINS    → OUTPUT     │
│ ARTIFACTS    →            → ARTIFACTS  │
├──────────────┼────────────┼────────────┤
│ Node.js      │ build→test │ development│
│ React app    │            │ production │
│ npm          │            │            │
└──────────────┴────────────┴────────────┘

STEP 1) Setup local git repository
 - clone:
 $ git clone https://github.com/?????/building-a-multibranch-pipeline-project
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                      Forked from
                                    @[https://github.com/jenkins-docs/building...]
 - Create dev/pro branches:
   $ git branch development
   $ git branch production

STEP 2) Add 'Jenkinsfile' stub (agent, stages sections) to repo
       (initially in master branch)

STEP 3) Create new Pipeline in Jenkins Blue Ocean Interaface
        browse to "http://localhost:8080/"
          → click "Create a new Pipeline"
            → Choose "Git" in "In Where do you store your code?"
              → Repository URL: "/home/.../building-a-multibranch-pipeline-project"
                → Save

    Blue Ocean will detect the presence of the "Jenkinsfile" stub
    in each branch and will run each Pipeline against its respective branch,


STEP 4) Start adding functionality to the Jenkinsfile pipeline
        (commit to git once edited)
    pipeline {
        environment {
          CI = 'true'
          docker_caching = "${env.HOME}/.m2:/root/.m2"   ←  Cache to speed-up builds
          docker_ports   = '-p 3000:3000 -p 5000:5000'   ←  dev/pro ports the app listens on
        }
        agent {
            docker {
                image 'node:6-alpine'            ←   Good Enough to build simple
                                                     Node.js+React apps
                args '-p 3000:3000 -p 5000:5000' ←   dev/pro ports where the app
                                                     will listen for requests. Used
                                                     during functional testing

            }
        }
        stages {
            stage('Build') {
                steps {
                    sh 'npm install'             ←  1st real build command
                }
            }
            stage('Test') {
                steps {
                    sh './jenkins/scripts/test.sh'
                }
            }
        }
    }

STEP 5) Click "run" icon of the master branch of your Pipeline project,
        and check the result.

STEP 6) Add "deliver" and "deploy" stages to the Jenkinsfile Pipeline
        (and commit changes)
       ºJenkins will selectively execute based on the branch that Jenkins is building fromº

      + stage('Deliver for development') {
      +    ºwhen {º
      +    º    branch 'development'º
      +    º}º
      +     steps {
      +         sh './jenkins/scripts/deliver-for-development.sh'
      +         input message: 'Finished using the web site? (Click "Proceed" to continue)'
      +         sh './jenkins/scripts/kill.sh'
      +     }
      + }
      + stage('Deploy for production') {
      +    ºwhen {º
      +    º    branch 'production'º
      +    º}º
      +     steps {
      +         sh './jenkins/scripts/deploy-for-production.sh'
      +         input message: 'Finished using the web site? (Click "Proceed" to continue)'
      +         sh './jenkins/scripts/kill.sh'
      +     }
      + }
Ex Pipeline script 
@[https://jenkins.io/doc/pipeline/steps/pipeline-build-step/]
build job: 'Pipeline01FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline02FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline03FromJenkinsfileAtGit', propagate: true, wait: false
                                                        ^^^^
                                result of step is that of downstream build
                                (success, unstable, failure, not built, or aborted).

                                false →  step succeeds even if the downstream build failed
                                         use result property of the return value as needed.

Jenkinless Pipeline
Jenkinsfile-runner:
- Executing a Jenkinsfile pipeline without the need of having a Jenkins server 
  running (and wasting memory).
@[https://jenkins.io/blog/2019/02/28/serverless-jenkins/]
@[https://github.com/jenkinsci/jenkinsfile-runner]
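
Minimal usage sketch (the image name/tag published on Docker Hub may change;
check the project README):

$ docker run --rm \
    -v $(pwd)/Jenkinsfile:/workspace/Jenkinsfile \
    jenkins/jenkinsfile-runner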
Unordered
AWS EC2 plugin
@[https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Fleet+Plugin]
- launch Amazon EC2 Spot Instances as worker nodes
  automatically scaling the capacity with the load.
Monitor GiHub/BitBucket Organization
Organization Folders enable Jenkins to monitor an entire GitHub
Organization, or Bitbucket Team/Project, and automatically create new
Multibranch Pipelines for repositories which contain branches and pull
requests containing a Jenkinsfile. Currently, this functionality exists only
for GitHub and Bitbucket, with functionality provided by the
plugin:github-organization-folder[GitHub Organization Folder] and
plugin:cloudbees-bitbucket-branch-source[Bitbucket Branch Source] plugins
Serverless
@[https://medium.com/@jdrawlings/serverless-jenkins-with-jenkins-x-9134cbfe6870]
TODO:
Zuul
REF: IBM OpenStack Engineer Urges Augmenting Jenkins with Zuul for Hyperscale Projects
[https://thenewstack.io/ibm-openstack-engineer-urges-cncf-consider-augmenting-jenkins-zuul/]

@[https://zuul-ci.org/]
- Use the same Ansible playbooks to
  deploy your system and run your tests.


REF:@[https://www.mediawiki.org/wiki/Continuous_integration/Zuul]
"""...Zuul is a python daemon which acts as a gateway between
Gerrit and Jenkins. It listens to Gerrit stream-events feed and
trigger jobs function registered by Jenkins using the Jenkins Gearman
plugin. The jobs triggering specification is written in YAML and
hosted in the git repository integration/config.git as /zuul/layout.yaml """
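For orientation, a minimal hedged sketch of what such a /zuul/layout.yaml
entry roughly looked like in Zuul v2 (pipeline, project and job names
here are hypothetical):

  pipelines:
    - name: check                          # run on each new Gerrit patchset
      manager: IndependentPipelineManager
      trigger:
        gerrit:
          - event: patchset-created

  projects:
    - name: example/myproject
      check:
        - myproject-unit-tests             # Jenkins job registered via Gearman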
Customize History Saving Policy
@[https://stackoverflow.com/questions/60391327/is-it-possible-in-jenkins-to-keep-just-first-and-last-failures-in-a-row-of-con]

Use Case: we are just interested in keeping "build" changes when the execution
   changes from "successful execution" to "failure". That is, if we have a history like:

   t1  t2  t3  t4  t5  t6  t7  t8  t9  t10 t11 t12 t13 t14 t15
   -----------------------------------------------------------
   OK, OK, OK, OK, KO, KO, KO, KO, OK, OK, OK, OK, KO, KO, OK
   ^               ^               ^               ^       ^
   status          status          status          status  status
   change          change          change          change  change


   We want to keep history just for:
   t1              t5              t9              t13     t15
   -----------------------------------------------------------
   OK,             KO,             OK,             KO,     OK

To implement this history-saving policy, a Groovy post-build step is needed:

 Ex: discard all successful builds of a job except for the last 3 ones
     (since typically, you're more interested in the failed runs)

     def allSuccessfulBuilds = manager.build.project.getBuilds().findAll {
         it.result?.isBetterOrEqualTo( hudson.model.Result.SUCCESS )
     }
     
     allSuccessfulBuilds.drop(3).each {
       it.delete()
     }
CircleCI
CircleCI Ex
REF:
@[https://github.com/interledger4j/ilpv4-connector/blob/master/.circleci/config.yml]

cat .circleci/config.yml
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:

  # This job builds the entire project and runs all unit tests (specifically the persistence tests) against H2 by
  # setting the `spring.datasource.url` value. All Integration Tests are skipped.
  build:
    working_directory: ~/repo

    docker:
      # Primary container image where all commands run
      - image: circleci/openjdk:8-jdk
        environment:
          # Customize the JVM maximum heap limit
          MAVEN_OPTS: -Xmx4096m

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar

      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Full Build (H2)
          command:  mvn dependency:go-offline -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      # save tests
      - run:
          name: Save test results
          command: |
            mkdir -p ~/junit/
            find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} ~/junit/ \;
            mkdir -p ~/checkstyle/
            find . -type f -regex ".*/target/checkstyle-reports/.*xml" -exec cp {} ~/junit/ \;

          when: always

      - store_test_results:
          path: ~/junit

      - store_artifacts:
          path: ~/junit

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Ilp-over-HTTP Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `IlpOverHttp`.
  integration_tests_ilp_over_http:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Pilpoverhttp

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Settlement-related Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `Settlement`.
  integration_tests_settlement:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Psettlement

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  # This job runs specific Coordination-related Integration Tests (ITs) found in the `connector-it` module.
  # by executing a special maven command that limits ITs to the test-group `Coordination`.
  integration_tests_coordination:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:

      # apply the JCE unlimited strength policy to allow the PSK 256 bit key length
      # solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
      - run:
          name: Getting JCE unlimited strength policy to allow the 256 bit keys
          command: |
            curl -L --cookie 'oraclelicense=accept-securebackup-cookie;'  http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
            unzip -o /tmp/jce_policy.zip -d /tmp
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
            sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
      - checkout # check out source code to working directory

      # Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
      # https://circleci.com/docs/2.0/caching/
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}

      # gets the project dependencies and installs sub-module deps
      - run:
          name: Install Connector Dependencies
          command: mvn dependency:go-offline -DskipTests -DskipITs install

      - save_cache: # saves the project dependencies
          paths:
            - ~/.m2
          key: v1-dependencies-{{ checksum "pom.xml" }}

      - run:
          name: Run Integration Tests (ITs)
          command: |
            cd ./connector-it
            docker network prune -f
            mvn verify -Pcoordination

      # publish the coverage report to codecov.io
      - run: bash <(curl -s https://codecov.io/bash)

  docker_image:
    working_directory: ~/repo

    machine:
      image: ubuntu-1604:201903-01

    environment:
      MAVEN_OPTS: -Xmx4096m
      JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "pom.xml" }}
      - run:
          name: Deploy docker image
          command: mvn verify -DskipTests -Pdocker,dockerHub -Dcontainer.version=nightly -Djib.httpTimeout=60000 -Djib.to.auth.username=${DOCKERHUB_USERNAME} -Djib.to.auth.password=${DOCKERHUB_API_KEY}

workflows:
  version: 2

  # In CircleCI v2.1, when no workflow is provided in config, an implicit one is used. However, if you declare a
  #  workflow to run a scheduled build, the implicit workflow is no longer run. You must add the job workflow to your
  # config in order for CircleCI to also build on every commit.
  commit:
    jobs:
      - build
      - integration_tests_ilp_over_http:
          requires:
            - build
      - integration_tests_settlement:
          requires:
            - build
      - integration_tests_coordination:
          requires:
            - build

  nightly:
    triggers:
      - schedule:
          cron: "0 0 * * *"
          filters:
            branches:
              only:
                - master
    jobs:
      - build
      - integration_tests_ilp_over_http:
          requires:
            - build
      - integration_tests_settlement:
          requires:
            - build
      - integration_tests_coordination:
          requires:
            - build
      - docker_image:
          requires:
            - integration_tests_ilp_over_http
            - integration_tests_settlement
            - integration_tests_coordination
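Note: the three integration-test jobs above are identical except for the
Maven profile. Plain YAML anchors (which CircleCI config accepts) can factor
out the repetition; a minimal sketch reusing the names above:

defaults: &defaults                    # shared job body, defined once
  working_directory: ~/repo
  machine:
    image: ubuntu-1604:201903-01
  environment:
    MAVEN_OPTS: -Xmx4096m
    JAVA_HOME: /usr/lib/jvm/jdk1.8.0/

jobs:
  integration_tests_settlement:
    <<: *defaults                      # ← merge the shared keys
    steps:
      - checkout
      - run: mvn verify -Psettlement   # ← only the profile changes per job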
Nexus
Nexus Repository Management
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-1-maven-artifacts
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-2-npm-packages
https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images
Ansible
External Links
- User Guide:
@[https://docs.ansible.com/ansible/latest/user_guide/index.html]
- Playbooks best practices:
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html]

Ronald Kurr has a lot of very useful and professional Ansible-powered code to
provision JVM, Python, Desktop, ... machines. For example:
 - Ansible Study Group Labs
@[https://github.com/kurron/ansible-study-group-labs]
 - An OpenVPN server in the cloud
   https://github.com/kurron/aws-open-vpn/blob/master/ansible/playbook.yml
 - Installation of tools that any self-respecting Operations person loves and needs.
   https://github.com/kurron/ansible-role-operations/blob/master/tasks/main.yml
 - Installation of tools that any self-respecting JVM developer loves and needs.
   https://github.com/kurron/ansible-role-jvm-developer/blob/master/tasks/main.yml
 - Installation of tools that any self-respecting AWS command-line user loves and needs.
 @[https://github.com/kurron/ansible-role-aws/blob/master/tasks/main.yml]
 - Connect to a Juniper VPN under Ubuntu.
 @[https://github.com/kurron/ansible-role-jvpn/blob/master/tasks/main.yml]
 - Installation of tools that any self-respecting Atlassian user loves and needs.
 @[https://github.com/kurron/ansible-role-atlassian/blob/master/tasks/main.yml]
 - Installation of tools that any self-respecting cross-platform .NET developer loves and needs.
 @[https://github.com/kurron/ansible-role-dot-net-developer/blob/master/tasks/main.yml]
 - Docker container that launches a pipeline of Docker containers that
   ultimately deploy Docker containers via Ansible into EC2 instances.
 @[https://github.com/kurron/docker-ec2-pipeline]
 - Increase operating system limits for Database workloads.
 @[https://github.com/kurron/ansible-role-os-limits/blob/master/tasks/main.yml]
 - Creation of an Amazon VPC. Public and private subnets are created
   in all availability zones.
 @[https://github.com/kurron/ansible-role-vpc]

- Command line tools
@[https://docs.ansible.com/ansible/latest/user_guide/command_line_tools.html]

- ansible  run a single task 'playbook' against a set of hosts
@[https://docs.ansible.com/ansible/latest/cli/ansible.html]

- ansible-config view, edit, and manage ansible configuration
@[https://docs.ansible.com/ansible/latest/cli/ansible-config.html]

- ansible-console  interactive console for executing ansible tasks
@[https://docs.ansible.com/ansible/latest/cli/ansible-console.html]

- ansible-doc plugin documentation tool
@[https://docs.ansible.com/ansible/latest/cli/ansible-doc.html]

- ansible-galaxy  manage Ansible roles in shared repositories (defaults to @[https://galaxy.ansible.com])
@[https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html]

- ansible-inventory  display/dump configured inventory
@[https://docs.ansible.com/ansible/latest/cli/ansible-inventory.html]

- ansible-pull  pull playbooks from a VCS repo and execute them for the local host
@[https://docs.ansible.com/ansible/latest/cli/ansible-pull.html]

- ansible-vault  encryption/decryption utility for Ansible data files
@[https://docs.ansible.com/ansible/latest/cli/ansible-vault.html]

Summary
layout best practices                              ║ ºControllerº  1 ←→ N  ┌─→ ºModuleº
(Recommended, non─mandatory)                       ║                       │
best practice file layout approach:                ║ ºMachine   º          │  (community pre─packaged)
────────────────────────────────────────────────   ║  ^                    │ ─ abstracts recurrent system task
production            # inventory file             ║─ host with            │ ─ Provide the real power of Ansible
staging               # inventory file             ║  installed Ansible    │   avoiding custom scripts
                                                   ║  with modules         │ ─ $ ansible─doc "module_name"
group_vars/           # ← assign vars.             ║  prepackaged ←────────┘ ─ Ex:
                      #   to particular groups.    ║  andºconfig.filesº        user:    name=deploy group=web
  all.yml             # ← Ex:                      ║      └─┬────────┘         ^             ^            ^
  │---                                             ║     1) $ANSIBLE_CONFIG  module   ensure creation of'deploy'
  │ntp: ntp.ex1.com                                ║     2) ./ansible.cfg    name     account in 'web' group
  │backup: bk.ex1.com                              ║     3) ~/.ansible.cfg   (executions are idempotent)
                                                   ║     4) /etc/ansible/ansible.cfg
  webservers.yml     # ← Ex:                       ║     Ex:
  │---                                             ║     [defaults]
  │apacheMaxClients: 900                           ║     inventory = hosts
  │apacheMaxRequestsPerChild: 3000                 ║     remote_user = vagrant
                                                   ║     private_key_file = ~/.ssh/private_key
  dbservers.yml      # ← Ex:                       ║     host_key_checking = False
  │---                                             ║─ "host" inventory file
  │maxConnectionPool: 100                          ║         listing target servers,groups
  │...                                             ║
                                                   ║
host_vars/                                         ║ Role   N  ←────→ 1   Playbook        1 ←─────→ N tasks
   hostname1.yml      # ←assign variables          ║    ^                    ^                         ^
   hostname2.yml      #  to particular systems     ║ Mechanism to          ─ main yaml defining        single proc.
                                                   ║ share files/...         task to be executed       to execute
library/              # (opt) custom modules       ║ for reuse *2          ─ Created by DevOps team
module_utils/         # (opt) custom module_utils  ║@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html]
                      #       to support modules   ║
filter_plugins/       # (opt) filter plugins       ║ºRUN SEQUENCEº
                                                   ║ |playbook| 1←→N |Play| 1 → apply to → N |Hosts|
webservers.yml        # ← Ex playbook:             ║                  ↑        
│---                  #   Map                      ║                  1 
│- hosts: webservers  # ← webservers─group         ║                  └─(contains)→ N |Task| 1→1 |Module|
│                     #   to                       ║  ┌────────────────────────────────┘
│  roles:             # ← roles                    ║  └→ each task is run in parallel across hosts in order
│    - common         #                            ║     waiting until all hosts have completed the task before
│    - webtier        #                            ║     moving to the next.(default exec.strategy, can be switched to "free") 
                                                   ║     | - name: ....                                           
dbservers.yml         # ← Ex playbook for db─tier  ║     |   hosts: groupTarget01                            
site.yml              #ºmaster playbookº           ║     | Oºserial:º   # ←  Alt1: serial schedule-tunning.
│---                    (whole infra)              ║     | Oº  - 1      # ←        first in 1 host                 
│# file: site.yml                                  ║     | Oº  - "10%"  # ←        if OK, runs 10% simultaneously  
│- import_playbook: webservers.yml                 ║     | Oº  - 30     # ←        finally 30 hosts in parallel
│- import_playbook: dbservers.yml                  ║     |  tasks: ...
                                                   ║     |#Bºstrategy: freeº ← Alt2:  Don't wait for other hosts
                                                   ║
Role layout                                        ║º|Playbook Play|º
roles/                                             ║  INPUT
├ webtierRole/     # ← same layout that common     ║ |Playbook| → Oºansible─playbookº  → Gather      ────→ exec tasks
│ ...                                              ║                ^                    host facts          │
├ monitoringRole/  # ← same layout that common     ║                exec tasks on       (network,            v
│ ...                                              ║                the target hostº*1º  storage,...)     async Handlers
├─common/          # ← Common Role.                ║                                     └────┬────┘      use to:
│ ├─tasks/         #                               ║                           Usually gathered facts     service restart, 
│ │ └─ main.yml    #                               ║                           are used for               ...              
│ ├─handlers/      #                               ║                         OºConditionalºInclude. Ex:            
│ │ └─ main.yml    #                               ║                           ...
│ ├─templates/     #                               ║                           -Oºincludeº: Redhat.yml
│ │ └─ ntp.conf.j2 # ← notice .j2 extension        ║                            Oºwhenº: ansible_os_family == 'Redhat'    
│ ├─files/         #                               ║ Reminder: 
│ │ ├─ bar.txt     # ← input to   copy─resource    ║@[https://docs.ansible.com/ansible/2.4/playbooks_reuse_includes.html]
│ │ └─ foo.sh      # ← input to script─resource    ║ "include"        ← evaluated @ playbook parsing
│ ├─vars/          #                               ║ "import"         ← evaluated @ playbook execution
│ │ └─ main.yml    # ← role related vars           ║ "import_playbook"← plays⅋tasks in each playbook
│ ├─defaults/      #                               ║ "include_tasks"                                                      
│ │ └─ main.yml    # ← role related vars           ║ "import_tasks"
│ │                  ← with lower priority         ║
│ ├─meta/          #                               ║ºcommand moduleº
│ │ └─ main.yml    # ← role dependencies           ║─ Ex:
│ ├─library/       # (opt) custom modules          ║. $ ansible server01 -m command -a uptime
│ ├─module_utils/  # (opt) custom module_utils     ║                     ^^^^^^^^^^
│ └─lookup_plugins/# (opt) a given 'lookup_plugins'║                     default module. Can be omitted
│                          is used                 ║  testserver │ success │ rc=0 ⅋⅋
    ...                                            ║  17:14:07 up  1:16,  1 user, load average: 0.16, ...
═══════════════════════════════════════════════════╩════════════════════════════════════════════════════════════════
º*1:º@[https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html]


playbook-layout
ºTASK vs ROLES PLAYBOOK LAYOUTº
───────────────────────────────────────────────────────────────────
PLAYBOOK YAML LAYOUT WITHºTASKSº:

---
- hosts: webservers       ← targeted (ssh) servers
  connection: ssh         ← :=ssh, local, ...

  vars:                   ← yaml-file-scoped var.list
    - myYmlVar01 : "myVal01"

  environment:            ← runtime-scoped env.var.list
    - myEnvVar01 : "myEnv01"

  tasks:                  ← ordered task list to be executed
    - name: install apache2         ← task1
      apt: |
        name=apache2
        update_cache=yes
        state=latest
      notify:
        - ºrestart-apache2-idº

    - name: next_task_to_exec
      "module": ...

  handlers:               ← tasks triggered by events
    - name: restart-apache2         ← ºname acts as a Unique-IDº
      service: name=apache2 state=restarted

- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    ...

───────────────────────────────────────────────────────────────────
PLAYBOOK YAML LAYOUT WITHºROLESº:
ºbased on a well known file structureº

---
- name : my list of Task
  hosts: database
  vars_files:
    - secrets.yml

Bºpre_tasksº:             # ← pre_tasks execute before roles
    - name: update the apt cache
      apt: update_cache=yes

  roles:
    - role: BºdatabaseRoleº
      # next vars override those in (vars|defaults)/main.yml
      database_name: " {{ myProject_ddbb_name }}"
      database_user: " {{ myProject_ddbb_user }}"
    - { role: consumer, when: tag | default('provider') == 'consumer'}
    - { role: provider, when: tag | default('provider') == 'provider'}

Bºpost_tasksº:            # ← post_tasks execute after roles
    - name: notify Slack
      local_action: ˃
        slack
        domain=acme.slack.com
        token={{ slack_token }}
        msg="database {{ inventory_hostname }} configured"

roles search path: ./roles → /etc/ansible/roles
role file layout:
  roles/BºdatabaseRoleº/tasks/main.yml
  roles/BºdatabaseRoleº/files/
  roles/BºdatabaseRoleº/templates/
  roles/BºdatabaseRoleº/handlers/main.yml
  roles/BºdatabaseRoleº/vars/main.yml      # should NOT be overridden
  roles/BºdatabaseRoleº/defaults/main.yml  # can be overridden
  roles/BºdatabaseRoleº/meta/main.yml      # dependency info about role

───────────────────────────────────────────────────────────────────
REGISTERING TASK OUTPUT:

- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      Oºregisterº:ºfoo_resultº  ← OºSTDOUT exec output to ansible varº
      ignore_errors: True         JSON schema of the output depends on
                                  the module. Use -v on each module
                                  to investigate.
    - shell: /usr/bin/bar
      when: ºfoo_resultº.rc == 5

───────────────────────────────────────────────────────────────────
Error Handling
- default behavior:
  - take a host out of the play if a task fails and continue with
    the other hosts.
  - Oºserialº, Oºmax_fail_percentageº can be used to define a
    playbook-play as failed.
    @[https://docs.ansible.com/ansible/2.5/user_guide/playbooks_delegation.html#maximum-failure-percentage]
- Using 'block' (task grouping) inside tasks:
  - hosts: app-servers
    Oºmax_fail_percentage:º"10%"  ← abort play if surpassed.
    tasks:
      - name: Take VM out of the load balancer
      - name: Create a VM snapshot before the app upgrade
      - block:                    ← scope error/recovery/rollback
          - name: Upgrade the application
          - name: Run smoke tests
        ºrescue:º
          - name: Revert a VM to the snapshot after a failed upgrade
        ºalways:º
          - name: Re-add webserver to the loadbalancer
          - name: Remove a VM snapshot
inventory file
- Defaults to: /etc/ansible/hosts
- if marked as executable (+x) it's executed and the json-output
  is taken as the effective inventory.
  - the script must then support '--host=' and '--list=' flags

Ex: hosts inventory file                  ┌─→ Ex: test("ssh-ping") host in inventory
                                          │       using the 'ping' module:
Gºdevelopmentº ←──────────────────────────┘       $ ansible -i ./hostsº-m pingº Gºdevelopmentº
Oºproductionº

[all:vars]                                group patterns
ntp_server=ntp.ubuntu.com                 ─────────────────────────────────────────
                                          All hosts     Oºallº or Oº*º
[Oºproductionº:vars]                      Union         devOº:ºstaging
db_primary_host=rhodeisland.example.com   Intersection  stagingOº:⅋ºdatabase
db_replica_host=virginia.example.com      Exclusion     devOº:!ºqueue
db_name=widget_production                 Wildcard      Oº*.example.comº
rabbitmq_host=pennsylvania.example.com    Range         webOº[5:10]º
                                          Regex         Oº~web\d+\.example\.(com|org)º
[Gºdevelopmentº:vars]
db_primary_host=quebec.example.com
db_name=widget_staging
rabbitmq_host=quebec.example.com

[Gºvagrantº:vars]
db_primary_host=vagrant3
db_name=widget_vagrant
rabbitmq_host=vagrant3

[Gºvagrantº]
Gºvagrant1 ansible_host=127.0.0.1 ansible_port=2222º
Gºvagrant2 ansible_host=127.0.0.1 ansible_port=2200º

[web_group01]
Oºgeorgia.example.comº
Oºnewhampshire.example.comº
Oºnewjersey.example.comº
Gºvagrant1º

[rabbitmq]
Oºpennsylvania.example.comº
Gºvagrant2º

[django:children]      ← Group of groups
web_group01
rabbitmq

[web_group02]
web_group01[01:20].example.com  ← ranges
web-[a-t].example.com
variable "scopes" Playbook Variable Main Scopes -ºGlobal:ºset by config, ENV.VARS and cli -ºPlay :ºeach play and contained structures, vars|vars_files|vars_prompt entries role defaults -ºHost :ºdirectly associated to a host, like inventory, include_vars, facts or registered task outputs Variable scope Overrinding rules: - The more explicit you get in scope, the more precedence 1 command line values (eg “-u user”) º(SMALLEST PRECEDENCE)º 2 role defaults 3 *1 inventory file || script group vars 4 *2 inventory group_vars/all 5 *2 playbook group_vars/all 6 *2 inventory group_vars/* 7 *2 playbook group_vars/* 8 *1 inventory file or script host vars 9 *2 inventory host_vars/* 10 *2 playbook host_vars/* 11 *4 host facts || cached set_facts 12 play vars 13 play vars_prompt 14 play vars_files 15 role vars (defined in role/vars/main.yml) 16 block vars (only for tasks in block) 17 task vars (only for the task) 18 include_vars 19 set_facts || registered vars 20 role (and include_role) params 21 include params 22 (-e) extra vars º(BIGEST PRECEDENCE)º ↑ *1 Vars defined in inventory file or dynamic inventory *2 Includes vars added by ‘vars plugins’ as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible. *4 When created with set_facts’s cacheable option, variables will have the high precedence in the play, but will be the same as a host facts precedence when they come from the cache.
Must-know Modules
1) Package management
- module for major package managers (DNF, APT, ...)
  - install, upgrade, downgrade, remove, and list packages.
  - dnf_module
  - yum_module (required for Python 2 compatibility)
  - apt_module
  - slackpkg_module

  - Ex:
    |- name: install Apache,MariaDB
    |  dnf:                # ← dnf,yum,
    |    name:
    |      - httpd
    |      - mariadb-server
    |    state: latest     # ← !=latest|present|...

2) 'service' module
  - start, stop, and reload installed packages;
  - Ex:
    |- name: Start service foo, based on running process /usr/bin/foo
    |  service:
    |    name: foo
    |    pattern: /usr/bin/foo
    |    state: started     # ← started|restarted|...
    |    args: arg0value 

3) 'copy' module
  - copies file: local_machine → remote_machine
  |- name: Copy a new ntp.conf file into place
  |  copy:
  |    src: /mine/ntp.conf
  |    dest: /etc/ntp.conf
  |    owner: root
  |    group: root
  |    mode: '0644'  # or u=rw,g=r,o=r
  |    backup: yes   # back-up original if different to new

4) 'debug' module (print values to STDOUT during execution)
  |- name: Display all variables/facts known for a host
  |  debug:
  |    var: hostvars[inventory_hostname]
  |    verbosity: 2         # ← optional. Display only with
                                $ ansible-playbook demo.yamlº-vvº

5) 'file' module: manage file and its properties.
    - set attributes of files, symlinks, or directories.
    - removes files, symlinks, or directories.
- Ex: 
  |- name: Change file ownership/group/perm
  |  file:
  |    path: /etc/foo # ← create if needed
  |    owner: foo
  |    group: foo
  |    mode: '0644'
  |    state: file   # ← file*|directory|...

6) 'lineinfile' module
   - ensures that particular line is in file
   - replaces existing line using regex.
   - Ex:
     |- name: Ensure SELinux is set to enforcing mode
     |  lineinfile:
     |    path: /etc/selinux/config
     |    regexp: '^SELINUX='       # ← (optional) line to replace if matched
     |    line: SELINUX=enforcing   # ← ensured value; appended if regexp
                                         matches no line, untouched if already set


7) 'git' module
   - manages git checkouts of repositories to deploy files or software.
   - Ex: Create git archive from repo
     |- git:
     |    repo: https://github.com/ansible/ansible-examples.git
     |    dest: /src/ansible-examples
     |    archive: /tmp/ansible-examples.zip

8) 'cli_config'
  -  platform-agnostic way of pushing text-based configurations
     to network devices
     - Ex1:
       | - name: commit with comment
       |   cli_config:
       |     config: set system host-name foo
       |     commit_comment: this is a test
     
     - Ex2:
       set switch-hostname and exits with a commit message.
       |- name: configurable backup path
       |  cli_config:
       |    config: "{{ lookup('template', 'basic/config.j2') }}"
       |    backup: yes
       |    backup_options:
       |      filename: backup.cfg
       |      dir_path: /home/user


9) 'archive' module
   - create compressed archive of 1+ files.
   - Ex:
   |- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
   |  archive:
   |    path:
   |    - /path/to/foo
   |    - /path/wong/foo
   |    dest: /path/to/foo.tar.bz2
   |    format: bz2

10) Command
   - takes the command name followed by a list of space-delimited arguments.
Ex1:
- name: return motd to registered var
  command: cat /etc/motd .. ..  
  become: yes            # ← "sudo"
  become_user: db_owner  # ← effective user
  register: mymotd       # ← STDOUT to Ansible var mymotd
  args:                  # (optional) command-module args 
                         # (vs executed command arguments)
    chdir: somedir/      # ← change to dir 
    creates: /etc/a/b    # ← Execute command if path doesn't exists

@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html]
host fact → Play Vars
- UsingOºsetupº module at play/run-time. Ex:

  tasks:
    - ...
    - name: re-read facts after adding custom fact
    Bºsetup:ºfilter=ansible_local     ← re-run Bºsetup moduleº

$ ansible targetHost01 -m Oºsetupº
(Output will be similar to)
Next facts are available with:
- hosts: ...
Bºgather_facts: yesº ← Will execute the module "setup"

{
  Bº"ansible_os_family": "Debian",   º
  Bº"ansible_pkg_mgr": "apt",        º
  Bº"ansible_architecture": "x86_64",º
  b*"ansible_nodename": "ubuntu2.example.com",
    "ansible_all_ipv4_addresses": [ "REDACTED IP ADDRESS" ],
    "ansible_all_ipv6_addresses": [ "REDACTED IPV6 ADDRESS" ],
    "ansible_bios_date": "09/20/2012",
    ...
    "ansible_date_time": {
        "date": "2013-10-02",
        ...
    },
  Oº"ansible_default_ipv4": {º
  Oº    ...                  º
  Oº},                       º
    ...
    "ansible_devices": {
        "sda": {
            "partitions": {
                ...
                  Oº"size": "19.00 GB",º
            },
            ...
        },
        ...
    },
    ...
    "ansible_env": {
        "HOME": "/home/mdehaan",
      Oº"PWD": "/root/ansible",º
      Oº"SHELL": "/bin/bash",º
        ...
    },
  Oº"ansible_fqdn": "ubuntu2.example.com",º
  Oº"ansible_hostname": "ubuntu2",º
    ...
    "ansible_processor_cores": 1,
    "ansible_ssh_host_key_dsa_public": ...
    ...
}
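
Gathered facts are mostly consumed in 'when' conditionals and templates;
a minimal sketch (package name illustrative):

- hosts: all
  tasks:
    - name: install only on Debian-family hosts
      apt:
        name: htop
        state: present
      when: ansible_os_family == "Debian"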

/etc/ansible/facts.d
(Local provided facts, 1.3+)
Way to provide "locally supplied user values" as opposed to
               "centrally supplied user values"  or
               "locally dynamically determined values"

Any file inside /etc/ansible/facts.d (@ the remotely managed host)
ending in *.fact (JSON, INI, or an executable generating JSON, ...) can supply local facts

Ex: /etc/ansible/facts.d/preferences.fact contains:
[general]
asdf=1    ← Will be available as {{ ansible_local.preferences.general.asdf }}
bar=2       (keys are always converted to lowercase)


To copy local facts and make them usable in the current play:
- hosts: webservers
  tasks:
    - name: create directory for ansible custom facts
      file: state=directory recurse=yes path=/etc/ansible/facts.d

    - name: install custom ipmi fact
      copy: src=ipmi.fact dest=/etc/ansible/facts.d ← Copy local facts

    - name: re-read facts after adding custom fact
    Bºsetup:ºfilter=ansible_local   ← re-run Bºsetup moduleº to make
                                    ← local facts available in current play
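
Once re-read, the local fact behaves like any other fact; a minimal
sketch reusing the preferences.fact example above:

    - name: show the locally supplied fact
      debug:
        msg: "asdf is {{ ansible_local.preferences.general.asdf }}"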

Lookups: Query ext.data: file, sh, KeyValDB, ...
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_lookups.html]
  ...
  vars:
    motd_value: "{{Oºlookupº(Bº'file'º, '/etc/motd') }}"
                    ^^^^^^    ^^^^
                    Use       One of:
                    lookup    - file
                    modules   - password
                              - pipe      STDOUT of local exec.
                              - env       ENV.VAR.
                              - template  j2 tpl evaluation
                              - csvfile   Entry in .csv file
                              - dnstxt
                              - redis_kv  Redis key lookup
                              - etcd      etcd key lookup
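
A short sketch combining two of the lookup modules above ('env' and
'pipe'); the variable names are illustrative:

- hosts: localhost
  vars:
    build_user: "{{ lookup('env', 'USER') }}"       # ← ENV.VAR of control host
    kernel:     "{{ lookup('pipe', 'uname -r') }}"  # ← STDOUT of local exec.
  tasks:
    - debug: msg="run by {{ build_user }} on kernel {{ kernel }}"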
"Jinja2" template ex.
nginx.conf.j2

server {
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        listen 443 ssl;

        root /usr/share/nginx/html;
        index index.html index.htm;

        server_name         º{{º server_name º}}º;
        ssl_certificate     º{{º cert_file   º}}º;
        ssl_certificate_key º{{º key_file    º}}º;

        location / {
                try_files $uri $uri/ =404;
        }
}
templates/default.conf.tpl
templates/000_default.conf.tpl
|˂VirtualHost *:80˃
|    ServerAdmin webmaster@localhost
|    DocumentRoot {{ doc_root }}
|
|    ˂Directory {{ doc_root }}˃
|        AllowOverride All
|        Require all granted
|    ˂/Directory˃
|˂/VirtualHost˃

Task:
|  - name: Setup default virt.host
|    template: src=templates/default.conf.tpl dest=/etc/apache2/sites-available/000-default.conf

(j2) filters
Oº|º must be interpreted as the "pipe" (input) to filter, not the "or" symbol.
# default if undefined:
- ...
  "HOST": "{{ database_host Oº| default('localhost')º }}"

# fail after some debuging
- ...
  register: result
Oºignore_errors: Trueº
  ...
  failed_when: resultOº| failedº
...
Oºfailed º True if registered value is a failed    task
Oºchangedº True if registered value is a changed   task
Oºsuccessº True if registered value is a succeeded task
Oºskippedº True if registered value is a skipped   task
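
Ex. combining Oºregisterº with the filters above (command path illustrative):

- command: /opt/myprog --check
  register: result
  ignore_errors: True

- debug: msg="myprog changed something"
  when: result | changed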

path filters
Oºbasename  º
Oºdirname   º
Oºexpanduserº  '~' replaced by home dir.
Oºrealpath  º  resolves sym.links
Ex:
  vars:
    homepage: /usr/share/nginx/html/index.html
  tasks:
  - name: copy home page
    copy: ˃
      src={{ homepage Oº| basenameº }}
      dest={{ homepage }}

Custom filters
filter_plugins/surround_by_quotes.py
# From http://stackoverflow.com/a/15515929/742
def surround_by_quote(a_list):
    return ['"%s"' % an_element for an_element in a_list]

class FilterModule(object):
    def filters(self):
        return {'surround_by_quote': surround_by_quote}
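
With the file above in filter_plugins/, the filter can be used like any
built-in one:

- debug:
    msg: "{{ ['a', 'b', 'c'] | surround_by_quote }}"   # → '"a"', '"b"', '"c"'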
notify vs register
@[https://stackoverflow.com/questions/33931610/ansible-handler-notify-vs-register]

  
  some tasks ...                         |     some tasks ...
 ºnotify:ºnginx_restart                  |    ºregister:ºnginx_restart
                                         |     
  # our handler                          |     # do this after nginx_restart changes
  - name: nginx_restart                  |    ºwhen:ºnginx_restart|changed
          ^^^^^^^^^^^^^
        - only fired when 
          tasks report changes
        - only visible in playbook  ← With register task is displayed as skipped
          if actually executed.       if 'when' condition is false.
        - can be called from any
          role.
        - (by default) executed at
          the end of the playbook.
        RºThis can be dangerous:ºif the playbook
          fails midway, the handler is NOT
          notified. A second run can skip the
          handler, since the task may not
          report 'changed' anymore. So it is
        RºNOT idempotentº (unless
          --force-handlers is set)
        - To fire at specific point flush
          all handlers by defining a task like:
          - meta: flush_handlers
        - called only once no matter how many
          times it was notified.
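
Minimal sketch of the flush point mentioned above (task/handler names
illustrative, handler definition omitted):

  tasks:
    - name: update nginx config
      template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: nginx_restart

    - meta: flush_handlers    # ← run pending handlers right here,
                              #   not at the end of the play
    - name: smoke test
      command: curl -sf http://localhost/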
Handling secrets
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_vault.html]
- Ansible vaults use symmetric-cipher encryption

  INPUT               ENCRYPTING                OUTPUT                  Usage 
                      COMMAND                   (can be added to SCM)   (Play.Execution)
  ──────────────      ─────────────             ────────────────────    ──────────────────
  external pass─┐  ┌→ $ ansible─vault \ (alt1)→ protectedPB.yml   ──┬→ $ ansible-playbook protectedPB.yml \  *1
                │  │    create protectedPB.yml                      │   º--ask-vault-passº                ← alt.A
  secret needed─┤(alt1)                                             │   º--vault-password-fileºpassFileº  ← alt.B_1
  at playbook   └──┤                                                │                          ^^^^^^^^
  execution        │                                                │                   content/exec.STDOUT
                   │                                                │          should be a single-line-string
                   │                                                │   export ANSIBLE_VAULT_PASSWORD_FILE=... ← alt.B_2
                 (alt.2)                                            │
                   │                                                │
                   │                                                │
                   └→ $ ansible-vault \ (alt2)→ yml to be embeded ──┘
                        encrypt_string          into existing playbook
                                                Ex:
                                                → mySecretToEncrypt              
                                                → bla bla blah(Ctrl+D)→ !vault ← C⅋P to a yml file:
                                                →    $ANSIBLE_VAULT;1.1;AES256   - vars:
                                                →    66386439653236336462...       - secret01: !vault |
                                                →    64316265363035303763...                $ANSIBLE_VAULT;1.1;AES256
                                                →           ...                             66386439653236336462...

*1: RºWARN:º Currently requires all files to be encrypted with the same password
Ex (yum install)
apache@localhost
---
# file: ansible.yml
- hosts: localhost
  connection: local
  gather_facts: False

  vars:
    var_yum_prerequisites: [ 'httpd24'      , 'vim', 'tmux' ]
    var_apt_prerequisites: [ 'apache-server', 'vim', 'tmux' ]

  vars_files:
    - /vars/vars_not_in_git.yml   ←  add to .gitignore
                                     avoid sharing sensitive data
                                     /vars/vars_not_in_git.yml will look like:
                                     password: !vault |
                                               $ANSIBLE_VAULT;1.1;AES256
                                               ...




  tasks:
   - name: install yum pre-requisites
     when: ansible_os_family == "RedHat"
     become: true
     yum:
       name: "{{ var_yum_prerequisites }}"
       state: present
     notify:
     - restart-apache2

   - name: install apt pre-requisites
     when: ansible_os_family == "Debian"
     become: true
     apt:
       name: "{{ var_apt_prerequisites }}"
       state: latest
     notify:
     - restart-apache2


  handlers:
  - name: restart-apache2
    service: name=httpd state=restarted
Ex: Installing nginx
web-tls.yml
- name: wait in control host for ssh server to be running
  local_action: wait_for port=22 host="{{ inventory_hostname }}"
    search_regex=OpenSSH

- name: Configure nginx
 ºhosts:º webservers
  become: True
 ºvars:º
    Oºkey_fileº: /etc/nginx/ssl/nginx.key
    Gºcert_fileº: /etc/nginx/ssl/nginx.crt
    Bºconf_fileº: /etc/nginx/sites-available/default
    server_name: localhost
 ºtasks:º
    - name: install nginx
      ºaptº: ºnameº=nginx ºupdate_cacheº=yes

    - name: create directories for ssl certificates
      ºfileº: ºpathº=/etc/nginx/ssl ºstateº=directory

    - name: copy TLS key
      ºcopyº: ºsrcº=files/nginx.key ºdestº={{ Oºkey_fileº }} owner=root ºmodeº=0600
      ºnotifyº: restart nginx

    - name: copy TLS certificate
      ºcopyº: ºsrcº=files/nginx.crt ºdestº={{ Gºcert_fileº }}
      ºnotifyº: restart nginx

    - name: copy config file
      # a .j2 file must be rendered with the template module, not copy
      ºtemplateº: ºsrcº=templates/nginx.confº.j2º ºdestº={{ Bºconf_fileº }}

    - name: enable configuration
      # set attributes of file, symlink or directory
      ºfileº: ºdestº=/etc/nginx/sites-enabled/default ºsrcº={{ Bºconf_fileº }} state=link
    - name: copy index.html
      # template → new file → remote host
      ºtemplateº: ºsrcº=templates/index.html.j2 ºdestº=/usr/share/nginx/html/index.html
        mode=0644

    - name: show a debug message
      debug: "msg='Example debug message: conf_file {{ Bºconf_fileº }} included!'"

    - name: Example to register new ansible variable
      command: whoami
      register: login
    # (first debug helps to know who to write the second debug)
    - debug: var=login
    - debug: msg="Logged in as user {{ login.stdout }}"

    - name: Example to ºignore errorsº
      command: /opt/myprog
      register: result
      ignore_errors: ºTrueº
    - debug: var=result

 ºhandlers:º
    - name: restart nginx
      ºserviceº: ºnameº=nginx ºstateº=restarted
Insanely complete Ansible playbook
@[https://gist.github.com/marktheunissen/2979474]
---
# ^^^ YAML documents must begin with the document separator "---"
#
#### Example docblock, I like to put a descriptive comment at the top of my
#### playbooks.
#
# Overview: Playbook to bootstrap a new host for configuration management.
# Applies to: production
# Description:
#   Ensures that a host is configured for management with Ansible.
#
###########
#
#
# Note:
# YAML, like Python, cares about whitespace.  Indent consistently throughout.
# Be aware! Unlike Python, YAML refuses to allow the tab character for
# indentation, so always use spaces.
#
# Two-space indents feel comfortable to me, but do whatever you like.
# vim:ff=unix ts=2 sw=2 ai expandtab
#
# If you're new to YAML, keep in mind that YAML documents, like XML
# documents, represent a tree-like structure of nodes and text. More
# familiar with JSON?  Think of YAML as a strict and more flexible JSON
# with fewer significant characters (e.g., :, "", {}, [])
#
# The curious may read more about YAML at:
# http://www.yaml.org/spec/1.2/spec.html
#


###
# Notice the minus on the line below -- this starts the playbook's record
# in the YAML document. Only one playbook is allowed per YAML file.  Indent
# the body of the playbook.
-

  hosts: all
  ###########
  # Playbook attribute: hosts
  # Required: yes
  # Description:
  #   The name of a host or group of hosts that this playbook should apply to.
  #
  ## Example values:
  #   hosts: all -- applies to all hosts
  #   hosts: hostname -- apply ONLY to the host 'hostname'
  #   hosts: groupname -- apply to all hosts in groupname
  #   hosts: group1,group2 -- apply to hosts in group1 & group2
  #   hosts: group1,host1 -- mix and match hosts
  #   hosts: *.mars.nasa.gov wildcard matches work as expected
  #
  ## Using a variable value for 'hosts'
  #
  # You can, in fact, set hosts to a variable, for example:
  #
  #   hosts: $groups -- apply to all hosts specified in the variable $groups
  #
  # This is handy for testing playbooks, running the same playbook against a
  # staging environment before running it against production, occasional
  # maintenance tasks, and other cases where you want to run the playbook
  # against just a few systems rather than a whole group.
  #
  # If you set hosts as shown above, then you can specify which hosts to
  # apply the playbook to on each run as so:
  #
  #   ansible-playbook playbook.yml --extra-vars="groups=staging"
  #
  # Use --extra-vars to set $groups to any combination of groups, hostnames,
  # or wildcards just like the examples in the previous section.
  #

  sudo: True
  ###########
  # Playbook attribute: sudo
  # Default: False
  # Required: no
  # Description:
  #   If True, always use sudo to run this playbook, just like passing the
  #   --sudo (or -s) flag to ansible or ansible-playbook.

  user: remoteuser
  ###########
  # Playbook attribute:  user
  # Default: "root'
  # Required: no
  # Description
  #   Remote user to execute the playbook as

  ###########
  # Playbook attribute: vars
  # Default: none
  # Required: no
  # Description:
  #  Set configuration variables passed to templates & included playbooks
  #  and handlers.  See below for examples.
  vars:
    color: brown

    web:
      memcache: 192.168.1.2
      httpd: apache
    # Tree-like structures work as expected, but be careful to surround
    #  the variable name with ${} when using.
    #
    # For this example, ${web.memcache} and ${web.apache} are both usable
    #  variables.

    ########
    # The following works in Ansible 0.5 and later, and will set $config_path
    # "/etc/ntpd.conf" as expected.
    #
    # In older versions, $config_path will be set to the string "/etc/$config"
    #
    config: ntpd.conf
    config_path: /etc/$config

    ########
    # Variables can be set conditionally. This is actually a tiny snippet
    # of Python that will get filled in and evaluated during playbook execution.
    # This expression should always evaluate to True or False.
    #
    # In this playbook, this will always evaluate to False, because 'color'
    #  is set to 'brown' above.
    #
    # When ansible interprets the following, it will first expand $color to
    # 'brown' and then evaluate 'brown' == 'blue' as a Python expression.
    is_color_blue: "'$color' == 'blue'"

    #####
    # Builtin Variables
    #
    # Everything that the 'setup' module provides can be used in the
    # vars section.  Ansible native, Facter, and Ohai facts can all be
    # used.
    #
    # Run the setup module to see what else you can use:
    # ansible -m setup -i /path/to/hosts.ini host1
    main_vhost: ${ansible_fqdn}
    public_ip:  ${ansible_eth0.ipv4.address}

    # vars_files is better suited for distro-specific settings, however...
    is_ubuntu: "'${ansible_distribution}' == 'ubuntu'"


  ##########
  # Playbook attribute: vars_files
  # Required: no
  # Description:
  #   Specifies a list of YAML files to load variables from.
  #
  #   Always evaluated after the 'vars' section, no matter which section
  #   occurs first in the playbook.  Examples are below.
  #
  #   Example YAML for a file to be included by vars_files:
  #   ---
  #   monitored_by: phobos.mars.nasa.gov
  #   fish_sticks: "good with custard"
  #   # (END OF DOCUMENT)
  #
  #   A 'vars' YAML file represents a list of variables. Don't use playbook
  #   YAML for a 'vars' file.
  #
  #   Remove the indentation & comments of course, the '---' should be at
  #   the left margin in the variables file.
  #
  vars_files:
    # Include a file from this absolute path
    - /srv/ansible/vars/vars_file.yml

    # Include a file from a path relative to this playbook
    - vars/vars_file.yml

    # By the way, variables set in 'vars' are available here.
    - vars/$hostname.yml

    # It's also possible to pass an array of files, in which case
    # Ansible will loop over the array and include the first file that
    # exists.  If none exist, ansible-playbook will halt with an error.
    #
    # An excellent way to handle platform-specific differences.
    - [ vars/$platform.yml, vars/default.yml ]

    # Files in vars_files process in order, so later files can
    # provide more specific configuration:
    - [ vars/$host.yml ]

    # Hey, but if you're doing host-specific variable files, you might
    # consider setting the variable for a group in your hosts.ini and
    # adding your host to that group. Just a thought.


  ##########
  # Playbook attribute: vars_prompt
  # Required: no
  # Description:
  #   A list of variables that must be manually input each time this playbook
  #   runs.  Used for sensitive data and also things like release numbers that
  #   vary on each deployment.  Ansible always prompts for this value, even
  #   if it's passed in through the inventory or --extra-vars.
  #
  #   The input won't be echoed back to the terminal.  Ansible will always
  #   prompt for the variables in vars_prompt, even if they're passed in via
  #   --extra-vars or group variables.
  #
  #   TODO: I think that the value is supposed to show as a prompt but this
  #   doesn't work in the latest devel
  #
  vars_prompt:
    passphrase: "Please enter the passphrase for the SSL certificate"

    # Not sensitive, but something that should vary on each playbook run.
    release_version: "Please enter a release tag"

  ##########
  # Playbook attribute: tasks
  # Required: yes
  # Description:
  # A list of tasks to perform in this playbook.
  tasks:
    ##########
    # The simplest task
    # Each task must have a name & action.
    - name: Check that the server's alive
      action: ping

    ##########
    # Ansible modules do the work!
    - name: Enforce permissions on /tmp/secret
      action: file path=/tmp/secret mode=0600 owner=root group=root
    #
    # Format 'action' like above:
    # modulename  module_parameters
    #
    # Test your parameters using:
    #   ansible -m $module  -a "$module_parameters"
    #
    # Documentation for the stock modules:
    # http://ansible.github.com/modules.html

    ##########
    # Use variables in the task!
    #
    # Variables expand in both name and action
    - name: Paint the server $color
      action: command echo $color


    ##########
    # Trigger handlers when things change!
    #
    # Ansible detects when an action changes something.  For example, the
    # file permissions change, a file's content changed, a package was
    # just installed (or removed), a user was created (or removed).  When
    # a change is detected, Ansible can optionally notify one or more
    # Handlers.  Handlers can take any action that a Task can. Most
    # commonly they are used to restart a service when its configuration
    # changes. See "Handlers" below for more about handlers.
    #
    # Handlers are called by their name, which is very human friendly.

    # This will call the "Restart Apache" handler whenever 'copy' alters
    # the remote httpd.conf.
    - name: Update the Apache config
      action: copy src=httpd.conf dest=/etc/httpd/httpd.conf
      notify: Restart Apache

    # Here's how to specify more than one handler
    - name: Update our app's configuration
      action: copy src=myapp.conf dest=/etc/myapp/production.conf
      notify:
        - Restart Apache
        - Restart Redis

    ##########
    # Include tasks from another file!
    #
    # Ansible can include a list of tasks from another file. The included file
    # must represent a list of tasks, which is different than a playbook.
    #
    # Task list format:
    #   ---
    #   - name: create user
    #     action: user name=$user color=$color
    #
    #   - name: add user to group
    #     action: user name=$user groups=$group append=true
    #   # (END OF DOCUMENT)
    #
    #   A 'tasks' YAML file represents a list of tasks. Don't use playbook
    #   YAML for a 'tasks' file.
    #
    #   Remove the indentation & comments of course, the '---' should be at
    #   the left margin in the variables file.

    # In this example $user will be 'sklar'
    #  and $color will be 'red' inside new_user.yml
    - include: tasks/new_user.yml user=sklar color=red

    # In this example $user will be 'mosh'
    #  and $color will be 'mauve' inside new_user.yml
    - include: tasks/new_user.yml user=mosh color=mauve

    # Variables expand before the include is evaluated:
    - include: tasks/new_user.yml user=chris color=$color


    ##########
    # Run a task on each thing in a list!
    #
    # Ansible provides a simple loop facility. If 'with_items' is provided for
    # a task, then the task will be run once for each item in the 'with_items'
    # list.  $item changes each time through the loop.
    - name: Create a file named $item in /tmp
      action: command touch /tmp/$item
      with_items:
        - tangerine
        - lemon

    ##########
    # Choose between files or templates!
    #
    # Sometimes you want to choose between local files depending on the
    # value of the variable.  first_available_file checks for each file
    # and, if the file exists calls the action with $item={filename}.
    #
    # Mostly useful for 'template' and 'copy' actions.  Only examines local
    # files.
    #
    - name: Template a file
      action: template src=$item dest=/etc/myapp/foo.conf
      first_available_file:
        # ansible_distribution will be "ubuntu", "debian", "rhel5", etc.
        - templates/myapp/${ansible_distribution}.conf

        # If we couldn't find a distribution-specific file, use default.conf:
        - templates/myapp/default.conf

    ##########
    # Conditionally execute tasks!
    #
    # Sometimes you only want to run an action under certain conditions.
    # Ansible evaluates 'only_if' as a Python expression and will only run the
    # action when the expression evaluates to True.
    #
    # If you're trying to run a task only when a value changes,
    # consider rewriting the task as a handler and using 'notify' (see below).
    #
    - name: "shutdown all ubuntu"
      action: command /sbin/shutdown -t now
      only_if: "$is_ubuntu"

    - name: "shutdown the government"
      action: command /sbin/shutdown -t now
      only_if: "'$ansible_hostname' == 'the_government'"

    ##########
    # Notify handlers when things change!
    #
    # Each task can optionally have one or more handlers that get called
    # when the task changes something -- creates a user, updates a file,
    # etc.
    #
    # Handlers have human-readable names and are defined in the 'handlers'
    #  section of a playbook.  See below for the definitions of 'Restart nginx'
    #  and 'Restart application'
    - name: update nginx config
      action: copy src=nginx.conf dest=/etc/nginx/nginx.conf
      notify: Restart nginx

    - name: roll out new code
      action: git repo=git://codeserver/myapp.git dest=/srv/myapp version=HEAD branch=release
      notify:
        - Restart nginx
        - Restart application


    ##########
    # Run things as other users!
    #
    # Each task has an optional 'user' and 'sudo' flag to indicate which
    # user a task should run as and whether or not to use 'sudo' to switch
    # to that user.
    - name: dump all postgres databases
      action: command pg_dumpall -w -f /tmp/backup.psql
      user: postgres
      sudo: False

    ##########
    # Run things locally!
    #
    # Each task also has a 'connection' setting to control whether a local
    # or remote connection is used.  The only valid options now are 'local'
    # or 'paramiko'.  'paramiko' is assumed by the command line tools.
    #
    # This can also be set at the top level of the playbook.
    - name: create tempfile
      action: dd if=/dev/urandom of=/tmp/random.txt count=100
      connection: local
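
    # The same local task can also be written with the 'local_action'
    # shorthand (equivalent sketch; later Ansible also accepts
    # 'delegate_to: localhost'):
    - name: create tempfile
      local_action: command dd if=/dev/urandom of=/tmp/random.txt count=100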

  ##########
  # Playbook attribute: handlers
  # Required: no
  # Description:
  #   Handlers are tasks that run when another task has changed something.
  #   See above for examples.  The format is exactly the same as for tasks.
  #   Note that if multiple tasks notify the same handler in a playbook run
  #   that handler will only run once.
  #
  #   Handlers are referred to by name. They will be run in the order declared
  #   in the playbook.  For example: if a task were to notify the
  #   handlers in reverse order like so:
  #
  #   - name: touch a file
  #     action: file name=/tmp/lock.txt
  #     notify:
  #     - Restart application
  #     - Restart nginx
  #
  #   The "Restart nginx" handler will still run before the "Restart
  #   application" handler because it is declared first in this playbook.
  handlers:
    - name: Restart nginx
      action: service name=nginx state=restarted

    # Any module can be used for the handler action
    - name: Restart application
      action: command /srv/myapp/restart.sh

    # It's also possible to include handlers from another file.  Structure is
    # the same as a tasks file, see the tasks section above for an example.
- include: handlers/site.yml
Troubleshooting
  Problem ex:
  'django_manage' module always returns 'changed: False' for
  some "external" database commands.
  (ºnonºidempotent task)
  Solution:
Oº'changed_when'/'failed_when'º provide hints to Ansible at play time:
- name: init-database
  django_manage:
    command: createdb --noinput --nodata
    app_path: "{{ proj_path }}"
    virtualenv: "{{ venv_path }}"
  Oºfailed_when:º False       # ← avoid stopping execution on a non-zero exit
  register: Gºresultº
  Oºchanged_when:º Gºresult.outº is defined and "Creating tables" in Gºresult.outº

- debug: var=result   # inspect the registered result while tuning the conditions...

- fail:               # ...then deliberately stop the play here while debugging
Non-Classifed
Dynamic Inventory
@[https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html]
(EC2, OpenStack,...)
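- A dynamic inventory is any executable that emits JSON for --list (and
  --host NAME); a minimal hand-rolled sketch (group/host names made up):
    #!/usr/bin/env bash
    # inventory.sh - minimal dynamic inventory
    case "$1" in
      --list) echo '{"web": {"hosts": ["web01","web02"]}, "_meta": {"hostvars": {}}}' ;;
      --host) echo '{}' ;;   # per-host vars (already covered by _meta above)
    esac
  Usage:
    $ chmod +x inventory.sh ; ansible -i ./inventory.sh web -m ping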
Fact Caching
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#fact-caching]

- To benefit from cached facts, set gather_facts to False in most plays.

- Ansible ships with two persistent cache plugins: redis and jsonfile.

- To configure fact caching using redis, enable it in ansible.cfg as follows:
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
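- To use the bundled jsonfile plugin instead (no Redis needed; the cache
  path below is an arbitrary choice):
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400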
AWX GUI
@[https://www.howtoforge.com/ansible-awx-guide-basic-usage-and-configuration/]

- AWX is an open source web application that provides a user interface, REST API,
  and task engine for Ansible. It is the open source upstream of Ansible Tower.
  AWX lets you manage Ansible playbooks and inventories, and schedule jobs to
  run from the web interface.

- How to Run and Schedule Ansible Playbook Using AWX GUI
@[https://www.linuxtechi.com/run-schedule-ansible-playbook-awx-gui/]
Puppet
Puppet 101
REF:
@[https://blogs.sequoiainc.com/puppet-101-part-1/]
@[https://blogs.sequoiainc.com/puppet-101-part-2/]

master/agent architecture:
- PuppetºMasterº: - server holding all the configuration.

- PuppetºAgent º: - Installed on each "target" server,    ºAgent Certificateº: ← - signed by Master's CA.
                    runs @ regular intervals:              ─────────────────     - Used for secure network
                  - Query desired state and if needed      -ºnode-nameº            communic between Master←→Agent
                    (configuration drift) update state.      ^
                                                             |
                                                  Ex. web01.myDomain.com (wildcards allowed)
                                                  Assigning/managing node names Rºcan be trickyº
                                                  in the cloud since DNS names change frequently.
┌─────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────┐
│OºRESOURCEº                                              │ BºCLASSESº                                                 │
│(Concrete resource that must be present in server)       │ - A Class is a group of Resources that                     │
│   ┌─── user/file/package/...that must be present in     │   belong together conceptually,                            │
│   │    server (or custom resource)                      │   fulfilling a given installation-                         │
│   v                         │  Ex:                      │   -requirement role.                                       │
│OºTYPEº{ TITLE ← must unique │Oºuserº{ 'jbar':           │ - variables can be defined to customize                    │
│      ATTRIBUTE, per Node    │   ensure  =˃ present,     │   target environments.                                     │
│      ATTRIBUTE,             │   home    =˃ '/home/jbar',│   (test,acceptance,pre,pro,..)                             │
│      ATTRIBUTE,             │   shell   =˃ '/bin/bash', │ - inheritance is allowed to save                           │
│      ...                    │  }                        │   duplicated definition                                    │
│   }  ^                          ^          ^            │                                                            │
│      |                          |          |            │                       │ Ex:                                │
│      key =˃ value              key       value          │ class BºCLASS_NAMEº { │ class Bºusersº {                   │
│                                                         │     RESOURCE          │     user { 'tomcat':               │
│$ puppet resource Oºuserº                                │     RESOURCE          │         ensure   =˃ present,       │
│         ^^^^^^^^^^^^^^^                                 │ }                     │         home     =˃ '/home/jbauer',│
│         Returns all users                               │                       │         shell    =˃ '/bin/bash',   │
│         (not just those configured/installed by Puppet) │                       │     }                              │
│         (same behaviour applies to any other resource)  │                       │     user { 'nginx':                │
│                                                         │                       │         ...                        │
│                                                         │                       │     }                              │
│                                                         │                       │     ...                            │
│                                                         │                       │ }                                  │
│                                                         │                       │ include Bºusersº                   │
│                                                         │                       │ ^^^^^^^^^^^^^^^^                   │
│                                                         │                       │ ºDon't forgetº. Otherwise class is │
│                                                         │                          ignored                           │
├─────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────┤
│QºNODE ("Server")º                                       │                                                            │
│- bundle of: [ class1, class2, .. , resource1,  ...]     │           QºNODEº 1 ←─────────→ NBºClassº                  │
│                                                         │                  1                1                        │
│                        must match Agent-Certificate.name│                   \              /                         │
│     SYNTAX               │Ex:     ┌───────┴────────┐    │                    \            /                          │
│node Q"NAME" {            │node Qº"web01.myDomain.com"º {│                     N          N                           │
│    include BºCLASS01º    │                              │                     OºResourceº                            │
│    include BºCLASS02º    │    include Bºtomcatº         │                                                            │
│    include Bº...º        │    include Bºusersº          │                                                            │
│    include OºRESOURCE01º │                              │ YºMANIFESTº: 0+QºNODEsº, 0+BºClassesº, 0+OºResourcesº      │
│    include OºRESOURCE02º │  Oºfileº{ '/etc/app.conf'    │                                                            │
│    include Oº...º        │        ...                   │ GºMODULEº: 1+Manifests, 0+supporting artifacts             │
│}                         │    }                         │            ^                                               │
│                          │}                             │ ($PUPPET/environments/$ENV/"module"/manifest/init.pp )     │
│                                                         │                                                            │
│The special name Qº"default"º will be applied to any     │  ºSITE MANIFESTº: Separated Manifests forming the catalog  │
│server (used for example to apply common security,       │                   (Read by the Puppet Agent)               │
│packages,...)                                            │                                                            │
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┘
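
To experiment with Resources/Classes locally without a Master, puppet apply
evaluates a manifest on the spot (sketch; 'users.pp' is an arbitrary file name):
  $ puppet apply users.pp            # apply the manifest locally
  $ puppet apply --noop users.pp     # dry-run: report changes without applying
  $ puppet parser validate users.pp  # syntax-check only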

┌─────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────┐
│YºMANIFESTSº:                                            │GºMODULESº                                                   │
│*.pp file defining OºResourcesº, BºClassesº and QºNodesº │- reusable bundle of [ 1+ Manifests , "support file list" ]  │
│                                                         │- installed on Puppet Master                                 │
│ ┌────────────────────────────────────────────────       │  (can also be installed from a central repository using     │
│ │example.pp Manifest:                                   │   $ puppet module ... )                                     │
│ │ // variable declarations, logic constructs, ...       │- referenced by name in other Modules or in Manifests.       │
│ │                                                       │- Layout of                                                  │
│ │                                                       │  ${PUPPET}/environments/${ENVIRONMENT}/modules/ºmodule01º   │
│ │Oºuser{ 'jbauer':º                                     │                                name must                    │
│ │      ensure      =˃ present,                          │ ºmodule01º  ←───────────────── match                        │
│ │      home        =˃ '/home/jbauer',                   │  ├─ manifests                  vvvvvvvv                     │
│ │      shell       =˃ '/bin/bash',                      │  │  ├ºinit.ppº ←········ classºmodule01º{  │                │
│ │  }                                                    │  │  │                      ...             │                │
│ │                                                       │  │  │                    }                 │                │
│ │Bºclass 'security'º{                                   │  │  │                                      │                │
│ │      ...                                              │  │  ├ class01.pp (opt)←· class class01 {   │                │
│ │  }                                                    │  │  │                       ...                             │
│ │                                                       │  │  │                    }                                  │
│ │  include security                                     │  │  └ ...                      ^                            │
│ │                                                       │  │                       module01@init.pp   can be used as  │
│ │Bºclass 'tomcat'º{                                     │  ├─ files        (opt)   include module01                   │
│ │  }                                                    │  ├─ templates    (opt)   class01@class01.pp can be used as  │
│ │                                                       │  ├─ lib          (opt)   include module01::class01          │
│ │Qºnodeº'web01.example.com' {                           │  ├─ facts.d      (opt)   Retrieve storage,CPU,...before  ←─┐│
│ │      includeBºtomcatº                                 │  │                       exec. the catalog                 ││
│ │      ...                                              │  │@[https://puppet.com/docs/puppet/latest/core_facts.html] ││
│ │                                                       │  ├─ examples     (opt)                                     ││
│ │  }                                                    │  └─ spec         (opt)                                     ││
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┼┘
                                                                               ┌───────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────────────────┐  Example custom "facter":
│YºSITE (MAIN) MANIFESTº                                                    │  $ cat ./modules/basic/facts/lib/facter/common.rb
│- area of Puppet configurationºseparated from Modulesº.                    │  → Facter.add("hostnamePart01") do
│- By default, all Manifests contained in                                   │  →   setcode do
│    º${PUPPET}/environments/${ENVIRONMENT}/manifestsº                      │  →     h = Facter.value(:hostname)
│  (vs ${PUPPET}/environments/${ENVIRONMENT}/modules/mod1...                │  →     h_a = h.split("-")[0].tr("0-9", "").chomp
│      ${PUPPET}/environments/${ENVIRONMENT}/modules/mod2...)               │  →   end
│- Its content is concatenated and executed as the Site Manifest.           │  → end
│-ºstarting pointºfor calculating the ºPUPPET catalogº ,                    │  → ...
│  i.e., the "sum total of applicable configuration" for a node.            │  →
│- This is the information queried by the Puppet Agent installed on each    │
│  "satellite" server.                                                      │
│  - any standalone Resource or Class declarations is automatically applied │
│  - matching Nodes (Node_name vs Agent Certificate Name) are also applied  │
└───────────────────────────────────────────────────────────────────────────┘

ºADVANCED TOPICSº (TODO)
 - controlling the order of Resource execution (see the sketch after this list)
 - transient cloud servers
 - auto-signing and node name wildcards
 - ...
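
 Resource-ordering sketch for the first topic above, using the standard
 require/before/notify/subscribe metaparameters (resource names illustrative):

 package { 'nginx':
   ensure => present,
 }
 file { '/etc/nginx/nginx.conf':
   ensure  => file,
   source  => 'puppet:///modules/nginx/nginx.conf',
   require => Package['nginx'],   # apply only after the package is installed
   notify  => Service['nginx'],   # restart the service when this file changes
 }
 service { 'nginx':
   ensure => running,
   enable => true,
 }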
Bolt
Open-source orchestration tool that automates the manual work of maintaining
infrastructure. For example, you can use Bolt to patch and update systems,
troubleshoot servers, deploy applications, or stop and restart services.
Bolt is installed on your local workstation and connects directly to remote
targets over SSH or WinRM, so no agent software is required on the targets.
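Representative invocations (target names and files are hypothetical):
$ bolt command run 'systemctl restart nginx' --targets web01,web02 --user admin
$ bolt script run ./patch.sh --targets @hosts.txt  # run a local script on targets listed in a file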
Vagrant
External Links
- Vagrant Docs
- CLI Reference

- Getting Started
- Providers list
- Boxes Search
- Networking
Boxes
Instead of building a virtual machine from scratch, which would be a
slow and tedious process, Vagrant uses a base image to quickly clone a
virtual machine. These base images are known as "boxes" in Vagrant, and
specifying the box to use for your Vagrant environment is always the first
step after creating a new Vagrantfile.
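
Typical box workflow (ubuntu/xenial64 is a public-catalog box also used in the
cluster example below):
$ vagrant box add ubuntu/xenial64    # download/cache the base image once
$ vagrant box list                   # show locally cached boxes
$ vagrant init ubuntu/xenial64       # write a Vagrantfile using that box
$ vagrant up                         # clone and boot a VM from the box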

Command List
vagrant "COMMAND" -h
$ vagrant  # Most frequently used commands                                   | $ vagrant list-commands # (including rarely used command)
Usage: vagrant [options] <command> [<args>]                                   |
Common commands:                                                             |
box           manages boxes: installation, removal, etc.                     | box             manages boxes: installation, removal, etc.
destroy       stops and deletes all traces of the vagrant machine            | cap             checks and executes capability
global-status outputs status Vagrant environments for this user              | destroy         stops and deletes all traces of the vagrant machine
halt          stops the vagrant machine                                      | docker-exec     attach to an already-running docker container
help          shows the help for a subcommand                                | docker-logs     outputs the logs from the Docker container
init          initializes a new Vagrant environment by creating a Vagrantfile| docker-run      run a one-off command in the context of a container
login         log in to HashiCorp's Vagrant Cloud                            | global-status   outputs status Vagrant environments for this user
package       packages a running vagrant environment into a box              | halt            stops the vagrant machine
plugin        manages plugins: install, uninstall, update, etc.              | help            shows the help for a subcommand
port          displays information about guest port mappings                 | init            initializes a new Vagrant environment by creating a Vagrantfile
powershell    connects to machine via powershell remoting                    | list-commands   outputs all available Vagrant subcommands, even non-primary ones
provision     provisions the vagrant machine                                 | login           log in to HashiCorp's Vagrant Cloud
push          deploys code in this environment to a configured destination   | package         packages a running vagrant environment into a box
rdp           connects to machine via RDP                                    | plugin          manages plugins: install, uninstall, update, etc.
reload        restarts vagrant machine, loads new Vagrantfile configuration  | port            displays information about guest port mappings
resume        resume a suspended vagrant machine                             | powershell      connects to machine via powershell remoting
snapshot      manages snapshots: saving, restoring, etc.                     | provider        show provider for this environment
ssh           connects to machine via SSH                                    | provision       provisions the vagrant machine
ssh-config    outputs OpenSSH valid configuration to connect to the machine  | push            deploys code in this environment to a configured destination
status        outputs status of the vagrant machine                          | rdp             connects to machine via RDP
suspend       suspends the machine                                           | reload          restarts vagrant machine, loads new Vagrantfile configuration
up            starts and provisions the vagrant environment                  | resume          resume a suspended vagrant machine
validate      validates the Vagrantfile                                      | rsync           syncs rsync synced folders to remote machine
version       prints current and latest Vagrant version                      | rsync-auto      syncs rsync synced folders automatically when files change
                                                                             | snapshot        manages snapshots: saving, restoring, etc.
                                                                             | ssh             connects to machine via SSH
                                                                             | ssh-config      outputs OpenSSH valid configuration to connect to the machine
                                                                             | status          outputs status of the vagrant machine
                                                                             | suspend         suspends the machine
                                                                             | up              starts and provisions the vagrant environment
                                                                             | validate        validates the Vagrantfile
                                                                             | version         prints current and latest Vagrant version
Quick Setup
$ mkdir vagrant_getting_started
$ cd vagrant_getting_started
$ vagrant init # creates new Vagrantfile
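Typical lifecycle from there (assuming a box was set in the Vagrantfile):
$ vagrant up        # boot (and provision on first run)
$ vagrant ssh       # open a shell inside the guest
$ vagrant halt      # power off, keep the disk
$ vagrant destroy   # delete the VM entirely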
3-VM VirtualBox Cluster Example
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Use the same key for each machine
  config.ssh.insert_key = false

  config.vm.define "vagrant1" do |vagrant1|
    vagrant1.vm.box = "ubuntu/xenial64"
    vagrant1.vm.network "forwarded_port", guest: 80, host: 8080
    vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
  end
  config.vm.define "vagrant2" do |vagrant2|
    vagrant2.vm.box = "ubuntu/xenial64"
    vagrant2.vm.network "forwarded_port", guest: 80, host: 8081
    vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
  end
  config.vm.define "vagrant3" do |vagrant3|
    vagrant3.vm.box = "ubuntu/xenial64"
    vagrant3.vm.network "forwarded_port", guest: 80, host: 8082
    vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
  end
end

# Variant: same 3-VM cluster plus per-VM memory, a private network, and
# shell provisioning through a custom bootstrap.sh script:

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # Use the same key for each machine
  config.ssh.insert_key = false

  config.vm.define "vagrant1" do |vagrant1|
    vagrant1.vm.box = "ubuntu/xenial64"
    vagrant1.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--memory", 1024]
    end
    vagrant1.vm.network "forwarded_port", guest: 80, host: 8080
    vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
    vagrant1.vm.network "private_network", ip: "192.168.0.1"
    # Provision through custom bootstrap.sh script
    # (scoped to this VM; 'config.vm.provision' here would apply to all VMs)
    vagrant1.vm.provision :shell, path: "bootstrap.sh"
  end
  config.vm.define "vagrant2" do |vagrant2|
    vagrant2.vm.box = "ubuntu/xenial64"
    vagrant2.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--memory", 2048]
    end
    vagrant2.vm.network "forwarded_port", guest: 80, host: 8081
    vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
    vagrant2.vm.network "private_network", ip: "192.168.0.2"
  end
  config.vm.define "vagrant3" do |vagrant3|
    vagrant3.vm.box = "ubuntu/xenial64"
    vagrant3.vm.provider :virtualbox do |v|
      v.customize ["modifyvm", :id, "--memory", 2048]
    end
    vagrant3.vm.network "forwarded_port", guest: 80, host: 8082
    vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
    vagrant3.vm.network "private_network", ip: "192.168.0.3"
  end
end
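
The variant above provisions vagrant1 with a custom bootstrap.sh; a minimal
sketch of what such a script might contain (package choice is arbitrary):
  #!/usr/bin/env bash
  # bootstrap.sh - runs inside the guest as root on first 'vagrant up'
  set -e
  apt-get update
  apt-get install -y nginx   # example payload; replace as needed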
Terraform
External Links
- @[https://learn.hashicorp.com/terraform/getting-started/install.html]
- @[https://www.terraform.io/intro/use-cases.html]
  - Heroku App Setup
  - Multi-Tier Applications
  - Self-Service Clusters
  - Software Demos
  - Disposable Environments
  - Software Defined Networking
  - Resource Schedulers
  - Multi-Cloud Deployment
Terraform 101
$ mkdir project01
$ cd project01
$ vim libvirt.tf            Oº# ← STEP 1:º Create tf file like:
   # Alt 1: Local kvm
 Oºprovider "libvirt" {    º         ← kvm/libvirt provider (Check KVM setup for more info)
 Oº  uri = "qemu:///system"º
 Oº}                       º

   # Alt 2: Remote provider
   #provider "libvirt" {
   #  alias = "server2"
   #  uri   = "qemu+ssh://root@192.168.100.10/system"
   #}
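   # A resource opts into the aliased remote provider explicitly (sketch):
   #   resource "libvirt_volume" "remote_vol" {
   #     provider = "libvirt.server2"
   #     ...
   #   }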

   resource "libvirt_volume" "centos7-qcow2" {
     name = "centos7.qcow2"
     pool = "default"
     source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
     #source = "./CentOS-7-x86_64-GenericCloud.qcow2"
     format = "qcow2"
   }

# Adding default user STEP 1 {{{
   # Our centos qcow2 does not provide any default user/pass to allow logins.
   #ºlibvirt_cloudinit_disk resourceº is used to "bootstrap" user data to
   # the instance.

   data "template_file" "user_data" {
     template = "${file("${path.module}/cloud_init.cfg")}"
   }

   # Use CloudInit to bootstrap the instance
  ºresource "libvirt_cloudinit_disk"º"commoninit" {
     name = "commoninit.iso"
     user_data      = "${data.template_file.user_data.rendered}"
   }
# }}}

   resource "libvirt_domain" "db1" {
     name   = "db1"
     memory = "1024"
     vcpu   = 1

     network_interface {
       network_name = "default"
     }

     disk {
       volume_id = "${libvirt_volume.centos7-qcow2.id}"
     }

# Adding default user STEP 2 {{{
     cloudinit = "${libvirt_cloudinit_disk.commoninit.id}"
# }}}

     console {
       type = "pty"
       target_type = "serial"
       target_port = "0"
     }

     graphics {
       type = "spice"
       listen_type = "address"
       autoport = true
     }
   }





# Output Server IP
output "ip" {
  value = "${libvirt_domain.db1.network_interface.0.addresses.0}"
}
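
$ terraform output ip       # ← after apply: read back just this output value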


$ vim cloud_init.cfg        Oº# ← STEP 1.2:º Create cloud_init.cfg file
  #cloud-config                               Needed when (qcow2) image doesn't
  # vim: syntax=yaml                          provide initial user/pass to log in
  #
  # For more examples see:
  #   https://cloudinit.readthedocs.io/en/latest/topics/examples.html
  ssh_pwauth: True
  chpasswd:
    list: |
       root:StrongPassword
    expire: False

  users:
    - name: jmutai # Change me
      ssh_authorized_keys:
        - ssh-rsa AAAAXX # Change me
      sudo: ['ALL=(ALL) NOPASSWD:ALL']
      shell: /bin/bash
      groups: wheel

  # The above will:
  # - set the root password to StrongPassword
  # - add a user named jmutai with the specified public SSH key
  # - add the user to the wheel group, allowed to run sudo without a password


$ terraformºinitº           Oº# ← STEP 2:º Init
→ Initializing provider plugins…
→ Terraform has been successfully initialized!
→ You may now begin working with Terraform. Try runningº"terraform plan"ºto see any changes
→ that are required for your infrastructure. All Terraform commands should now work.
→
→ If you everºset or change modulesºor backend configuration for Terraform,
→ ºrerun this command to reinitializeºyour working directory. If you forget, other
→ commands will detect it and remind you to do so if necessary.



$ terraformºplanº           Oº# ← STEP 3:º Check needed changes
                              #           Sort of a "dry-run"
→  Refreshing Terraform state in-memory prior to plan...
→  ....
→  An execution plan has been generated ...
→  ...
→    # libvirt_domain.db1 will be created
→    + resource "libvirt_domain" "db1" {
→        + ...
→        +ºid     º    = (known after apply)
→        +ºmachineº    = (known after apply)
→        + memory      = 1024
→        + ...
→        + console { ...  }
→        + disk {  ...  }
→
→        + graphics {
→            + autoport       = true
→            + listen_address = "127.0.0.1"
→            + listen_type    = "address"
→           º+ type           = "spice"º
→          }
→
→        + network_interface { ...  }
→      }
→
→    # libvirt_volume.centos7-qcow2 will be created
→    + resource "libvirt_volume" "centos7-qcow2" {
→        + ...
→        + pool   = "default"
→        + size   = (known after apply)
→        + source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
→      }
RºNote:º You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

$ terraformºapplyº           Oº# ←  STEP 4:º "Execute" plan
→ libvirt_volume.centos7-qcow2: Creating...
→   format: "" =˃ "qcow2"
→   ...
→ libvirt_volume.centos7-qcow2:ºCreation completeºafter 8s (ID:º/var/lib/libvirt/images/db.qcow2º)
→ libvirt_domain.db1: Creating...
→   arch:                             "" =˃ ""
→   ...
→  ºrunning:                          "" =˃ "true"º
→   vcpu:                             "" =˃ "1"
→ libvirt_domain.db1:ºCreation completeºafter 0s (ID: e5ee28b9-e1da-4945-9eb0-0cda95255937)
→ Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

$ sudoºvirsh  listº                  Oº# ← STEP 5.1:º Post creation check
 Id   Name   State
----------------------
 ...
 7   ºdb1    runningº

$ sudoºvirsh net-dhcp-leases defaultº  # ←  OºSTEP 5.2:º Post creation check

 Expiry        MAC      Protocol IP       Hostname Client ID
 Time          address           address           or DUID
--------------------------------------------------------------
 .. 16:11:18   52:54:.. ipv4     192...      -     -
 .. 15:30:18   52:54:.. ipv4     192...     rhel8  ff:61:..:d1

$  ping -c 1 192....                Oº# ← STEP 5.3:º Post creation check

Destroy
$ cd .../project01
$ terraformºdestroyº


KVM setup
REF: @[https://computingforgeeks.com/how-to-provision-vms-on-kvm-with-terraform/]
RºWARN:ºKVM/libvirt provider is NOT officially supported by Hashicorp
        Maintained by Duncan Mac-Vicar P and others.

ºKVM PRE-SETUP:º
  step 1: Install the KVM hypervisor
          (consult your Linux distro docs)
  step 2: start+enable libvirtd to verify the install from step 1:
  $ sudo systemctl start libvirtd
  $ sudo systemctl enable libvirtd

  step 3: (Debian/Ubuntu/...?)
  $ sudo modprobe vhost_net  ← Enable vhost-net kernel module
  $ echo vhost_net | sudo tee -a /etc/modules

ºTerraform KVM  provider:º
(libvirt actually):
  Step 3: Install

$ mkdir -p ~/.terraform.d/plugins # ← This dir will store Terraform Plugins.
$ cp terraform-provider-libvirt  ~/.terraform.d/plugins
     ^^^^^^^^^^^^^^^^^^^^^^^^^^
     Downloaded from:
@[https://github.com/dmacvicar/terraform-provider-libvirt/releases]
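
After copying, the plugin binary must be executable; then re-run init in the
project dir so Terraform discovers it (sketch):
$ chmod +x ~/.terraform.d/plugins/terraform-provider-libvirt
$ cd .../project01 ; terraform init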



KVM Troubleshooting
- Error similar to "Can not read /var/lib/libvirt/images/...qcow2" on Ubuntu 18.04.
  Looks to be related to AppArmor according to:
 @[https://github.com/jedi4ever/veewee/issues/996#issuecomment-497976612]
 @[https://github.com/dmacvicar/terraform-provider-libvirt/issues/97]
 """ ... For testing purposes, I simply edit º/etc/libvirt/qemu.confº setting:
 security_driver = "none"
 """
cloud-init
@[https://cloudinit.readthedocs.io/en/latest/]
"""industry standard multi-distribution method for
  cross-platform cloud instance initialization. It is supported across all major
  public cloud providers, provisioning systems for private cloud infrastructure,
  and bare-metal installations."""

Cloud-init has support across all major Linux distributions and FreeBSD:
Ubuntu ,SLES/openSUSE ,RHEL/CentOS ,Fedora ,Gentoo Linux ,Debian ,ArchLinux ,FreeBSD

Clouds

supported public clouds:
    AWS, Azure, GCP, Oracle Cloud, Softlayer, Rackspace Pub.Cloud,
    IBM Cloud, Digital Ocean, Bigstep, Hetzner, Joyent, CloudSigma,
    Alibaba Cloud, OVH ,OpenNebula ,Exoscale ,Scaleway ,CloudStack,
    AltCloud, SmartOS

supported private clouds:
    Bare metal installs , OpenStack , LXD, KVM, Metal-as-a-Service (MAAS)
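
Minimal #cloud-config sketch beyond the user/password example above
(standard modules only; package choice is arbitrary):
  #cloud-config
  package_update: true
  packages:
    - htop
  runcmd:
    - echo "provisioned by cloud-init" > /etc/motd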
Regula
@[https://www.helpnetsecurity.com/2020/01/16/fugue-regula/]
Fugue open sources Regula to evaluate Terraform for security misconfigurations 
and compliance violations
QA/Testing
Kayenta Canary Testing
@[https://github.com/spinnaker/kayenta]
- Kayenta platform:  Automated Canary Analysis (ACA)
SonarQube (QA)
Applies quality metrics to source code.
Selenium
Browser test automation.
Charles Proxy
@[https://www.charlesproxy.com/]
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a 
developer to view all of the HTTP and SSL / HTTPS traffic between their machine 
and the Internet. This includes requests, responses and the HTTP headers (which 
contain the cookies and caching information).