External Links - @[https://git-scm.com/book/en/v2] - @[https://learnxinyminutes.com/docs/git/] - @[https://learngitbranching.js.org/?demo] - Related: See UStore: Distributed Storage with rich semantics!!! @[https://arxiv.org/pdf/1702.02799.pdf]
Who-is-who (Necessarily incomplete but still quite pertinent list of core people and companies) - Linus Torvalds: L.T. initiated the project to fix problems with the distributed development of the Linux kernel. - Junio C. Hamano: lead Git maintainer (8700+ commits) @[https://git-blame.blogspot.com/]
"Full Journey" Bº#########################################################################º Bº# Non-normative Git server setup for N projects with M teams of L users #º Bº#########################################################################º · CONTEXT: · ssh access has been enabled to server (e.g: $º$ sudo apt install openssh-server º ) · Ideally ssh is protected. See for example: @[../DevOps/linux_administration_summary.html#knockd_summary] (Alternatives include using GitHub,GitLab,BitBucket, AWS/Scaleway/Azure/... ) · We want to configure linux users and groups to match a "permissions layout" similar to: GIT_PROJECT1 ···→ Linux Group teamA ····→ R/W permssions to /var/lib/teamA/project01 *1 GIT_PROJECT2 ···→ Linux Group teamA ····→ R/W permssions to /var/lib/teamA/project02 └───────┬────────┘ alice, bob,... GIT_PROJECT3 ···→ Linux Group teamB ····→ R/W permssions to /var/lib/teamB/project03 GIT_PROJECT4 ···→ Linux Group teamB ····→ R/W permssions to /var/lib/teamB/project04 └───────┬────────┘ franc, carl, ... GIT_PROJECT5 ···→ ... *1: setup /var/lib/teamA/project01 like: $º$ sudo mkdir -p /var/lib/teamA/project01 º ← create directory $º$ cd /var/lib/teamA/project01 º $º$ sudo git init --bare º ← INIT NEW GIT BARE DIRECTORY !!!! (GIT OBJECT DATABASE for commits/trees/blogs, ...) $º$ DIR01=/var/lib/teamA/project01 º $º$ sudo find ${DIR01} -exec chown -R nobody:teamA {} \; º ← Fix owner:group_owner for dir. recursively $º$ sudo find ${DIR01} -type d -exec chmod g+rwx {} \; º ← enable read/wr./access perm. for dirs. $º$ sudo find ${DIR01} -type f -exec chmod g+rw {} \; º ← enable read/write perm. for files Finally add the desired linux-users to the 'teamA' linux-group at will. More info at: @[../DevOps/linux_administration_summary.html#linux_users_groups_summary] ) Bº######################################################################º Bº# Non-normative ssh client access to Git remote repository using ssh #º Bº######################################################################º • PRE-SETUP) Edit ~/.bashrc to tune the ssh options for git ading lines similar to: + GIT_SSH_COMMAND="" + GIT_SSH_COMMAND="${GIT_SSH_COMMAND} -oPort=1234 " ← connect to port 1234 (22 by default) + GIT_SSH_COMMAND="${GIT_SSH_COMMAND} -i ~/.ssh/privKeyServer.pem " ← private key to use when authenticating to server + GIT_SSH_COMMAND="${GIT_SSH_COMMAND} -u myUser1 " ← ssh user and git user are the same when using ssh. + GIT_SSH_COMMAND="${GIT_SSH_COMMAND} ..." ← any other suitable ssh options (-4, -C, ...) Optionally add your GIT URL like: $º+ export GIT_URL="myRemoteSSHServer" º $º+ export GIT_URL="${GIT_URL}/var/lib/my_git_team º ← Must match path in server (BASE_GIT_DIR) $º+ export GIT_URL="${GIT_URL}/ourFirstProject" º ← Must match name in server (PROJECT_NAME) • PRE-SETUP) ModifyºPS1 promptº(Editing $HOME/.bashrc) to look like: PS1="\h[\$(git branch 2˃/dev/null | grep ^\* | sed 's/\*/branch:/')]" $( ... ) == exec. ... as script. └───────────── ºshow git branchº ───────────────────┘ + Assign STDOUT to var. export PS1="${PS1}@\$(pwd |rev| awk -F / '{print \$1,\$2}' | rev | sed s_\ _/_) \$ " (bash "syntax sugar") └────────────── show current and parent dir. only ────────┘ host1 $ ← PROMPT BEFORE host01[branch: master]@dir1/dir2 ← PROMPT AFTER • Finally clone the repo like: $º$ git clone GºmyUser1º@${GIT_URL} º ← execution will warn about cloning empty directory. $º$ cd ourFirstProject º ← move to local clone. $º$ ... º ← Continue with standards git work flows. $º$ git add ... º $º$ git commit ... 
º Bº#############################º Bº# COMMON (SIMPLE) GIT FLOWS #º Bº#############################º ┌─ FLOWS 1: (Simplest flow) no one else pushed changes before our push.──────────────────────────────────── │ ºNO CONFLICTS CAN EXISTS BETWEEN LOCAL AND REMOTE WORKº │ local ─→ git status ─→ git add . ─→ºgit commitº··································→ºgit push \º │ edit └───┬────┘ └───┬────┘ └───┬────┘ ºorigin featureXº │ │ add file/s │ └─────┬─────────┘ │ display changes to next commit │ push to featureX branch │ pending to commit commit new file history at remote repository. ┌─ FLOWS 2: someone else pushed changes before us, ─────────────────────────────────────────────────────── │ BUT THERE ARE NO CONFLICTS (EACH USER EDITED DIFFERENT FILES) │ local → git status → git add . →ºgit commitº─→ git pull ·························→ºgit push \º │ edit └───┬───┘ ºorigin featureXº │ • if 'git pull' is ommitted before 'git push', git will abort warning about remote changes │ conflicting with our local changes. 'git pull' will download remote history and since │ different files have been edited by each user, an automatic merge is done (local changes │ + any other user's remote changes). 'git pull' let us see other's people work locally. ┌─ FLOW 3: someone else pushed changes before our push, ────────────────────────────────────────────────── │ BUT THERE ARE CONFLICTS (EACH USER EDITED ONE OR MORE COMMON FILES) │ local → git status → git add . →ºgit commitº─→ git pull ┌→ git add → git commit →ºgit push \º │ edit ↓ ↑ └──┬──┘ ºorigin featureXº │ "fix conflicts" mark conflicts as │ └─────┬──────┘ resolved │ manually edit conflicting changes (Use git status to see conflicts) ┌─ FLOW 4: Amend local commit ───────────────────────────────────────────────────────────────────────────── │ local → git status → git add . →ºgit commitº → git commit ─amend ...→ git commit →ºgit push \º │ edit ºorigin featureXº ┌─ GIT FLOW: Meta-flow usingºWIDELY ACCEPTED BRANCHES RULESº ────────────────────────────────────────────── │ to manage common issues when MANAGING AND VERSIONING SOFTWARE RELEASES │ ┌───────────────────┬──────────────────────────────────────────────────────────────────────────────────┐ │ │ Standarized │ INTENDED USE │ │ │ branch names │ │ │ ├───────────────────┼──────────────────────────────────────────────────────────────────────────────────┤ │ │ feature/... │ Develop new features here. Once developers /QA tests are "OK" with new code │ │ │ │ merge back into develop. If asked to switch to another task just commit changes │ │ │ │to this branch and continue later on. │ │ ├───────────────────┼──────────────────────────────────────────────────────────────────────────────────┤ │ │develop │ RELEASE STAGING AREA: Merge here feature/... completed features NOT YET been │ │ │ │ released in other to make them available to other dev.groups. │ │ │ │ Branch used for QA test. │ │ ├───────────────────┼──────────────────────────────────────────────────────────────────────────────────┤ │ │release/v"X" │ stable (release tagged branch). X == "major version" │ │ ├───────────────────┼──────────────────────────────────────────────────────────────────────────────────┤ │ │hotfix branches │ BRANCHES FROM A TAGGED RELEASE. Fix quickly, merge to release and tag in release │ │ │ │ with new minor version. (Humor) Never used, our released software has no bugs │ │ └───────────────────┴──────────────────────────────────────────────────────────────────────────────────┘ │ │ • master branch ← "Ignore". 
│ ├─ • develop (QA test branch) ·················• merge ·················· • merge ┐ ...• merge ┐ │ │ │ ↑ feat.1 ↑ · ↑ · │ │ ├→ • feature/appFeature1 • commit • commit ─┘ ························│·······↓ ... ┘ · │ │ │ (git checkout -b) │ · · │ │ │ │ · · │ │ ├→ • feature/appFeature2 • commit • commit • commit • commit • commit ─┘ · ┌··········┘ │ │ · ·(QA test "OK") │ │ ┌··········←·(QA Test "OK" in develop, ready for release)··┘ · │ │ ... v v │ ├─ • release/v1 ·······• merge⅋tag ┐ • merge⅋tag ┐ • merge⅋tag ······• merge⅋tag │ │ v1.0.0 · ↑ v1.0.1 · ↑ v1.0.2 v1.1.0 │ │ *1 ↓ · *1 · · *1 *1 │ │ └ hotfix1 • └ hotfix2 • │ │ (git checkout -b) │ ├─ • release/v2 ···· *1 Each merge into release/v"N" branch can trigger deployments to acceptance and/or production enviroments triggering new Acceptance tests. Notice also that deployments can follow different strategies (canary deployments to selected users first, ...).
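 • A minimal command sketch of the flow in the diagram above (branch and tag names follow the diagram
   and are only examples):
   $ git checkout develop
   $ git checkout -b feature/appFeature1            ← start a new feature from develop
   $ git commit -am "..."                           ← ... work, commit, repeat ...
   $ git checkout develop
   $ git merge --no-ff feature/appFeature1          ← merge back once dev/QA tests are "OK"
   $ git checkout -b release/v1 develop             ← cut the release branch
   $ git tag -a v1.0.0 -m "release v1.0.0"          ← tag the release (see TAGS recipes below)
   $ git checkout -b hotfix1 v1.0.0                 ← hotfix branches start from a tagged release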
RECIPES ºPLAYING WITH BRANCHESº $ git checkout newBranch ← swith to local branch (use -b to create if not yet created) $ git branch -av ← List (-a)ll existing branches $ git branch -d branchToDelete ← -d: Delete branch $ git checkout --track "remote/branch" ← Create new tracking branch (TODO) $ git checkout v1.4-lw ← Move back to (DETACHED) commit. ('git checkout HEAD' to reattach) $ git remote update origin --prune ← Update local branch to mirror remote branches in 'origin' ºVIEW COMMIT/CHANGES HISTORYº $ git log -n 10 ← -n 10. See only 10 last commits. $ git log -p path_to_file ← See log for file with line change details (-p: Patch applied) $ git log --all --decorate \ ← PRETTY BRANCH PRINT Alt.1 --oneline --graph REF @[https://stackoverflow.com/questions/1057564/pretty-git-branch-graphs] $ git log --graph --abbrev-commit \ ← PRETTY BRANCH PRINT Alt.2 --decorate --date=relative --all ºUPDATE ALL REMOTE BRANCHES TO LOCAL REPOº (REF: https://stackoverflow.com/questions/10312521/how-to-fetch-all-git-branches= for remote in `git branch -r`; # ← add remote branches on server NOT yet tracked locally do git branch \ # (pull only applies to already tracked branches) --track ${remote#origin/} $remote; done $ git fetch --all # ← == git remote update. # updates local copies of remote branches # probably unneeded?. pull already does it. # It is always SAFE BUT ... # - It will NOT update local branches (tracking remote branches) # - It will NOT create local branches (tracking remote branches) $ git pull --all # ← Finally update all tracked branches. ºTAGS:º $ git tag ← List tags → v2.1.0-rc.2 → ... $ git tag -a v1.4 -m "..." 9fceb.. ← Create annotated tag (recomended), stored as FULL OBJECTS. It contains tag author/mail/date, tagging message (-m). can be checksummed and optionally SIGNED/VERIFIED with GPG. if commit ommited (9fceb..) HEAD is used) $ git tag v1.4-lw ← Create lightweight tag ("alias" for commit-checksum) ºSHARING/PUSHING TAGSº RºWARNº: 'git push' does NOT push tags automatically $ git push origin v1.5 ← Share 'v1.5' tag to remote 'origin' repo $ git push origin --tags ← Share all tags $ git tag -d v1.4-lw ← Delete local tag (remote tags will persist) $ git push origin --delete v1.4-lw ← Delete remote tag. Alt 1 $ git push origin :refs/tags/v1.4-lw ← Delete remote tag. Alt 2 └────────────────┴─── ← null value before the colon is being pushed to the remote tag name, effectively deleting it. $ git show-ref --tags ← show mapping tag ←→ commit → 75509731d28d... refs/tags/v2.1.0-rc.2 → 8fc0a3af313d... refs/tags/v2.1.1 → ... ºREVERTING CHANGESº $ git reset --hard HEAD~1 ← revert to last-but-one (~1) local commit (Not yet pushed) (effectively removing last commit from local history) $ git checkout path/fileN ← revert file not yet "git-add"ed or removed from FS to last commited ver. $ git checkout HEAD^ -- path/... ← revert commited file to last-but-one commit version $ git revert ${COMMIT_ID} ← add new commit cancelling changes in $COMMIT_ID. Previous commit is not removed from history. Both are kept on history. $ git clean -n ← Discard new "git-added" files. -n == -dry-run, -f to force $ git reset path/fileA ← Remove from index file "git-added" by mistake (with 'git add .') (probably must be added to .gitignore) $ git checkout N -- path1 ← Recover file at commit N (or tag N) $ git checkout branch1 -- path1 ← Recover file from branch1 origin/branch1 to recover from upstream -vs local- branch. 
ºCLONING REMOTESº $ git clone --depth=1 \ ← Quick/fast-clone (--depth=1) with history truncated to last N commits. ºVery useful in CI/CD tasks.º --single-branch \ ← Clone only history leading to tip-of-branch (vs cloning all branches) (implicit by previous --depth=... option) --branch '1.3' \ ← branch to clone (defaults to HEAD) ${GIT_URL} To clone submodules shallowly, use also --shallow-submodules.
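 • A shallow/single-branch clone can later be widened again. Minimal sketch, assuming 'origin'
   is the remote created by the clone above:
   $ git fetch --unshallow                                      ← download the missing history
   $ git fetch origin '+refs/heads/*:refs/remotes/origin/*'     ← also fetch the branches skipped
                                                                  by --single-branch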
Track code changes REF: @[https://git-scm.com/book/en/v2/Appendix-C:-Git-Commands-Debugging] Methods to track who changed and/or when a change(bug) was introduced include: •ºgit bisectº: find first commit introducing a change(bug, problem, ...) throughºautomatic binary searchº. @[https://git-scm.com/book/en/v2/Git-Tools-Debugging-with-Git#_binary_search] · git-blame helps to find recently introduced bugs. · git-bisect helps find bugs digged many commits down in history. · Ussage example: ºMANUAL BISECT SEARCHº $ git bisect start ← start investigating issue. (run tests) $ git bisect bad ← tell git that current commit is broken (run tests) $ git bisect good v1.0 ← tell git that current commit is OK Bisecting: 6 revisions↲ ← Git counts about 12 commits between left to test after this "good" and "bad" and checks out middle one (run tests) ... b04... is 1st bad commit ← repeat git bisect good/bad until reaching 1st bad commit. $ git bisect reset ← DON'T FORGET: reset after finding commit. ºAUTOMATING BISECT SEARCHº $ git bisect start HEAD v1.0 $ git bisect run test.sh ← test.sh must return 0 for "OK" results non-zero otherwise. •ºgit blameº: annotates lines-of-files with: @[https://git-scm.com/book/en/v2/Git-Tools-Debugging-with-Git#_file_annotation] $ git blame -L 69,82 -C path2file ← display last commit+committer for a given line -C: try to figure out where snippets of code came from (copied/file moved) b8b0618cf6fab (commit_author1 2009-05-26 ... 69) ifeq b8b0618cf6fab (commit_author1 2009-05-26 ... 70) ^1da177e4c3f4 (commit_author2 2005-04-16 ... 71) endif ^ ^^ prefix '^' marks initial commit for line line •ºgit grepº: find any string/regex in any file in any commit, working directory (default) or index. @[https://git-scm.com/book/en/v2/Git-Tools-Searching#_git_grep] • ºMuch faster than standard 'grep' UNIX command!!!º $ git grep -n regex01 ← display file:line matching regex01 in working dir. -n/--line-number: display line number Use -p / --show-functions to display enclosing function º(Quick way to check where something is being called from)º $ git grep --count regex01 ← Summarize file:match_count matching regex01 in working dir. $ git grep ← display file:line matching -n -e '#define' --and \ ← '#define' and \( -e ABC -e DEF \) ( ABC or DEF ) --break --heading \ ← split up output into more readable format "v1.8.0" ← search only in commit with tag "v1.8.0" • Git Log Searching: $ git log -S ABC --oneline ← log only commits changing the number-of-occurrences of "ABC" e01503b commit msg ... Replace -S by -G for REGEX (vs string). ef49a7a commit msg ... • Line history Search: $ git log -L :funABC:file.c ← git will try to figure out what the bounds of function funABC are, then look through history and display every change made. If programming lang. is not supported, regex can be used like: -L '/int funABC/',/^}/:file.c range-of-lines or single-line-number can be used to filter out non interesting results.
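 • Minimal sketch of a 'test.sh' usable with 'git bisect run' (the build/test commands are
   hypothetical placeholders for whatever the project actually runs):
   #!/usr/bin/env sh
   make -s || exit 125          # cannot even build: exit 125 tells bisect to SKIP this commit
   ./run_unit_tests.sh          # exit 0 → commit is "good"; 1-127 (except 125) → commit is "bad"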
Plumbings Summary • Summary extracted from: @[https://alexwlchan.net/a-plumbers-guide-to-git/1-the-git-object-store/] @[https://alexwlchan.net/a-plumbers-guide-to-git/2-blobs-and-trees/] @[https://alexwlchan.net/a-plumbers-guide-to-git/3-context-from-commits/] @[https://alexwlchan.net/a-plumbers-guide-to-git/4-refs-and-branches/] $º$ git init º ← creates an initial layout containing (Alternatively $º$git clone ... ª from existing remote repo ) ✓.git/objects/, ✓ .git/refs/heads/master, ✓ .git/HEAD (pointing to heads/master initially) ✓.git/description (used by UIs), ✓.git/info/exclude (local/non-commited .gitignore), ✓.git/config, ✓.git/hooks/ ~/.git/index ←············ binary file with staging area data (files 'git-added' but not yet commited) Use (porcelain) $º$ git ls-files º to see indexes files (git blobs) ┌─.git/objects/ ( ºGIT OBJECT STOREº ) ─────────┐ $º$ echo "..." ˃ file1.txt º │ │ $º$ git hash─object ─w file1.txtº ┌········→ • /af/3??? •···→ •/a1/12??? │ └───────────┬────────────────┘ · │ │(2nd commit) ┌············ save to Object Store┘content─addressable File System. · │ v v │RºWARNº: Original file name lost. We need to add a mapping · ┌··┬···→ • /5g/8... •···→ • /32/1... •┬··→• /a3/7... • │ (file_name,file_attr) ←→ hash to index like: · · · │ (1st commit) ┌··········┘├··→• /23/4... • │ $º$ git update-index --add file1.txt (git Plumbing) · · · │ · └···┐ │┌· Finally an snapshot of the index is created like a tree: · · · │ · · │·$º$ git write-tree º .git/index snapshot to tree · · · │ ├─····························┘ ( /af/9???... tree object will be added to Obj.Store) · · · │ · · │ $º$ git cat-file -p ${tree_hash} º · · · │ · · │ 100644 blob b133...... file1.txt ← Pointer+file_name to blob · · · │ · · │ 040000 tree 8972...... subdir... ← Pointer+"dirname" to (sub)tree · · · │ · · │ ... · · · │ · · │ ^^^^^^ ^^^^ ^^^^^^^^^^ ^^^^^^^^^^^^ · · · │ · · │ file type content Name of file · · · │ · · │ permis. address.ID · · · │ · · │ ☞KEY-POINT:☜ · · · │ v v │BºStarting at a tree, we can rebuild everything it points to º · · · │ • /af/9... •┬··→• /12/d... • │BºAll that rest is mapping trees to some "context" in history.º · · · │ └··→• /34/2... • │ · · · │ ^^^^^^^^^^^^ ^^^^^^^^^^^^ ^^^^^^^^^^^^ │ Plumbing "workflow" summary: · · · │ commits Trees Blobs │← Add 1+ Blobs → Update Index → Snapshot to Tree · · · └────────────────────────────────────────────────┘ → create commit pointing to tree of "historical importance" · · └·······················─┬──────────────────────┐ → create friendly refs to commits · · $º$ echo "initial commit" | git commit-tree 321..... º ← create commit pointing to tree of "historical importance" · · af3... BºUse also flag '-p $previous_commit' to BUILD A LINEAR ORDEREDº · · BºHISTORY OF COMMITS !!!º · · · · $º$ git cat-file -p af3.... º -p: show parent of given commit · · tree 3212f923d36175b185cfa9dcc34ea068dc2a363c ← Pointer to tree of interest · · author Alex Chan ... 1520806168 +0000 ← Context with author/commiter/ · · committer Alex Chan ... 1520806168 +0000 creation time/ ... · · ... · · "DEBUGGING" TIP: Use 'git cat-file -p 12d...' for pretty print ('-t' to display/debug object type) · · · └ ~/.git/refs/heads/dev ← ~/.git/HEAD (pointer to active ref) └·· ~/.git/refs/heads/master af3... ← Create friendly "master" alias to a commit like: ^^^^^^ $º$ git update-ref refs/heads/master º ← With plumbing each new commit requires a new Bºrefs in heads/ folder are º $º$ cat .git/refs/heads/master º git update-ref. BºCOMMONLY CALLED BRANCHS º af3... 
(pointer to commit) Now $º$ git cat-file -p master º "==" $º$ git cat-file -p af3...º $º$ git rev-parse master º ← check value of ref af3... $º$ git update-ref refs/heads/dev º ← Create second branch (ref in heads/ folder) $º$ git branch º dev * master ←························· current branch is determined by (contents of) $º$ cat .git/HEAD º ~/.git/HEAD. Using plumbing we can change it like ref: refs/heads/master $ git symbolic-ref HEAD refs/heads/dev
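 • The whole plumbing "workflow" above, condensed into one runnable session (inside a fresh 'git init' repo):
   $ echo "hello" ˃ file1.txt
   $ git hash-object -w file1.txt                              ← store blob in .git/objects
   $ git update-index --add file1.txt                          ← map file name → blob in the index
   $ TREE=$(git write-tree)                                    ← snapshot the index as a tree object
   $ COMMIT=$(echo "initial commit" | git commit-tree $TREE)   ← wrap the tree in a commit (-p parent for later ones)
   $ git update-ref refs/heads/master $COMMIT                  ← make the "master" branch point at it
   $ git log master --oneline                                  ← porcelain now sees the history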
merge/rebase/cherry-pick • REF: @[https://stackoverflow.com/questions/9339429/what-does-cherry-picking-a-commit-with-git-mean] @[https://git-scm.com/docs/git-cherry-pick] ┌ INITIAL STATE ────────────────────────────────────────────────────────── │ • → • → • → • →H1 ← refs/heads/branch01 │ │ │ └─→ •x1→ •x2→ •H2 ← refs/heads/branch02 └──────────────────────────────────────────────────────────────────────── ┌ MERGE @[https://git-scm.com/docs/git-merge]──────────────────────────── │ add changes for other branch as single "M"erge commit │ $º$ git checkout branch01 ⅋⅋ git merge branch02 º │ • → • → • → • → •H1 → •M : M = changes of ( x1+x2+H2 ) │ │ ↑ │ └─→ •x1→ •x2→ •H2 └──────────────────────────────────────────────────────────────────────── ┌ REBASE @[https://git-scm.com/docs/git-rebase]────────────────────────── │ "replay" full list of commits to head of branch │ $º$ git checkout branch01 ⅋⅋ git rebase branch02 º │ • → • → • → • →H1 •x1→ •x2→ •H2 │ │ │ └─→ •x1→ •x2→ •H2 └──────────────────────────────────────────────────────────────────────── ┌ Squash N last commit into single one (rebase interactively) ────────── │ │ • → • → • → • →H1 ← refs/heads/branch01 │ │ │ └─→ • H2' (x1 + x2) │ │ $º$ git rebase --interactive HEAD~2 º │ pick 64d03e last-but-2 commit comment ← Different interesing actions are available │ pick 87718a last-but-1 commit comment Replace "pick" by "s"(quash) to mark commit │ pick 83118f HEAD commit comment to be squashed into single commit. │ · │ s 64d03e last-but-2 commit comment ←·┘ │ s 87718a last-but-1 commit comment (Save and close editor. Git will combine all │ s 83118f HEAD commit comment commits into first in list) │ The editor will "pop up" again asking to enter │ a commit message. └──────────────────────────────────────────────────────────────────────── ┌ CHERRY-PICK @[https://git-scm.com/docs/git-cherry-pick]──────────────── │ "Pick" unique-commits from branch and apply to another branch │ $º$ git checkout branch02 ⅋⅋ git cherry-pick -x branch02 º │ ··· • → • →H1 → ... └┬┘ │ │ • Useful if "source" branch is public, generating │ └─→ • → • →H2 →ºEº standardized commit message allowing co-workers │ to still keep track of commit origin. │ • Notes attached to the commit do NOT follow the │ cherry-pick. Use $º$ git notes copy "from" "to" º └────────────────────────────────────────────────────────────────────────
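 • When a merge/rebase/cherry-pick stops on conflicts, the usual continuation commands are
   ('conflicted_file' is a placeholder):
   $ git status                        ← shows the paths with conflict markers
   $ $EDITOR conflicted_file           ← fix the ˂˂˂˂˂˂˂ / ======= / ˃˃˃˃˃˃˃ blocks
   $ git add conflicted_file
   $ git cherry-pick --continue        ← or: git rebase --continue / git merge --continue
   $ git cherry-pick --abort           ← give up and return to the pre-operation state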
GitOps:
@[https://www.weave.works/blog/gitops-operations-by-pull-request]
• GitOps uses Git (DVCS) as aºsingle source of truthº for declarative infrastructure and
applications. Every developer within a team can issue pull requests against a
Git repository, and when merged, a "diff and sync" tool detects a difference
between the intended and actual state of the system. Tooling can then be
triggered to update and synchronise the infrastructure to the intended state.
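A minimal "diff and sync" loop sketch of that idea (paths, service name and interval are
hypothetical; real GitOps tooling such as Flux or Argo CD implements this reconciliation,
typically for Kubernetes):
  while true; do
    git -C /srv/desired-state pull --ff-only                          # intended state lives in Git
    if ! diff -r /srv/desired-state/conf /etc/myapp ˃/dev/null ; then # drift detected
      rsync -a --delete /srv/desired-state/conf/ /etc/myapp/          # converge actual → intended state
      systemctl reload myapp                                          # hypothetical service
    fi
    sleep 60
  done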
Git Backlog Notes º############################º º# Scaling Git with LFS/VFS #º º############################º • VFS for Git is designed to scale for projects with many files/branches and a long commit history, withºa Git Object data store of up to hundreds of Gigabytesº. @[https://github.com/Microsoft/VFSForGit] @[https://vfsforgit.org/] • Git Large File Storage (LFS) targets the problem of large individual files (1GB video files, ML training data, ...), replacing them with text pointers inside Git while storing the file contents on a remote server. @[https://git-lfs.github.com/]
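 • Minimal LFS usage sketch (the tracked pattern is just an example):
   $ git lfs install                        ← once per machine: sets up the smudge/clean filters
   $ git lfs track "*.mp4"                  ← files matching the pattern become LFS pointers
   $ git add .gitattributes *.mp4
   $ git commit -m "track videos with LFS"  ← pointer files go into Git, contents go to the LFS server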
4 secrets encryption tools
@[https://www.linuxtoday.com/security/4-secrets-management-tools-for-git-encryption-190219145031.html]
Encrypt Git repos
@[https://www.atareao.es/como/cifrado-de-repositorios-git/]
Garbage Collector
- Git occasionally does garbage collection as part of its normal operation,
by invoking git gc --auto. The pre-auto-gc hook is invoked just before the
garbage collection takes place, and can be used to notify you that this is
happening, or to abort the collection if now isn’t a good time.
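- A minimal pre-auto-gc sketch (the battery check via 'on_ac_power' is a common example and the
  command may not exist on every distribution; remember 'chmod +x'):
  $ cat .git/hooks/pre-auto-gc
  #!/bin/sh
  if command -v on_ac_power ˃/dev/null ⅋⅋ ! on_ac_power; then
      echo "auto gc skipped: running on battery"
      exit 1                              # non-zero exit aborts the automatic gc
  fi
  exit 0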
Sparse-Checkout
- sparse-checkout (Git v2.25+) allows checking out just a subset
  of a given monorepo, speeding up commands like git pull and
  git status.
@[https://github.blog/2020-01-17-bring-your-monorepo-down-to-size-with-sparse-checkout/]
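- Minimal usage sketch (directory names are examples; '--filter=blob:none' is optional and
  just avoids downloading blobs up-front):
  $ git clone --filter=blob:none --no-checkout ${GIT_URL} repo01
  $ cd repo01
  $ git sparse-checkout init --cone        ← cone mode: directory-based, faster matching
  $ git sparse-checkout set dir1 dir2/sub  ← only these directories appear in the working tree
  $ git checkout master                    ← (or whatever the default branch is)
  $ git sparse-checkout list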
GPG signed commits
@[https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work]
BºGPG PRESETUPº
See @[General/cryptography_map.html?id=pgp_summary] for a summary on
how to generate and manage pgp keys.
BºGIT PRESETUPº
$ git config --global \
user.signingkey 0A46826A ← STEP 1: Set default key for tags+commits sign.
$ git tagº-s v1.5º-m 'my signed 1.5 tag' ← BºSigning tagsº
└──┬──┘ (follow instructions to sign)
                                           replaces -a/--annotate
$ git show v1.5
tag v1.5
Tagger: ...
Date: ...
my signed 1.5 tag
Oº-----BEGIN PGP SIGNATURE----- º
OºVersion: GnuPG v1 º
Oº º
OºiQEcBAABAgAGBQJTZbQlAAoJEF0+sviABDDrZbQH/09PfE51KPVPlanr6q1v4/Utº
Oº... º
Oº=EFTF º
Oº-----END PGP SIGNATURE----- º
commit ...
$ git tagº-vºv1.4.2.1 ← GºVerify tagº
└┘ Note: signer’s pub.key must be in local keyring
object 883653babd8ee7ea23e6a5c392bb739348b1eb61
type commit
...
Gºgpg: Signature made Wed Sep 13 02:08:25 2006 PDT using DSA key ID º
GºF3119B9A º
Gºgpg: Good signature from "Junio C Hamano ˂junkio@cox.net˃" º
Gºgpg: aka "[jpeg image of size 1513]" º
GºPrimary key fingerprint: 3565 2A26 2040 E066 C9A7 4A7D C0C6 D9A4 º
GºF311 9B9A º
└──────────────────────────┬────────────────────────────────────┘
      Or an error similar to the following will be displayed:
gpg: Can't check signature: public key not found
error: could not verify the tag 'v1.4.2.1'
$ git commit -aº-Sº-m 'Signed commit' ← BºSigning Commits (git 1.7.9+)º
$ git log --show-signature -1 ← GºVerify Signaturesº
commit 5c3386cf54bba0a33a32da706aa52bc0155503c2
Gºgpg: Signature made Wed Jun 4 19:49:17 2014 PDT using RSA key IDº
Gº0A46826A º
Gºgpg: Good signature from "1stName 2ndName (Git signing key) º
Gº˂user01@gmail.com˃" º
Author: ...
...
$º$ git log --pretty="format:%h %G? %aN %s"º
^^^
check and list found signatures
Ex. Output:
5c3386cGºGº1stName 2ndName Signed commit
ca82a6dRºNº1stName 2ndName Change the version number
085bb3bRºNº1stName 2ndName Remove unnecessary test code
a11bef0RºNº1stName 2ndName Initial commit
You can also use the -S option with the git merge command to sign the
resulting merge commit itself. The following example both verifies
that every commit in the branch to be merged is signed and
furthermore signs the resulting merge commit.
$ git merge \ ← Gº# Verify signature at merge timeº
º--verify-signaturesº\
-S \ ← Sign merge itself.
signed-branch-to-merge ← Commit must have been signed.
$ git pull \ ← Gº# Verify signature at pull timeº
--verify-signatures
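$ git config --global commit.gpgsign true  ← Optional: sign every commit by default (no -S needed)
$ git config --global tag.gpgSign true     ← Optional (git 2.23+): sign annotated tags by default
(standard git-config keys, added here as a hedged convenience note)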
Git Hooks
Client Hooks @[https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks] BºClient-Side Hooksº - not copied when you clone a repository - to enforce a policy do on the server side - committing-workflow hooks: -ºpre-commitºhook: - First script to be executed. - used to inspect the snapshot that's about to be committed. - Check you’ve NOT forgotten something - make sure tests run - Exiting non-zero from this hook aborts the commit (can be bypassed with git commit --no-verify flag) -ºprepare-commit-msgºhook: - Params: - commit_message_path (template for final commit message) - type of commit - commit SHA-1 (if this is an amended commit) - run before the commit message editor is fired up but after the default message is created. - It lets you edit the default message before the commit author sees it. - Used for non-normal-commits with auto-generated messages - templated commit messages - merge commits - squashed commits - amended commits -ºcommit-msgºhook: - commit_message_path (written by the developer) -ºpost-commitºhook: - (you can easily get the last commit by running git log -1 HEAD) - Generally, this script is used for notification or something similar. -ºemail-workflowº hooks: - invoked by ºgit amº ^^^^^^ Apply a series of patches from a mailbox prepared by git format-patch -ºapplypatch-msgº: - Params: - temp_file_path containing the proposed commit message. -ºpre-applypatchº: - confusingly, it is run after the patch is applied but before a commit is made. - can be used it to inspect the snapshot before making the commit, run tests, inspect the working tree with this script. -ºpost-applypatchº: - runs after the commit is made. - Useful to notify a group or the author of the patch you pulled in that you’ve done so. - Others: -ºpre-rebaseºhook: - runs before you rebase anything - Can be used to disallow rebasing any commits that have already been pushed. -ºpost-rewriteºhook: - Params: - command_that_triggered_the_rewrite: - It receives a list of rewrites on stdin. - run by commands that replace commits such as 'git commit --amend' and 'git rebase' (though not by git filter-branch). - This hook has many of the same uses as the post-checkout and post-merge hooks. -ºpost-checkoutºhook: - Runs after successful checkout - you can use it to set up your working directory properly for your project environment. This may mean moving in large binary files that you don't want source controlled, auto-generating documentation, or something along those lines. -ºpost-mergeºhook: - runs after a successful merge command. - You can use it to restore data in the working tree that Git can't track, such as permissions data. It can likewise validate the presence of files external to Git control that you may want copied in when the working tree changes. -ºpre-pushºhook: - runs during git push, after the remote refs have been updated but before any objects have been transferred. - It receives the name and location of the remote as parameters, and a list of to-be-updated refs through stdin. - You can use it to validate a set of ref updates before a push occurs (a non-zero exit code will abort the push).
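 • Minimal client-side pre-commit sketch (the grep pattern and 'make -s test' are placeholders
   for whatever checks the project actually runs; remember 'chmod +x'):
   $ cat .git/hooks/pre-commit
   #!/bin/sh
   # abort the commit if debug leftovers were staged
   if git diff --cached | grep -nE '^\+.*(TODO-REMOVE|console\.log)' ; then
       echo "pre-commit: debug leftovers found, aborting (bypass with git commit --no-verify)"
       exit 1
   fi
   exec make -s test          # non-zero exit status aborts the commit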
Server-Side Hooks (system administrator only) - Useful to enforce nearly any kind of policy in repository. - exit non-zero to rollback/reject push and print error message back to the client. ºpre-receive hookº: - first script to run - INPUT: STDIN reference list - Rollback all references on non-zero exit - Ex. - Ensure none of the updated references are non-fast-forwards. - do access control for all the refs and files being modifying by the push. ºupdate hookº: - similar to pre-receive hook.but ºrun once for each branch theº ºpush is trying to updateº (ussually just one branch is updated) - INPUT arguments: - reference name (for branch), - SHA-1 - SHA-1 refname= ARGV[0] ← ref.name for current branch oldrev = ARGV[1] ← º(SHA-1) original (current-in-server) ref. *1 newrev = ARGV[2] ← º(SHA-1) new (intention to) push ref. *1 user = $USER ← "Injected" by git when using ssh. *1: We can run over all commit from $oldrev to $newrev like git rev-list \ ← display a (sha1)commit per line to STDOUT oldrev..$newrev \ ← from $oldrev to $newrev while read SHA_COMMIT ; do git cat-file commit $SHA_COMMIT \ ← *1 | sed '1,/^$/d' ← Delete from line 1 to first match of empty-line (^$). *1 output format is similar to: | tree ... | parent ... | committer ... | | My commit Message | tree ... - user-name (if accesing through ssh) based on ssh public-key. - Exit 0: Update Exit !0: Rollback reference, continue with next one. ºpost-receiveº - can be used to update other services or notify users. - INPUT: STDIN reference list - Useful for: - emailing a list. - trigger CI/CD. - update ticket system (commit messages can be parsed for "open/closed/..." - RºWARNº: can't stop the push process. client will block until completion.
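 • Minimal server-side pre-receive sketch (the protected branch name is just an example):
   $ cat hooks/pre-receive               # in the bare repository on the server; chmod +x
   #!/bin/sh
   # STDIN: one "˂old-sha˃ ˂new-sha˃ ˂refname˃" line per ref being updated
   while read oldrev newrev refname; do
       if [ "$refname" = "refs/heads/release/v1" ]; then
           echo "direct pushes to $refname are not allowed" ˃⅋2
           exit 1                        # non-zero: the WHOLE push is rejected
       fi
   done
   exit 0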
Advanced Git
- revert/rerere:
- Submodules: (see the command sketch after this list)
- Subtrees:
- TODO: how subtrees differ from submodules
- how to use the subtree to create a new project from split content
- Interactive rebase:
- how to use interactive rebase to alter commits in various ways.
- how to squash multiple commits down into one.
- Supporting files:
- Git attributes file and how it can be used to identify binary files,
specify line endings for file types, implement custom filters, and
have Git ignore specific file paths during merging.
- Cregit token level blame:
@[https://www.linux.com/blog/2018/11/cregit-token-level-blame-information-linux-kernel]
cregit: Token-Level Blame Information for the Linux Kernel
Blame tracks lines, not tokens; cregit blames at token level (inside a line)
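 • Hedged command sketch for the Submodules / Subtrees items above (repository URL and paths
   are hypothetical):
   $ git submodule add https://github.com/user01/libfoo.git vendor/libfoo  ← record nested repo + pinned commit
   $ git submodule update --init --recursive        ← after cloning a repo that contains submodules
   $ git subtree add --prefix=vendor/libfoo \       ← subtree alternative: copies the code (and,
         https://github.com/user01/libfoo.git main --squash    optionally squashed, its history) in-tree
   $ git subtree split --prefix=vendor/libfoo -b libfoo-only   ← extract a subdirectory's history
                                                                 into its own branch / new project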
Implementations
Gitea: painless self-hosted Git service (Gogs fork) @[https://gitea.io/] - Fork of Gogs, started because Gogs became unmaintained.
Gerrit (by Google) @[https://www.gerritcodereview.com/index.html] Gerrit is a Git Server that provides: - Code Review: - One dev. writes code, another one is asked to review it. (Goal is cooperation, not fault-finding) @[https://docs.google.com/presentation/d/1C73UgQdzZDw0gzpaEqIC6SPujZJhqamyqO1XOHjH-uk/] - UI for seeing changes. - Voting panel. - Access Control on the Git Repositories. - Extensibility through Java plugins. @[https://www.gerritcodereview.com/plugins.html] Gerrit does NOT provide: - Code Browsing - Code Search - Project Wiki - Issue Tracking - Continuous Build - Code Analyzers - Style Checkers
GIT Commit Standard Emojis
@[https://gist.github.com/parmentf/035de27d6ed1dce0b36a]
ºCommit type Emoji Graphº
Initial commit :tada: 🎉
Version tag :bookmark: 🔖
New feature :sparkles: ✨
Bugfix :bug: 🐛
Metadata :card_index: 📇
Documentation :books: 📚
Documenting src :bulb: 💡
Performance :racehorse: 🐎
Cosmetic :lipstick: 💄
Tests :rotating_light: 🚨
Adding a test :white_check_mark: ✅
Make a test pass :heavy_check_mark: ✔️
General update :zap: ⚡️
Improve format :art: 🎨
/structure
Refactor code :hammer: 🔨
Removing stuff :fire: 🔥
CI :green_heart: 💚
Security :lock: 🔒
Upgrading deps. :arrow_up: ⬆️
Downgrad. deps. :arrow_down: ⬇️
Lint :shirt: 👕
Translation :alien: 👽
Text :pencil: 📝
Critical hotfix :ambulance: 🚑
Deploying stuff :rocket: 🚀
Work in progress :construction: 🚧
Adding CI build system :construction_worker: 👷
Analytics|tracking code :chart_with_upwards_trend: 📈
Removing a dependency :heavy_minus_sign: ➖
Adding a dependency :heavy_plus_sign: ➕
Docker :whale: 🐳
Configuration files :wrench: 🔧
Package.json in JS :package: 📦
Merging branches :twisted_rightwards_arrows: 🔀
Bad code / need improv. :hankey: 💩
Reverting changes :rewind: ⏪
Breaking changes :boom: 💥
Code review changes :ok_hand: 👌
Accessibility :wheelchair: ♿️
Move/rename repository :truck: 🚚
GitHub: Custom Bug/Feature-request templates
RºWARNº: Non standard (Vendor lock-in) Microsoft extension.
º$ cat .github/ISSUE_TEMPLATE/bug_report.mdº
| ---
| name: Bug report
| about: Create a report to help us improve
| title: ''
| labels: ''
| assignees: ''
|
| ---
|
| **Describe the bug**
| A clear and concise description of what the bug is.
|
| **To Reproduce**
| Steps to reproduce the behavior:
| 1. Go to '...'
| 2. Click on '....'
| 3. Scroll down to '....'
| 4. See error
|
| **Expected behavior**
| A clear and concise description of what you expected to happen.
|
| ...
º$ cat .github/ISSUE_TEMPLATE/feature_request.mdº
| ---
| name: Feature request
| about: Suggest an idea for this project
| title: ''
| labels: ''
| assignees: ''
|
| ---
|
| **Is your feature request related to a problem? Please describe.**
| A clear and concise description of what the problem is....
|
| **Describe the solution you'd like**
| A clear and concise description of what you want to happen.
|
| **Describe alternatives you've considered**
| A clear and concise description of any alternative solutions or features you've considered.
|
| **Additional context**
| Add any other context or screenshots about the feature request here.
º$ cat ./.github/pull_request_template.mdº
...
º$ ./.github/workflows/* º
RºWARNº: Non standard (Vendor lock-in) Microsoft extension.
@[https://docs.github.com/en/free-pro-team@latest/actions/learn-github-actions]
Git Secrets
https://github.com/awslabs/git-secrets#synopsis
- Prevents you from committing passwords and other sensitive
information to a git repository.
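 Minimal usage sketch (the custom pattern is just an example):
 $ git secrets --install                   ← installs the pre-commit / commit-msg / prepare-commit-msg hooks in the repo
 $ git secrets --register-aws              ← adds the built-in AWS credential patterns
 $ git secrets --add 'password\s*=\s*.+'   ← adds a custom prohibited pattern
 $ git secrets --scan                      ← scan on demand (use --scan-history for the full history)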
What's new
-º2.28:º
@[https://github.blog/2020-07-27-highlights-from-git-2-28/]
- Git 2.28 takes advantage of 2.27 commit-graph optimizations to
deliver a handful of sizeable performance improvements.
-º2.27:º
- commit-graph file format was extended to store changed-path Bloom
filters. What does all of that mean? In a sense,
this new information helps Git find points in history that touched a
given path much more quickly (for example, git log -- ˂path˃, or git
blame).
-º2.25:º
@[https://www.infoq.com/news/2020/01/git-2-25-sparse-checkout/]
500+ changes since 2.24.
º[performance]º
Sparse checkouts are one of several approaches Git supports to improve        [scalability]
performance when working with big (huge or monolithic) repositories.          [monolithic]
They help keep the working directory clean by specifying which
directories to keep. This is useful, for example, with repositories
containing thousands of directories.
See also: http://schacon.github.io/git/git-read-tree.html#_sparse_checkout
-º2.23:º
https://github.blog/2019-08-16-highlights-from-git-2-23
Forgit: Interactive Fuzzy Finder
@[https://www.linuxuprising.com/2019/11/forgit-interactive-git-commands-with.html]
- It takes advantage of the popular "fzf" fuzzy finder to provide
interactive git commands, with previews.
Isomorphic Git: 100% JS client
@[https://isomorphic-git.org/] !!!
- Features:
- clone repos
- init new repos
- list branches and tags
- list commit history
- checkout branches
- push branches to remotes
- create new commits
- git config
- read+write raw git objects
- PGP signing
- file status
- merge branches
Git Monorepos
(Big) Monorepos in Git:
https://www.infoq.com/presentations/monorepos/
https://www.atlassian.com/git/tutorials/big-repositories
Git: Symbolic Ref best-patterns
@[https://stackoverflow.com/questions/4986000/whats-the-recommended-usage-of-a-git-symbolic-reference]
GitHub: Search by topic
https://help.github.com/en/github/searching-for-information-on-github/searching-topics
Ex:search by topic ex "troubleshooting" and language "java"
https://github.com/topics/troubleshooting?l=java
Gitsec
@[https://github.com/BBVA/gitsec]
gitsec is an automated secret discovery service for git that helps
you detect sensitive data leaks.
gitsec doesn't directly detect sensitive data itself but reuses already
available open source tools for this purpose and provides a
framework to run them as one.
Unbreakable Branches
@[https://github.com/AmadeusITGroup/unbreakable-branches-jenkins]
- plugins for Bitbucket and Jenkins that try to fix the following problem:
Normal Pull Request workflow:
Open pull-request (PR) to merge changes in target-branch
→ (build automatically triggered)
→ build OK
repo.owner merges PR
→ second build triggered on target-branch
→Rºsecond build randomly fails º
Rºleading to broken targeted branchº
└───────────────┬───────────────┘
Reasons include:
- Race condition: Parallel PR was merged in-between
- Environment issue (should never happen)
- lenient dependency declaration got another version
leading to a build break
- If the Jenkins job is eligible for unbreakable build
(by having environment variables such as UB_BRANCH_REF)
at the end of the build a notification to Bitbucket is
sent according to the build status.
(or manually through two verbs: ubValidate|ubFail)
- Differences with the stash-notifier plugin:
  - the stash plugin reports a status on a commit
  - unbreakable build uses a different, dedicated API on Bitbucket.
- On the Bitbucket side:
- GIT HEAD@target-branch moved to top-of-code to be validated in PR
(target-branch can then always have a successful build status).
- Security restrictions added to Bitbucket:
(once you activate the unbreakable build on a branch for your repository)
- merge button replaced by merge-request-button to queue the build.
- The merge will happen automatically at the end of the build if the build succeeds
- direct push on the branch is forbidden
-BºMerge requests on different PRs will process the builds sequentiallyº
- Prerequisites to run the code locally:
- Maven (tested against 3.5)
- Git should be installed
- PRE-SETUP:
- Install UnbreakableBranch plugin at Bitbucket
- the Bitbucket Branch Source Jenkins plugin should be
  patched so that mandatory environment variables are
  injected. RºNote that this plugin hasn't been released yetº
Filter-repo @[https://github.com/newren/git-filter-repo/] - Create new repository from old ones, keeping just the history of a given subset of directories. (Replace: (buggy)filter-branch @[https://git-scm.com/docs/git-filter-branch]) - Python script for rewriting history: - cli for simple use cases. - library for writing complex tools. - Presetup: - git 2.22.0+ (2.24.0+ for some features) - python 3.5+ $ git filter-repo \ --path src/ \ ← commits not touching src/ removed --to-subdirectory-filter my-module \ ← rename src/** → my-module/src/** --tag-rename '':'my-module-' add 'my-module-' prefix to any tags (avoid any conflicts later merging into something else) BºDesign rationale behind filter-repoº: - None existing tools with similr features. - [Starting report] Provide analysis before pruning/renaming. - [Keep vs. remove] Do not just allow to remove selected paths but to keep certain ones. (removing all paths except a subset can be painful. We need to specify all paths that ever existed in any version of the repository) - [Renaming] It should be easy to rename paths: - [More intelligent safety]. - [Auto shrink] Automatically remove old cruft and repack the repository for the user after filtering (unless overridden); - [Clean separation] Avoid confusing users (and prevent accidental re-pushing of old stuff) due to mixing old repo and rewritten repo together. - [Versatility] Provide the user the ability to extend the tool ... rich data structures (vs hashes, dicts, lists, and arrays difficult to manage in shell) ... reasonable string manipulation capabilities - [Old commit references] Provide a way for users to use old commit IDs with the new repository. - [Commit message consistency] Rewrite commit messages pointing to other commits by ID. - [Become-empty pruning] empty commits should be pruned. - [Speed] - Work on filter-repo and predecessor has driven improvements to fast-export|import (and occasionally other commands) in core git, based on things filter-repo needs to do its work: BºManual Summaryº: @[https://htmlpreview.github.io/?https://github.com/newren/git-filter-repo/blob/docs/html/git-filter-repo.html] - Overwrite entire repository history using user-specified filters. (WARN: deletes original history) - Use cases: - stripping large files (or large directories or large extensions) - stripping unwanted files by path (sensitive secrests) [secret] - Keep just an interesting subset of paths, remove anything else. - restructuring file layout. Ex: - move all files subdirectory - making subdirectory as new toplevel. - Merging two directories with independent filenames. - ... - renaming tags - making mailmap rewriting of user names or emails permanent - making grafts or replacement refs permanent - rewriting commit messages
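 Two more filter-repo one-liners worth knowing (both flags are documented in the manual linked above):
 $ git filter-repo --analyze                          ← writes a size/rename report under
                                                        .git/filter-repo/analysis (run this first)
 $ git filter-repo --invert-paths --path secrets.env  ← remove a leaked file from ALL history
                                                        ('secrets.env' is a placeholder path)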
Reference Script TODO:(0) Declaring Variables: We must declare the variable according to its data type and uses. When variables remain undeclared the bash might fail to execute the command related to it. Variables can be declared either globally or locally in the script. # variable declaration readonly PATH_TEST = "./conf.d/test.conf" declare -r -i x=30 # -r : read-only function XXX(){ local -r name = ${HOME} # local: scoped to function!!! } Source: @[https://github.com/earizon/utility_shell_scripts/blob/master/scriptTemplate.sh] #!/bin/bash OUTPUT="$(basename $0).$(whoami).log" # ← $("command") takes command STDOUT as effective value # $(whoami) will avoid collisions among # different users even if writing to the # same directory and serves as audit trail. # This happens frequently in DevOps when # executing in sudo/non-sudo contexts. if [ ! -d LOGS ] ; then mkdir LOGS ; fi # opinionated. Save to LOGS ln -sf ${OUTPUT} link_last_log # opinionated. Improve UX, create link to latest log exec 3˃⅋1 # Copy current STDOUT to ⅋3 exec 4˃⅋2 # Copy current STDERR to ⅋4 echo "Cloning STDOUT/STDERR to ${OUTPUT}" exec ⅋˃ ˃(tee -a "$OUTPUT") # Redirect to STDOUT and file REF: exec 2˃⅋1 echo "message logged to file ⅋ console" GLOBAL_EXIT_STATUS=0 WD=$(pwd) # TIP: write down current work dir are use it # to avoid problems when changing dir ("cd") # randomnly throughout the script execution OºFILE_RESOURCE_01="${WD}/data/temp_data.csv"º QºLOCK="/tmp/$(basename $0).lock"º function funCleanUp() { set +e echo "Cleaning resource and exiting" rm -fOº${FILE_RESOURCE_01}º } ºtrapºfunCleanUp EXIT # ← Clean any resource on exit if [ ! ${STOP_ON_ERR_MSG} ] ; then # default and recomended behaviour: Fail fast # REF: @[https://en.wikipedia.org/wiki/Fail-fast] STOP_ON_ERR_MSG=true ······························┐ fi | ERR_MSG="" | function funThrow { | if [[ $STOP_ON_ERR_MSG != false ]] ; then ←-···{ echo "ERR_MSG DETECTED: Aborting now due to " | echo -e ${ERR_MSG} | if [[ $1 != "" ]]; then | GLOBAL_EXIT_STATUS=$1 ; | elif [[ $GLOBAL_EXIT_STATUS == 0 ]]; then | GLOBAL_EXIT_STATUS=1 ; | fi | exit $GLOBAL_EXIT_STATUS ←·······················┘ else echo "ERR_MSG DETECTED: " echo -e ${ERR_MSG} echo "WARN: CONTINUING WITH ERR_MSGS " GLOBAL_EXIT_STATUS=1 ; fi ERR_MSG="" } Qºexec 100˃${LOCK}º # Simple linux-way to use locks. Qºflock 100º # First script execution will hold the lock if [[ $? != 0 ]] ; then # Next ones will have to wait. Use -w nSecs ERR_MSG="HOME ENV.VAR NOT DEFINED" # to fail after timeout or -n to fail-fast funThrow 10 ; # lock will automatically be liberated on fi # exit. (no need to unlock manually) # REF Bº# SIMPLE WAY TO PARSE ARGUMENTS WITH while-loopº while [ $# -gt 0 ]; do # $# number of arguments case "$1" in -l|--list) echo "list arg" shift 1 # ºconsume arg ← $# = $#-1 ;; -p|--port) export PORT="${2}:" Bºshift 2º #← consume arg+value ← $# = $#-2 ;; -h|--host) export HOST="${2}:" Bºshift 2º #← consume arg+value ← $# = $#-2 ;; *) echo "non-recognised option '$1'" Bºshift 1º #← consume arg ← $# = $#-1 esac done set -e # exit on ERR_MSG function preChecks() { # Check that ENV.VARs and parsed arguments are in place if [[ ! ${HOME} ]] ; then ERR_MSG="HOME ENV.VAR NOT DEFINED" ; funThrow 41 ; fi if [[ ! ${PORT} ]] ; then ERR_MSG="PORT ENV.VAR NOT DEFINED" ; funThrow 42 ; fi if [[ ! ${HOST} ]] ; then ERR_MSG="HOST ENV.VAR NOT DEFINED" ; funThrow 43 ; fi set -u # From here on, ANY UNDEFINED VARIABLE IS CONSIDERED AN ERROR. 
} function funSTEP1 { echo "STEP 1: $HOME, PORT:$PORT, HOST: $HOST" } function funSTEP2 { # throw ERR_MSG ERR_MSG="My favourite ERROR@funSTEP2" funThrow 2 } cd $WD ; preChecks cd $WD ; funSTEP1 cd $WD ; funSTEP2 echo "Exiting with status:$GLOBAL_EXIT_STATUS" exit $GLOBAL_EXIT_STATUS
Init Vars complete Shell parameter expansion list available at: - @[http://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html] var1=$1 # init var $1 with first param var1=$# # init var $1 with number of params var1=$! # init var with PID of last executed command. var1=${parameter:-word} # == $parameter if parameter set or 'word' (expansion) var1=${parameter:=word} # == $parameter if parameter set or 'word' (expansion), then parameter=word var1=${parameter:?word} # == $parameter if parameter set or 'word' (expansion) written to STDERR, then exit. var1=${parameter:+word} # == var1 if parameter set or 'word' (expansion). ${parameter:offset} ${parameter:offset:length} # Substring Expansion. It expands to up to length characters of the value of parameter starting at the character specified by offset. If parameter is '@', an indexed array subscripted by '@' or '*', or an associative array name, the results differ as described below.
Temporary Files TMP_FIL=$(mktemp) TMP_DIR=$(mktemp --directory)
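 A hedged companion sketch: remove the temporary directory even if the script fails.
 TMP_DIR=$(mktemp --directory)
 trap 'rm -rf "${TMP_DIR}"' EXIT   # runs on any exit path (normal end, 'exit', or error with set -e)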
Barrier synchronization  UUID:[9737647d-58dc-4999-8db4-4cd3c2682edd]
  Wait for background jobs to complete example:
  (
    ( sleep 3 ; echo "job 1 ended" ) ⅋
    ( sleep 1 ; echo "job 2 ended" ) ⅋
    ( sleep 1 ; echo "job 3 ended" ) ⅋
    ( sleep 9 ; echo "job 4 ended" ) ⅋
    wait            # alt.1: Wait for ALL background jobs to complete
                    #        ('wait ${!}' would only wait for the last one started)
    # wait %1 %2 %3 # alt.2: Wait for jobs 1,2,3. Do not wait for job 4
    echo "All subjobs ended"
  ) ⅋
bash REPL loop REPL stands for Read-eval-print loop: More info at: @[https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop] # Define the list of a menu item ºselectºOºlanguageººinºC# Java PHP Python Bash Exit ºdoº #Print the selected value if [[ Oº$languageº == "Exit" ]] ; then exit 0 fi echo "Selected language is $language" ºdoneº
trap: Exit script cleanly @[https://www.putorius.net/using-trap-to-exit-bash-scripts-cleanly.html]
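 Minimal sketch (lock-file path and messages are placeholders):
 cleanup() { echo "releasing resources"; rm -f /tmp/myapp.lock; }
 trap cleanup EXIT                              # runs on any exit, normal or error
 trap 'echo "interrupted"; exit 130' INT TERM   # Ctrl-C / kill: report, then exit (EXIT trap still runs)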
test (shell conditionals) (man test summary from GNU coreutils) test EXPRESSION # ← EXPRESSION true/false sets the exit status. [ EXPRESSION ] -n STRING # STRING length ˃0 # (or just STRING) -z STRING # STRING length == 0 STRING1 = STRING2 # String equality STRING1 != STRING2 # String in-equality INTEGER1 -eq INTEGER2 # == INTEGER1 -ge INTEGER2 # ˂= INTEGER1 -gt INTEGER2 INTEGER1 -le INTEGER2 INTEGER1 -lt INTEGER2 INTEGER1 -ne INTEGER2 ^^^^^^^^ BºNOTE:º INTEGER can be -l STRING (length of STRING) ºFILE TEST/COMPARISIONº RºWARN:º Except -h/-L, all FILE-related tests dereference symbolic links. -e FILE #ºFILE existsº -f FILE # FILE exists and is a1regular fileº -h FILE # FILE exists and is aºsymbolic linkº (same as -L) -L FILE # (same as -h) -S FILE # FILE exists and is aºsocketº -p FILE #ºFILE exists and is a named pipeº -s FILE # FILE exists and has aºsize greater than zeroº -r FILE # FILE exists andºread permissionºis granted -w FILE # FILE exists andºwrite permissionºis granted -x FILE # FILE exists andºexec permissionºis granted FILE1 -ef FILE2 # ← same device and inode numbers FILE1 -nt FILE2 # FILE1 is newer (modification date) than FILE2 FILE1 -ot FILE2 # FILE1 is older (modification date) than FILE2 -b FILE # FILE exists and is block special -c FILE # FILE exists and is character special -d FILE #ºFILE exists and is a directoryº -k FILE # FILE exists and has its sticky bit set -g FILE # FILE exists and is set-group-ID -G FILE # FILE exists and is owned by the effective group ID -O FILE # FILE exists and is owned by the effective user ID -t FD file descriptor FD is opened on a terminal -u FILE FILE exists and its set-user-ID bit is set BOOLEAN ADITION RºWARNº: inherently ambiguous. Use EXPRESSION1 -a EXPRESSION2 # AND # 'test EXPR1 ⅋⅋ test EXPR2' is prefered EXPRESSION1 -o EXPRESSION2 # OR # 'test EXPR1 || test EXPR2' is prefered RºWARN,WARN,WARNº: your shell may have its own version of test and/or '[', which usually supersedes the version described here. Use /usr/bin/test to force non-shell ussage. Full documentation at: @[https://www.gnu.org/software/coreutils/]
Bash 4+ Maps (also known as associative array or hashtable) Bash Maps can be used as "low code" key-value databases. Very useful for daily config/devops/testing task. Ex: #!/bin/bash # ← /bin/sh will fail. Bash 4+ specific Bºdeclare -A map01º # ←ºSTEP 1)ºdeclare Map map01["key1"]="value1" # ←ºSTEP 2)ºInit with some elements. map01["key2"]="value2" # Visually map01 will be a table similar to: map01["key3"]="value3" # key │ value # ─────┼─────── # key1 │ value1 ← key?, value? can be any string # key2 │ value2 # key3 │ value3 keyN="key2" # ←ºSTEP 3)ºExample Ussage ${map01[${key_var}]} # ← fetch value for key "key2" ${!map01[@]} # ← fetch keys . key2 key3 key1 ${map01[@]} # ← fetch values. (value2 value3 value1) for keyN in "${!map01[@]}"; # ← walk over keys: do # (output) echo "$keyN : ${map01[$keyN]}" # key1 : value1 done # key2 : value2 # key3 : value3
Bash-it @[https://www.tecmint.com/bash-it-control-shell-scripts-aliases-in-linux/] - bundle of community Bash commands and scripts for Bash 3.2+, which comes with autocompletion, aliases, custom functions, .... - It offers a useful framework for developing, maintaining and using shell scripts and custom commands for your daily work.
Curl (network client Swiss Army knife) Summary
-
- Support for DICT, FILE, FTP, FTPS, GOPHER, HTTP GET/POST, HTTPS, HTTP2, IMAP,
IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS,
SMTP, SMTPS, TELNET, TFTP, unix socket protocols.
- proxy support.
- kerberos support.
- HTTP cookies, etags
- file transfer resume.
- Metalink
- SMTP / IMAP Multi-part
- HAProxy PROXY protocol
- ...
BºHTTP Exampleº
$ curl http://site.{one,two,three}.com \
--silent \ ← Disable progress meter
--anyauth \ ← make curl figure out auth. method
(--basic, --digest, --ntlm, and --negotiate)
not recommended if uploading from stdin since
data can be sent 2+ times
- Used together with -u, --user.
--cacert file_used_to_verify_peer \ ← Alt: Use CURL_CA_BUNDLE
- See also --capath dir, --cert-status, --cert-type PEM|DER|...
--cert certificate[:password] \ ← Use cert to indentify curl client
--ciphers list of TLS_ciphers \
--compressed \ ← (HTTP) Request compressed response. Save uncompressed response.
--config text_file_with_curl_args \
--connect-timeout sec_number \
--create-dirs \ ← When using --output
--data-binary data \ ← HTTP POST alt 1: posts data with no extra processing whatsoever.
Or @data_file
--data-urlencode data \ ← HTTP POST alt 2
--data data \ ← HTTP POST alt 3: post data in the same way that a browser posts a filled-in form
(content-type application/x-www-form-urlencoded)
--header ...
--limit-rate speed
--location \ ← follow redirects
--include \ ← Include the HTTP response headers in the output.
See also -v, --verbose.
--oauth2-bearer ... \ ← (IMAP POP3 SMTP)
--fail-early \ ← Fail as soon as possible
--continue-at - \ ← Continue a partial download
--output out_file \ ← Write output to file (Defaults to stdout)
curl --list-only https://..../dir1/ ← List contents of remote dir
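 BºJSON POST Exampleº (endpoint and payload are hypothetical):
 $ curl https://api.example.com/items \
 --silent --show-error \                       ← quiet progress meter but keep real errors
 --header 'Content-Type: application/json' \
 --data '{"name":"test"}' \                    ← request body ('--data' implies a POST)
 --output response.json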
Kapow!: Shell Script to HTTP API @[https://github.com/BBVA/kapow] by BBVA-Labs Security team members. " If you can script it, you can HTTP it !!!!" Ex: Initial Script: $ cat /var/log/apache2/access.log | grep 'File does not exist' To expose it as HTTP: $ cat search-apache-errors #!/usr/bin/env sh kapow route add /apache-errors - ˂-'EOF' cat /var/log/apache2/access.log | grep 'File does not exist' | kapow set /response/body EOF Run HTTP Service like: $ kapow server search-apache-errors ← Client can access it like curl http://apache-host:8080/apache-errors [Fri Feb 01 ...] [core:info] File does not exist: ../favicon.ico ... We can share information without having to grant SSH access to anybody. BºRecipe: Run script as a given user:º # Note that `kapow` must be available under $PATH relative to /some/path kapow route add /chrooted\ -e 'sudo --preserve-env=KAPOW_HANDLER_ID,KAPOW_DATA_URL \ chroot --userspec=sandbox /some/path /bin/sh -c' \ -c 'ls / | kapow set /response/body'
WebHook (TODO) @[https://github.com/adnanh/webhook] - lightweight incoming webhook server to run shell commands You can also pass data from the HTTP request (such as headers, payload or query variables) to your commands. webhook also allows you to specify rules which have to be satisfied in order for the hook to be triggered. - For example, if you're using Github or Bitbucket, you can use webhook to set up a hook that runs a redeploy script for your project on your staging server, whenever you push changes to the master branch of your project. - Guides featuring webhook: - Webhook and JIRA by @perfecto25 [jira] - Trigger Ansible AWX job runs on SCM (e.g. git) commit by @jpmens [ansible] - Deploy using GitHub webhooks by @awea [git][github] - Setting up Automatic Deployment and Builds Using Webhooks by Will Browning - Auto deploy your Node.js app on push to GitHub in 3 simple steps by [git][github] Karolis Rusenas - Automate Static Site Deployments with Salt, Git, and Webhooks by [git] Linode - Using Prometheus to Automatically Scale WebLogic Clusters on [prometheus][k8s][weblogic] Kubernetes by Marina Kogan - Github Pages and Jekyll - A New Platform for LACNIC Labs by Carlos Martínez Cagnazzo - How to Deploy React Apps Using Webhooks and Integrating Slack on [slack] Ubuntu by Arslan Ud Din Shafiq - Private webhooks by Thomas - Adventures in webhooks by Drake - GitHub pro tips by Spencer Lyon [github] - XiaoMi Vacuum + Amazon Button = Dash Cleaning by c0mmensal - Set up Automated Deployments From Github With Webhook by Maxim Orlov VIDEO: Gitlab CI/CD configuration using Docker and adnanh/webhook to deploy on VPS - Tutorial #1 by Yes! Let's Learn Software
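- Minimal setup sketch (file layout and script path are hypothetical; check the project README
  for the exact hooks.json schema):
  $ cat hooks.json
  [
    {
      "id": "redeploy-webhook",
      "execute-command": "/var/scripts/redeploy.sh",
      "command-working-directory": "/var/webhook"
    }
  ]
  $ webhook -hooks hooks.json -verbose   ← serves http://yourserver:9000/hooks/redeploy-webhook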
Jenkins Summary/101
Bº##################º Bº# External Links #º Bº##################º @[https://jenkins.io/doc/] @[https://jenkins.io/user-handbook.pdf] @[https://jenkins.io/doc/book/] @[https://github.com/sahilsk/awesome-jenkins] - @[https://jenkins.io/doc/book/using/using-credentials/] - @[https://jenkins.io/doc/book/pipeline/running-pipelines] - @[https://jenkins.io/doc/book/pipeline/multibranch] Branches and Pull Requests - @[https://jenkins.io/doc/book/pipeline/docker] - @[https://jenkins.io/doc/book/pipeline/shared-libraries] Extending with Shared Libraries - @[https://jenkins.io/doc/book/pipeline/development] Pipeline Development Tools - @[https://jenkins.io/doc/book/pipeline/syntax] ºPipeline Syntaxº - @[https://jenkins.io/doc/book/pipeline/pipeline-best-practices]ºPipeline Best Practicesº - @[https://jenkins.io/doc/book/pipeline/scaling-pipeline] - @[https://jenkins.io/doc/book/blueocean] - @[https://jenkins.io/doc/book/blueocean/getting-started] - @[https://jenkins.io/doc/book/blueocean/creating-pipelines] - @[https://jenkins.io/doc/book/blueocean/dashboard] - @[https://jenkins.io/doc/book/blueocean/activity] - @[https://jenkins.io/doc/book/blueocean/pipeline-run-details] - @[https://jenkins.io/doc/book/blueocean/pipeline-editor] - @[https://jenkins.io/doc/book/managing] ºManaging Jenkinsº - @[https://jenkins.io/doc/book/managing/system-configuration] - @[https://jenkins.io/doc/book/managing/security] - @[https://jenkins.io/doc/book/managing/tools] - @[https://jenkins.io/doc/book/managing/plugins] - @[https://jenkins.io/doc/book/managing/cli] - @[https://jenkins.io/doc/book/managing/script-console] - @[https://jenkins.io/doc/book/managing/nodes] - @[https://jenkins.io/doc/book/managing/script-approval] - @[https://jenkins.io/doc/book/managing/users] - @[https://jenkins.io/doc/book/system-administration] ºSystem Administrationº - @[https://jenkins.io/doc/book/system-administration/backing-up] Backing-up/Restoring Jenkins - @[https://jenkins.io/doc/book/system-administration/monitoring] Monitoring Jenkins - @[https://jenkins.io/doc/book/system-administration/security] Securing Jenkins - @[https://jenkins.io/doc/book/system-administration/with-chef] - @[https://jenkins.io/doc/book/system-administration/with-puppet] Bº##############################º •Bº# Pipeline injected ENV.VARS #º Bº##############################º • ${BASE_JENKINS_URL}/pipeline-syntax/globals#env ← FULL LIST OF ENV.VARs: • $env.BUILD_ID : $env.BUILD_NUMBER : $env.BUILD_TAG : String of jenkins-${JOB_NAME}-${BUILD_NUMBER} ^^^^^^^^^^^^ Useful to subclassify resource/jar/etc output artifacts $env.BUILD_URL : URL where results of this build can be found (Ex.: http://buildserver/jenkins/job/MyJobName/17/) $env.EXECUTOR_NUMBER: Unique number ID for current executor in same machine $env.JAVA_HOME : JAVA_HOME configured for a given job $env.JENKINS_URL : $env.JOB_NAME : Name of the project of this build $env.NODE_NAME : 'master', 'slave01',... $env.WORKSPACE : absolute path for workspace
Jenkinsfile REF: @[https://jenkins.io/doc/book/pipeline/jenkinsfile/] @[https://jenkins.io/doc/pipeline/steps/] • ┌ PIPELINE EXECUTION SUMMARY ──────────────────────┐ │ INPUT PROCESSING OUTPUT │ │ =========== =========== =======================│ │ Jenkinsfile → Jenkins → ✓ archived built artif.│ │ ✓ test results │ │ ✓ full console output │ │ ✓ Pipeline status │ │ (unstable,success │ │ failure,changed) │ └──────────────────────────────────────────────────┘ • ┌ DECLARATIVE SYNTAX EXAMPLE ──────┐ │ pipeline { │ │ environment { │ │ T1 = 'development' ← Env.var with global visibility │ CC = """${sh( ← Env.var set from shell STDOUT. │ ºreturnStdout:ºtrue, │ trailing whitespace appended. │ script: 'echo "clang"' │ (Use .trim() to fix) │ )}""" │ │ AWS_SECRET_ACCESS_KEY = │ │ ºcredentialsº('aws-acc-key')← Secret protected by Jenkins [security.secret_management] │ } │ *1 │ │ │ parameters { ← parameters allowed to be modified ºat runtimeº │ string(name: 'Greeting', ← Referenced as ${params.Greeting} │ defaultValue: 'Hello',│ │ description: 'Hi!') │ │ } │ │ │ │ agent any ← allocate anºEXECUTOR AND WORKSPACEº, it ensures │ │ that SRC. REPO. is IMPORTED TO WORKSPACE in next stages │ │ │ stages { ← ORDERED list of stages in Pipeline │ │ │ stage('clone') { │ │ checkout Gºscmº ← checkout code from scm ("git clone ...") *2 │ checkout poll: false, │Gºscmº: SPECIAL VAR. TELLING JENKINS TO │ scm: [ │ USE SAME REPO./REVISION USED TO │ $class: 'GitSCM', │ CHECKOUT (CLONE) JENKINSFILE │ branches: [[name: 'dev']], │ doGenerateSubmoduleConfigurations: false, │ extensions: [], │ │ submoduleCfg: [], │ │ userRemoteConfigs: [ │ │ [url: 'https://github.com/user01/project01.git', │ credentialsId: 'UserGit01'] │ ] │ │ ] │ │ } │ │ │ │ stage('Build') { ← (trans|com)pile/package/... using maven/npm/... plugins │ │ │ environment { ← ENV.VAR with local stage visibility │ msg1 = "Building..." │ (also visible to shell scripts) │ EXIT = """ ← Initialized to returned status code shell exec. │ ${sh( │ """ .... """ to embed multi-stage script. │ ºreturnStatus:ºtrue, │ │ script: 'exit 1' │ │ )}""" │ │ } │ │ │ │ steps { ← ORDERED list of steps in stage. │ echo "º${msg1}º:..." ← shell-like interpolation for double-coutes │ sh 'printenv' ← msg1 and EXIT available here │ sshagent ( │ │ crendentials: ['keys'] ←····┬─ ssh with help of agent │ ) │ │ (ssh-agent plugin │ { │ │ needed) │ sh 'ssh user@remoteIP' ←····┘ │ } │ │ } │ NOTE: │ } │ ┌─ GROOVY SYNTAX SUGAR ──────┐ │ │ │ sh([script: 'echo hello']) ← standard funct. call │ stage('Test') { │ │ sh script: 'echo hello' ← shortcut │ steps { ... } │ │ sh 'echo hello' ← ultra-sortcut (Single │ } │ └────────────────────────────┘ param fun.only │ │ │ stage('Deploy') { │ │ when { ← Conditional steps execution *1 For "complex" secrets use SNIPPET GENERATORS: │ expression { │ ┌──────────────┬───────────────────────────────────┐ │ currentBuild.result == │SUCCESS' │ GENERATOR │ PARAMS │ │ } │ │──────────────┼───────────────────────────────────┤ │ } │ │ SSH Priv.Key │ • Key File Variable │ │ steps { sh '...' } │ │ │ • Passphrase Variable │ │ } │ │ │ • Username Variable │ │ } /* stages end */ │ │──────────────┼───────────────────────────────────┤ │ │ Bº===============º │ Credentials │ • SSH priv/pub keys stored │ │ post { ← BºHANDLING ERRORSº │ │ in Jenkins. 
│ │ always { │ Bº===============º │──────────────┼───────────────────────────────────┤ │ junit '**/target/*.xml'│ │ (PKCS#12) │ • Keystore Variable │ │ } │ │ Certificate │ Jenkins temporary assigns it to │ │ ºfailureº{ │ │ │ secure location of Cert's KeyS. │ │ mail to:team@bla.com, ← Bºe-MAIL NOTIFICATIONSº │ │ • Password Variable (Opt) │ │ subject: '...' │ [notifications.jenkins] │ │ • Alias Variable (Opt) │ │ } │ │ │ • Credentials: Cert.credentials │ │ unstable { ... } ← PIPELINE RESULT │ │ stored in Jenkins. Its value is │ │ success { ... } │ STATUS CLASSIFICATION │ │ the credential ID, which Jenkins│ │ failure { ... } │ │ │ writes out to generated snippet │ │ changed { ... } │ │──────────────┼───────────────────────────────────┤ │ } │ │ Docker Client│ • Handle Docker Host Cert.Auth. │ │ } │ │ Certificate │ │ └──────────────────────────────────┘ └──────────────┴───────────────────────────────────┘ ┌ *2 GIT REPO. CLONING ("checkouts" in Jenkin) SUMMARY ────────────────┐ │ REF: @[https://jenkins.io/doc/pipeline/steps/workflow-scm-step/] │ │ checkout([ │ │ $class : 'GitSCM', │ │ poll : false, │ ┌ *1 CloneOption Class ────────────────────────────────────────────┐ │ branches : [[name: commit]], │ │· shallow (boolean) : do NOT download history (Save time/disk) │ │ extensions: [ │ │· noTags (boolean) : do NOT download tags (Save time/disk) │ │ [$class: 'RelativeTargetDirectory', relativeTargetDir: reponame],│ │ (use only what specified in refspec) │ │ [$class: 'CloneOption', reference: "/var/cache/${reponame}"] ← │· depth (int) : Set shallow clone depth (Save time/disk) │ │ ], │ │· reference(String) : local folder with existing repository │ │ submoduleCfg: [], │ │ used by Git during clone operations. │ │ userRemoteConfigs: [ │ │· timeout (int) : timeout for clone/fetch ops. │ │ [credentialsId: 'jenkins-git-credentials', url: repo_url] │ │· honorRefspec(bool): init.clone using refspec (Save time/disk)│ │ ], │ └──────────────────────────────────────────────────────────────────┘ │ doGenerateSubmoduleConfigurations: false, │ │ ]) │ └──────────────────────────────────────────────────────────────────────┘ • ┌─ MULTIAGENT PIPELINES COMMON BUILD ──┐ Useful for multi─target builds/tests/... │ pipeline { │ reusing common builds in different agents. │ ºagent noneº │ │ stages { │ │ stage('clone') { ... } │ │ stage('Build') { │ │ ºagent anyº │ │ steps { │ │ ... │ │ Oºstashº([ │ │ includes: '**/target/*.jar',│ │ name:º'app'º ]) ←··┐ │ } │ · │ } │ · │ stage('Linux') { │ · copy named─stash FROM JENKINS MASTER TO CURRENT WORKSP. │ ºagent { label 'linux' }º │ · NOTE: Oºstashº = something put away for future use │ steps { │ · (In practice: Named cache of generated artifacts │ Oºunstashºº'app'º ←··• during same pipeline for reuse in further steps) │ sh '...' │ · Removed at pipeline termination. │ } │ · │ post { ... } │ · │ } │ · │ stage('Test on Windows') { │ · │ ºagent { label 'windows' }º │ · │ steps { │ · │ unstashº'app'º ←··┘ │ bat '...' │ │ } │ │ post { ... } │ │ } │ │ } │ │ } │ └──────────────────────────────────────┘ • ┌─ PARALLEL EXECUTION ───────────┐ │ stage('Test') { │ │ ºparallelº ←·• Execute linux⅋windows nodes │ ºlinux:º{ │ · in parallel. │ node('linux') { ←─┤ │ try { │ · │ unstash 'app' │ · │ sh 'make check' │ · │ } │ · │ finally { │ · │ junit '**/target/*.xml'│ · │ } │ · │ } │ · │ }, │ · │ ºwindows:º{ │ · │ node('windows') { ... } ←·┘ │ } │ │ } │ └────────────────────────────────┘
Admin 101 Bº######################º • Bº# EXPORT/IMPORT jobs #º [security.backup] Bº######################º REF: @[https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI] └ * ALT 1: Using jenkins-cli.jar * ******************************** • PRE-REQUISITES) · jenkins-cli.jar version must match Server version · jnlp ports need to be open • Script: JENKINS_CLI="java -jar ${JENKINS_HOME}/war/WEB-INF/jenkins-cli.jar -s ${SERVER_URL}" ${JENKINS_CLI} get-job job01 ˃ job01.xml ← job01.xml can be "gitted",... ${JENKINS_CLI} create-job job01 ˂ job01.xml RºWARN:º There are issues with bare naked ampersands in the XML such as when you have ⅋ in Groovy code. (REF: @[https://stackoverflow.com/questions/8424228/export-import-jobs-in-jenkins]) └ * ALT 2: USING CURL * ********************* SERVER_URL = "http://..." ← Without Authentication SERVER_URL = "http://${USER}:${API_TOKEN}@..." ← With Authentication $ curl -s ${SERVER_URL}/job/ºJOBNAME/config.xmlº˃ job01.xml ← EXPORT $ curl -X POST ${SERVER_URL}/createItem?name=JOBNAME' \ ← IMPORT --header "Content-Type: application/xml" -d job01.xml └ * ALT 3: Filesystem (backup) * ****************************** $ tar cjf _var_lib_jenkins_jobs.tar.bz2 /var/lib/jenkins/jobs Bº######################º • Bº# Dockerized Jenkins #º [low_code] Bº######################º $ docker run --rm -u root -p 8080:8080 \ -v jenkins-data:/var/jenkins_home \ ← if 'jenkins-data' Docker volumen \ doesn't exists it will be created \ -v /var/run/docker.sock:/var/run/docker.sock \ ← Jenkins need control of Docker to \ launch new Docker instances during \ the build process -v "$HOME":/home \ --name jenkins01 \ ← Allows to "enter" docker with: jenkinsci/blueocean $ docker exec -it jenkins01 bash
End-to-End Multibranch Pl.
@[https://jenkins.io/doc/tutorials/build-a-multibranch-pipeline-project/]
PREREQUISITES
-ºGitº
- Docker
┌──────────────┬────────────┬────────────┐
│ INPUT → JENKINS → OUTPUT │
│ ARTIFACTS → → ARTIFACTS │
├──────────────┼────────────┼────────────┤
│ Node.js │ build→test │ development│
│ React app │ │ production │
│ npm │ │ │
└──────────────┴────────────┴────────────┘
STEP 1) Setup local git repository
- clone:
$ git clone https://github.com/?????/building-a-multibranch-pipeline-project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Forked from
@[https://github.com/jenkins-docs/building...]
  - Create dev/pro branches:
$ git branch development
$ git branch production
STEP 2) Add 'Jenkinsfile' stub (agent, stages sections) to repo
(initially in master branch)
STEP 3) Create new Pipeline in Jenkins Blue Ocean Interface
browse to "http://localhost:8080/"
→ click "Create a new Pipeline"
     → Choose "Git" in "Where do you store your code?"
→ Repository URL: "/home/.../building-a-multibranch-pipeline-project"
→ Save
Blue Ocean will detect the presence of the "Jenkinsfile" stub
  in each branch and will run each Pipeline against its respective branch.
STEP 4) Start adding functionality to the Jenkinsfile pipeline
(commit to git once edited)
pipeline {
       environment {
         docker_caching = '$HOME/.m2:/root/.m2'         ← cache to speed up builds
         docker_ports   = '-p 3000:3000 -p 5000:5000'   ← dev/pro ports the app listens on
         CI             = 'true'                        ← make test runners run non-interactively
       }
agent {
docker {
image 'node:6-alpine' ← Good Enough to build simple
Node.js+React apps
             args '-p 3000:3000 -p 5000:5000' ← dev/pro ports where the app will
                                                listen for requests. Used during
                                                functional testing
}
}
stages {
stage('Build') {
steps {
sh 'npm install' ← 1st real build command
}
}
stage('Test') {
steps {
sh './jenkins/scripts/test.sh'
}
}
}
}
STEP 5) Click "run" icon of the master branch of your Pipeline project,
and check the result.
STEP 6) Add "deliver" and "deploy" stages to the Jenkinsfile Pipeline
(and commit changes)
ºJenkins will selectively execute based on the branch that Jenkins is building fromº
+ stage('Deliver for development') {
+ ºwhen {º
+ º branch 'development'º
+ º}º
+ steps {
+ sh './jenkins/scripts/deliver-for-development.sh'
+ input message: 'Finished using the web site? (Click "Proceed" to continue)'
+ sh './jenkins/scripts/kill.sh'
+ }
+ }
+ stage('Deploy for production') {
+ ºwhen {º
+ º branch 'production'º
+ º}º
+ steps {
+ sh './jenkins/scripts/deploy-for-production.sh'
+ input message: 'Finished using the web site? (Click "Proceed" to continue)'
+ sh './jenkins/scripts/kill.sh'
+ }
+ }
Ex Pipeline script
@[https://jenkins.io/doc/pipeline/steps/pipeline-build-step/]
build job: 'Pipeline01FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline02FromJenkinsfileAtGit', propagate: true, wait: false
build job: 'Pipeline03FromJenkinsfileAtGit', propagate: true, wait: false
              propagate: true  → the result of this step is that of the downstream build
                                 (success, unstable, failure, not built, or aborted).
              propagate: false → the step succeeds even if the downstream build failed;
                                 inspect the 'result' property of the returned object instead.
              wait: false      → do not wait for the downstream build to finish
                                 (its result cannot be checked from this step).
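  • Ex: (sketch) capturing the downstream result explicitly instead of propagating it:
    | script {
    |   def r = build job: 'Pipeline01FromJenkinsfileAtGit',
    |                 propagate: false,     // do not fail this build automatically
    |                 wait: true            // wait, so that r.result is meaningful
    |   echo "downstream finished with: ${r.result}"
    |   if (r.result == 'FAILURE') { currentBuild.result = 'UNSTABLE' }
    | }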
Serverless Pipeline
• Jenkinsfile-runner: Execute a Jenkinsfile pipeline without running a Jenkins server.
@[https://jenkins.io/blog/2019/02/28/serverless-jenkins/]
@[https://github.com/jenkinsci/jenkinsfile-runner]
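  • Ex: (sketch) run a local Jenkinsfile with the official Docker image
    (image name and mount path may differ between versions):
    $ docker run --rm \
        -v $(pwd)/Jenkinsfile:/workspace/Jenkinsfile \
        jenkins/jenkinsfile-runner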
Jenkins: TODO.Backlog
• AWS EC2 plugin: launch Amazon EC2 Spot Instances as worker nodes
automatically scaling capacity with load demand.
@[https://wiki.jenkins.io/display/JENKINS/Amazon+EC2+Fleet+Plugin]
• "Organization Folders" allows Jenkins to monitor entire
GitHub Organization or BitbucketTeam/Project and automatically create new
Multibranch Pipelines for multi-branch repos by pulling Jenkinsfile's.
• Serverless Jenkins:
@[https://medium.com/@jdrawlings/serverless-jenkins-with-jenkins-x-9134cbfe6870]
• Jenkins + Zuul integration: @[https://zuul-ci.org/]
  • Zuul is a Python daemon that listens to the Gerrit stream-events feed and
    triggers job functions registered by Jenkins through the Jenkins Gearman
    plugin. Triggers are defined in YAML and hosted in the git repo
    integration/config.git as /zuul/layout.yaml.
@[https://www.mediawiki.org/wiki/Continuous_integration/Zuul]
· IBM OpenStack Engineer Urges Augmenting Jenkins with Zuul for Hyperscale Projects
@[https://thenewstack.io/ibm-openstack-engineer-urges-cncf-consider-augmenting-jenkins-zuul/]
• Jenkins X: https://jenkins-x.io/es/
Accelerate Continuous Delivery on Kubernetes
· https://github.com/kurron/jx3-k3s-vault
Jenkins X 3.x GitOps repository using k3s to create a kubernetes
cluster, github for the git and container registry and external vault
· Rather than having to have deep knowledge of the internals of
Jenkins X Pipeline, Jenkins X will default awesome pipelines for your
projects that implements fully CI and CD."
· Environment Promotion via GitOps.
· Preview Environments.
· Feedback on Issues and Pull Requests.
Customize History Saving Policy
@[https://stackoverflow.com/questions/60391327/is-it-possible-in-jenkins-to-keep-just-first-and-last-failures-in-a-row-of-con]
  Use Case: We are only interested in keeping builds where the execution result
  changes from "success" to "failure" (or back). That is, if we have a history like:
t1 t2 t3 t4 t5 t6 t7 t8 t9 t10 t11 t12 t13 t14 t15
-----------------------------------------------------------
OK, OK, OK, OK, KO, KO, KO, KO, OK, OK, OK, OK, KO, KO, OK
^ ^ ^ ^ ^
status status status status status
change change change change change
We want to keep history just for:
t1 t5 t9 t13 t15
-----------------------------------------------------------
OK, KO, OK, KO, OK
  To implement this history-saving policy, a Groovy post-build step is needed:
Ex: discard all successful builds of a job except for the last 3 ones
(since typically, you're more interested in the failed runs)
def allSuccessfulBuilds = manager.build.project.getBuilds().findAll {
it.result?.isBetterOrEqualTo( hudson.model.Result.SUCCESS )
}
allSuccessfulBuilds.drop(3).each {
it.delete()
}
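  A sketch of the "keep only status-change builds" policy from the use case above
  (Groovy Postbuild step; the newest and oldest builds are always kept):
    def builds = manager.build.project.getBuilds()          // newest first
    def keep = [] as Set
    builds.eachWithIndex { b, i ->
      def older = (i + 1 < builds.size()) ? builds[i + 1] : null
      if (i == 0 || older == null || b.result != older.result) {
        keep << b.number                                    // transition point → keep
      }
    }
    builds.findAll { !(it.number in keep) }.each { it.delete() }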
Jenkins TODO
• build-status conditional Post-build actions:
@[https://stackoverflow.com/questions/45456564/jenkins-declarative-pipeline-conditional-post-action]
• Clone directory (vs full repo) [performance]
@[https://softwaretestingboard.com/q2a/1791/how-clone-checkout-specific-directory-command-line-jenkins]
Ex:
$ git checkout branch_or_version -- path/file
$ git checkout HEAD -- main.c ← checkout main.c from HEAD
$ git checkout e5224c883a...c9 /path/to/directory ← Checkout folder from commit
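  • Ex: (sketch) sparse checkout of a single directory (needs git ≥ 2.25 and
    partial-clone support on the server; repo URL, dir and branch are illustrative):
    $ git clone --no-checkout --filter=blob:none https://github.com/user01/project01.git
    $ cd project01
    $ git sparse-checkout set path/to/directory
    $ git checkout master                  ← only path/to/directory is materialized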
• Jenkins: managing large git repos [performance]
@[https://jenkins.io/files/2016/jenkins-world/large-git-repos.pdf]
CircleCI Ex
REF:
@[https://github.com/interledger4j/ilpv4-connector/blob/master/.circleci/config.yml]
cat .circleci/config.yml
# Java Maven CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-java/ for more details
#
version: 2
jobs:
# This job builds the entire project and runs all unit tests (specifically the persistence tests) against H2 by
# setting the `spring.datasource.url` value. All Integration Tests are skipped.
build:
working_directory: ~/repo
docker:
# Primary container image where all commands run
- image: circleci/openjdk:8-jdk
environment:
# Customize the JVM maximum heap limit
MAVEN_OPTS: -Xmx4096m
steps:
# apply the JCE unlimited strength policy to allow the PSK 256 bit key length
# solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
- run:
name: Getting JCE unlimited strength policy to allow the 256 bit keys
command: |
curl -L --cookie 'oraclelicense=accept-securebackup-cookie;' http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
unzip -o /tmp/jce_policy.zip -d /tmp
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
- checkout # check out source code to working directory
# Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
# https://circleci.com/docs/2.0/caching/
- restore_cache:
keys:
- v1-dependencies-{{ checksum "pom.xml" }}
- run:
name: Full Build (H2)
command: mvn dependency:go-offline -DskipITs install
- save_cache: # saves the project dependencies
paths:
- ~/.m2
key: v1-dependencies-{{ checksum "pom.xml" }}
# save tests
- run:
name: Save test results
command: |
mkdir -p ~/junit/
find . -type f -regex ".*/target/surefire-reports/.*xml" -exec cp {} ~/junit/ \;
mkdir -p ~/checkstyle/
find . -type f -regex ".*/target/checkstyle-reports/.*xml" -exec cp {} ~/junit/ \;
when: always
- store_test_results:
path: ~/junit
- store_artifacts:
path: ~/junit
# publish the coverage report to codecov.io
- run: bash <(curl -s https://codecov.io/bash)
# This job runs specific Ilp-over-HTTP Integration Tests (ITs) found in the `connector-it` module.
# by executing a special maven command that limits ITs to the test-group `IlpOverHttp`.
integration_tests_ilp_over_http:
working_directory: ~/repo
machine:
image: ubuntu-1604:201903-01
environment:
MAVEN_OPTS: -Xmx4096m
JAVA_HOME: /usr/lib/jvm/jdk1.8.0/
steps:
# apply the JCE unlimited strength policy to allow the PSK 256 bit key length
# solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
- run:
name: Getting JCE unlimited strength policy to allow the 256 bit keys
command: |
curl -L --cookie 'oraclelicense=accept-securebackup-cookie;' http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
unzip -o /tmp/jce_policy.zip -d /tmp
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
- checkout # check out source code to working directory
# Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
# https://circleci.com/docs/2.0/caching/
- restore_cache:
keys:
- v1-dependencies-{{ checksum "pom.xml" }}
# gets the project dependencies and installs sub-module deps
- run:
name: Install Connector Dependencies
command: mvn dependency:go-offline -DskipTests -DskipITs install
- save_cache: # saves the project dependencies
paths:
- ~/.m2
key: v1-dependencies-{{ checksum "pom.xml" }}
- run:
name: Run Integration Tests (ITs)
command: |
cd ./connector-it
docker network prune -f
mvn verify -Pilpoverhttp
# publish the coverage report to codecov.io
- run: bash <(curl -s https://codecov.io/bash)
# This job runs specific Settlement-related Integration Tests (ITs) found in the `connector-it` module.
# by executing a special maven command that limits ITs to the test-group `Settlement`.
integration_tests_settlement:
working_directory: ~/repo
machine:
image: ubuntu-1604:201903-01
environment:
MAVEN_OPTS: -Xmx4096m
JAVA_HOME: /usr/lib/jvm/jdk1.8.0/
steps:
# apply the JCE unlimited strength policy to allow the PSK 256 bit key length
# solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
- run:
name: Getting JCE unlimited strength policy to allow the 256 bit keys
command: |
curl -L --cookie 'oraclelicense=accept-securebackup-cookie;' http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
unzip -o /tmp/jce_policy.zip -d /tmp
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
- checkout # check out source code to working directory
# Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
# https://circleci.com/docs/2.0/caching/
- restore_cache:
keys:
- v1-dependencies-{{ checksum "pom.xml" }}
# gets the project dependencies and installs sub-module deps
- run:
name: Install Connector Dependencies
command: mvn dependency:go-offline -DskipTests -DskipITs install
- save_cache: # saves the project dependencies
paths:
- ~/.m2
key: v1-dependencies-{{ checksum "pom.xml" }}
- run:
name: Run Integration Tests (ITs)
command: |
cd ./connector-it
docker network prune -f
mvn verify -Psettlement
# publish the coverage report to codecov.io
- run: bash <(curl -s https://codecov.io/bash)
# This job runs specific Coordination-related Integration Tests (ITs) found in the `connector-it` module.
# by executing a special maven command that limits ITs to the test-group `Coordination`.
integration_tests_coordination:
working_directory: ~/repo
machine:
image: ubuntu-1604:201903-01
environment:
MAVEN_OPTS: -Xmx4096m
JAVA_HOME: /usr/lib/jvm/jdk1.8.0/
steps:
# apply the JCE unlimited strength policy to allow the PSK 256 bit key length
# solution from http://qiita.com/yoskhdia/items/f4702a3abc4467de69b0
- run:
name: Getting JCE unlimited strength policy to allow the 256 bit keys
command: |
curl -L --cookie 'oraclelicense=accept-securebackup-cookie;' http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip -o /tmp/jce_policy.zip
unzip -o /tmp/jce_policy.zip -d /tmp
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security/US_export_policy.jar
sudo mv -f /tmp/UnlimitedJCEPolicyJDK8/local_policy.jar $JAVA_HOME/jre/lib/security/local_policy.jar
- checkout # check out source code to working directory
# Restore the saved cache after the first run or if `pom.xml` has changed. Read about caching dependencies:
# https://circleci.com/docs/2.0/caching/
- restore_cache:
keys:
- v1-dependencies-{{ checksum "pom.xml" }}
# gets the project dependencies and installs sub-module deps
- run:
name: Install Connector Dependencies
command: mvn dependency:go-offline -DskipTests -DskipITs install
- save_cache: # saves the project dependencies
paths:
- ~/.m2
key: v1-dependencies-{{ checksum "pom.xml" }}
- run:
name: Run Integration Tests (ITs)
command: |
cd ./connector-it
docker network prune -f
mvn verify -Pcoordination
# publish the coverage report to codecov.io
- run: bash <(curl -s https://codecov.io/bash)
docker_image:
working_directory: ~/repo
machine:
image: ubuntu-1604:201903-01
environment:
MAVEN_OPTS: -Xmx4096m
JAVA_HOME: /usr/lib/jvm/jdk1.8.0/
steps:
- checkout
- restore_cache:
keys:
- v1-dependencies-{{ checksum "pom.xml" }}
- run:
name: Deploy docker image
command: mvn verify -DskipTests -Pdocker,dockerHub -Dcontainer.version=nightly -Djib.httpTimeout=60000 -Djib.to.auth.username=${DOCKERHUB_USERNAME} -Djib.to.auth.password=${DOCKERHUB_API_KEY}
workflows:
version: 2
# In CircleCI v2.1, when no workflow is provided in config, an implicit one is used. However, if you declare a
# workflow to run a scheduled build, the implicit workflow is no longer run. You must add the job workflow to your
# config in order for CircleCI to also build on every commit.
commit:
jobs:
- build
- integration_tests_ilp_over_http:
requires:
- build
- integration_tests_settlement:
requires:
- build
- integration_tests_coordination:
requires:
- build
nightly:
triggers:
- schedule:
cron: "0 0 * * *"
filters:
branches:
only:
- master
jobs:
- build
- integration_tests_ilp_over_http:
requires:
- build
- integration_tests_settlement:
requires:
- build
- integration_tests_coordination:
requires:
- build
- docker_image:
requires:
- integration_tests_ilp_over_http
- integration_tests_settlement
- integration_tests_coordination
GitHub Actions
https://www.infoq.com/news/2020/02/github-actions-api/
GitHub Actions makes it easy to automate all your software workflows,
now with world-class CI/CD. Build, test, and deploy your code right
from GitHub. Make code reviews, branch management, and issue triaging
work the way you want.
- The GitHub Actions API adds REST API endpoints for managing artifacts,
  secrets, runners, and workflows.
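  • Minimal workflow sketch (.github/workflows/ci.yml; build commands are illustrative):
    | name: CI
    | on: [push, pull_request]
    | jobs:
    |   build:
    |     runs-on: ubuntu-latest
    |     steps:
    |       - uses: actions/checkout@v4
    |       - run: npm ci && npm test
  • List a repo's workflows through the REST API (token/OWNER/REPO are placeholders):
    $ curl -H "Authorization: token $GITHUB_TOKEN" \
           https://api.github.com/repos/OWNER/REPO/actions/workflows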
Kayenta Canary Testing
@[https://github.com/spinnaker/kayenta]
- Kayenta platform: Automated Canary Analysis (ACA)
SonarQube (QA)
Apply quality metrics to source-code
Source{d}: Large Scale Code Analysis with IA
@[https://www.linux.com/blog/holberton/2018/10/sourced-engine-simple-elegant-way-analyze-your-code]
- source{d} offers a suite of applications that uses machine learning on code
to complete source code analysis and assisted code reviews. Chief among them
is the source{d} Engine, now in public beta; it uses a suite of open source
tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source
code analysis. Some key uses of the source{d} Engine include language
identification, parsing code into abstract syntax trees, and performing SQL
Queries on your source code such as:
- What are the top repositories in a codebase based on number of commits?
- What is the most recent commit message in a given repository?
- Who are the most prolific contributors in a repository
Charles Proxy
@[https://www.charlesproxy.com/]
• HTTP proxy / HTTP monitor / Reverse Proxy enabling developers to view HTTP+SSL/HTTPS
  traffic between the local machine and the Internet, including requests, responses and HTTP headers
(which contain the cookies and caching information).
Load Balancer
•ºHTTP balanced proxy Quick Setup with HAProxyº
REF: @[https://github.com/AKSarav/haproxy-nodejs-redis/blob/master/haproxy/]
┌──haproxy/haproxy.cfg ──────────── ┌ haproxy/Dockerfile ──────────────────────────
│ global │ FROM haproxy
│ daemon │ COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
│ maxconn 256 └──────────────────────────────────────────────
│
│ defaults
│ mode http
│ timeout connect 5000ms
│ timeout client 50000ms
│ timeout server 50000ms
│
│ frontend http─in
│ bind *:80 ← Listen on port 80 on all interfaces
│ default_backend servers
│
│ backend servers ← Forward to single backend "servers"
│ server server1 host01:8081 maxconn 32 ← composed of (single server) "server1"
│ at host01:8081
└─────────────────────────────────────────
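  • Ex: build ⅋ run the balancer above (image/container names are illustrative):
    $ docker build -t my-haproxy ./haproxy
    $ docker run --rm my-haproxy haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg   ← config syntax check
    $ docker run -d --name lb -p 80:80 my-haproxy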
•ºReverse Proxyº [TODO]
•ºForward Proxyº [TODO]
•ºDNS Recordsº
┌─────────────────────────────────────────────┐
│ A root domain name IP address │
│ Ex: mydomain.com → 1.2.3.4 │
│ Not recomended for changing IPs │
├─────────────────────────────────────────────┤
│ CNAME maps name2 → name1 │
│ Ex: int.mydomain.com → mydomain.com │
├─────────────────────────────────────────────┤
│ Alias Amazon Route 53 virtual record │
│ to map AWS resources like ELBs, │
│ CloudFront, S3 buckets, ... │
├─────────────────────────────────────────────┤
│ MX mail server name → IP address │
│ Ex: smtp.mydomain.com → 1.2.3.4 │
├─────────────────────────────────────────────┤
│ AAAA A record for IPv6 addresses │
└─────────────────────────────────────────────┘
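  • Ex: query records with dig (domains/values are illustrative):
    $ dig +short A     mydomain.com          ← 1.2.3.4
    $ dig +short CNAME int.mydomain.com      ← mydomain.com.
    $ dig +short MX    mydomain.com
    $ dig +short AAAA  mydomain.com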
• Summary extracted from @[https://isovalent.com/blog/post/2021-12-08-ebpf-servicemesh]
@[https://www.infoq.com/news/2022/01/ebpf-wasm-service-mesh/]
  Service Mesh: Takes care of (network-)distributed concerns (visibility, security, balancing,
service discovery, ...)
• SERVICE MESH EVOLUTION
1st GENERATION. Each app 2nd Generation. A common 3rd Generation. Sidecar
links against a library. sidecar is used. functionality moved to
linux kernel usinb eBFP
┌─ App1 ────┐ ┌─ App2 ────┐ ┌─ App1 ────┐ ┌─ App2 ────┐
│ ┌───────┐│ │ ┌───────┐│ │ │ │ │
│ │Service││ │ │Service││ └───────────┘ └───────────┘ ┌─ App1 ────┐ ┌─ App2 ────┐
│ │Mesh ││ │ │Mesh ││ ┌───────────┐ ┌───────────┐ │ │ │ │
│ │Library││ │ │Library││ │ServiceMesh│ │ServiceMesh│ └───────────┘ └───────────┘
│ └───────┘│ │ └───────┘│ │SideCar │ │SideCar │ ┌─ Kernel ────────────────┐
└───────────┘ └───────────┘ └───────────┘ └───────────┘ │ ┌─ eBFP Service Mesh ┐ │
┌─ Kernel ────────────────┐ ┌─ Kernel ────────────────┐ │ └────────────────────┘ │
│ ┌─ TCP/IP ─┐ │ │ ┌─ TCP/IP ─┐ │ │ ┌─ TCP/IP ─┐ │
│ └──────────┘ │ │ └──────────┘ │ │ └──────────┘ │
│ ┌─ Network─┐ │ │ ┌─ Network─┐ │ │ ┌─ Network─┐ │
│ └──────────┘ │ │ └──────────┘ │ │ └──────────┘ │
└─────────────────────────┘ └─────────────────────────┘ └─────────────────────────┘
Envoy, Linkerd, Nginx,... Cilium
or kube-proxy
App1 ←→ Kernel TCP/IP App1 ←→ SideCar1 App1 ←→ Kernel eBFP
Kernel TCP/IP ←→ App2 SideCar1 ←→ Kernel TCP/IP Kernel eBFP ←→ App2
Kernel TCP/IP ←→ Sidecar2
Sidecar2 ←→ App2
nginx.conf summary
REF: @[https://raazkumar.com/tutorials/nginx/nginx-conf/]
• nginx == fast HTTP reverse proxy
+ reliable load balancer
+ high performance caching server
+ full-fledged web platform
•ºnginx.conf building blocksº
  - worker processes  : should be equal to the number of cores of the server (or auto)
- worker connection : 1024 (per thread. nginx doesn't block)
- rate limiting : prevent brute force attacks.
  - proxy buffers     : (when used as proxy server) limits how much data to store as cache
  - compression       : gzip / brotli compression
- upload file size : it should match php max upload size and nginx client max body size.
- timeouts : php to nginx communication time.
- log rotation : error log useful to know the errors and monitor resources
  - fastcgi cache     : very important to boost the performance of static sites.
  - SSL Configuration : there are default settings available with nginx itself
(also see ssl performance tuning).
•ºExample nginx.conf:º
user www-data;
  load_module modules/my_favourite_module.so;
pid /run/nginx.pid;
| Alternative global config for
| [4 cores, 8 threads, 32GB RAM]
| handling 50000request/sec
|
worker_processes auto; | worker_processes 8;
| worker_priority -15;
include /etc/nginx/modules-enabled/*.conf; |
worker_rlimit_nofile 100000; | worker_rlimit_nofile 400000;
| timer_resolution 10000ms;
|
events { | events {
worker_connections 1024; | worker_connections 20000;
multi_accept on; | use epoll;
} | multi_accept on;
| }
Bºhttp { ← global configº
index index.php index.html index.htm;
º# Basic Settingsº
sendfile on;
tcp_nopush on;
tcp_nodelay on;
sendfile_max_chunk 512;
keepalive_timeout 300;
keepalive_requests 100000;
types_hash_max_size 2048;
server_tokens off;
server_names_hash_bucket_size 128;
# server_name_in_redirect off;
include /etc/nginx/mime.types; ← ········· types {
default_type application/octet-stream; text/html html htm shtml;
## application/javascript js;
# SSL Settings ...
## }
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
#ssl_prefer_server_ciphers on;
#rate limit zone
limit_req_zone $binary_remote_addr zone=one:10m rate=3r/m;
#buffers
client_body_buffer_size 128k;
client_max_body_size 10m;
client_header_buffer_size 32k;
large_client_header_buffers 16 256k;
output_buffers 1 32k;
postpone_output 1460;
#Porxy buffers
proxy_buffer_size 256k;
proxy_buffers 8 128k;
proxy_busy_buffers_size 256k;
proxy_max_temp_file_size 2048m;
proxy_temp_file_write_size 2048m;
## fast cgi PHP
fastcgi_buffers 8 16k;
fastcgi_buffer_size 32k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
#static caching css/js/img
open_file_cache max=10000 inactive=5m;
open_file_cache_valid 2m;
open_file_cache_min_uses 1;
open_file_cache_errors on;
#timeouts
client_header_timeout 3m;
client_body_timeout 3m;
send_timeout 3m;
# Logging Settings
    log_format main_ext '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '"$http_user_agent" "$http_x_forwarded_for" '
                        '"$host" sn="$server_name" '
                        'rt=$request_time '
                        'ua="$upstream_addr" us="$upstream_status" '
                        'ut="$upstream_response_time" ul="$upstream_response_length" '
                        'cs=$upstream_cache_status';
access_log /dev/stdout main_ext;
    error_log  /var/log/nginx/error.log warn;   # see also: nginx error log ⅋ common errors
##
# Gzip Settings #brotil
##
gzip on;
    gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript ↩
text/xml application/xml application/xml+rss text/javascript ↩
application/x-font-ttf font/opentype image/svg+xml image/x-icon;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
Bºserver { ← Domain levelº
listen 0.0.0.0:443 rcvbuf=64000 sndbuf=120000 backlog=20000 ssl http2;
server_name example.com www.example.com;
keepalive_timeout 60;
ssl on;
ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
ssl_ciphers 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:↩
DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:↩
!aNULL:!MD5:!DSS:!RC4';
ssl_prefer_server_ciphers on;
ssl_session_cache shared:TLSSL:30m;
ssl_session_timeout 10m;
ssl_buffer_size 32k;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
more_set_headers "X-Secure-Connection: true";
add_header Strict-Transport-Security max-age=315360000;
root /var/www;
Bº location { ← Directory levelº
root /var/www;
index index.php index.html;
}
Bº location ~ .php$ {º
fastcgi_keep_conn on;
fastcgi_pass unix:/run/php5.6-fpm.sock;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /var/www$fastcgi_script_name;
include fastcgi_params;
fastcgi_intercept_errors off;
fastcgi_buffer_size 32k;
fastcgi_buffers 32 32k;
fastcgi_connect_timeout 5;
}
Bº location ~* ^.+.(jpg|jpeg|gif|png|svg|ico|css|less|xml|html?|swf|js|ttf)$ {º
root /var/www;
expires 10y;
}
}
- /etc/nginx/conf.d/*: user defined config files
See also:
@[https://github.com/trimstray/nginx-admins-handbook]
@[https://github.com/tldr-devops/nginx-common-configuration]
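  • After editing nginx.conf, validate and reload without dropping connections:
    $ sudo nginx -t              ← syntax-check nginx.conf and all included files
    $ sudo nginx -s reload       ← (or: sudo systemctl reload nginx)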
Monitoring for DevOps
Infra vs App Monitoring
•ºInfrastructure Monitoring:º
· Prometheus + Grafana (Opinionated)
Prometheus periodically pulls multidimensional data from different apps/components.
Grafana allows to visualize Prometheus data in custom dashboards.
(Alternatives include Monit, Datadog, Nagios, Zabbix, ...)
•ºApplication Monitoringº:
  · OpenTelemetry: replaces OpenTracing and OpenCensus
    (Cloud Native Computing Foundation projects).
    It can also act as a front-end (instrumentation/collection layer)
    for Jaeger and other back-ends.
· Jaeger, New Relic: (Very opinionated)
(Other alternatives include AppDynamics, Instana, ...)
• Log Management: (Opinionated)
· Elastic Stack
(Alternative include Graylog, Splunk, Papertrail, ...)
    Elasticsearch has evolved through the years to become a
    full analytics platform.
MUCH MORE DETAILED INFORMATION IS AVAILABLE AT:
@[../Architecture/architecture_map.html]
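  • Ex: minimal Prometheus scrape-config sketch (prometheus.yml; targets are illustrative):
    | global:
    |   scrape_interval: 15s
    | scrape_configs:
    |   - job_name: node
    |     static_configs:
    |       - targets: ['host01:9100', 'host02:9100']   # node_exporter endpoints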
External Links
- User Guide:
@[https://docs.ansible.com/ansible/latest/user_guide/index.html]
- Ansible in practice[Video]
@[https://sysadmincasts.com/episodes/46-configuration-management-with-ansible-part-3-4]
- Playbooks best practices:
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html]
  Ronald Kurr has a lot of very useful and professional Ansible-powered code to
  provision JVM, Python, Desktop, ... machines. For example:
- Ansible Study Group Labs
@[https://github.com/kurron/ansible-study-group-labs]
- An OpenVPN server in the cloud
https://github.com/kurron/aws-open-vpn/blob/master/ansible/playbook.yml
  - Installation of tools that any self-respecting Operations person loves and needs.
https://github.com/kurron/ansible-role-operations/blob/master/tasks/main.yml
  - Installation of tools that any self-respecting JVM developer loves and needs.
https://github.com/kurron/ansible-role-jvm-developer/blob/master/tasks/main.yml
  - Installation of tools that any self-respecting AWS command-line user loves and needs.
@[https://github.com/kurron/ansible-role-aws/blob/master/tasks/main.yml]
- Connect to a Juniper VPN under Ubuntu.
@[https://github.com/kurron/ansible-role-jvpn/blob/master/tasks/main.yml]
  - Installation of tools that any self-respecting Atlassian user loves and needs.
@[https://github.com/kurron/ansible-role-atlassian/blob/master/tasks/main.yml]
  - Installation of tools that any self-respecting cross-platform .NET developer loves and needs.
@[https://github.com/kurron/ansible-role-dot-net-developer/blob/master/tasks/main.yml]
- Docker container that launches a pipeline of Docker containers that
    ultimately deploy Docker containers via Ansible into EC2 instances
@[https://github.com/kurron/docker-ec2-pipeline]
- Increase operating system limits for Database workloads.
@[https://github.com/kurron/ansible-role-os-limits/blob/master/tasks/main.yml]
- Creation of an Amazon VPC. Public and private subnets are created
in all availability zones.
@[https://github.com/kurron/ansible-role-vpc]
- Command line tools
@[https://docs.ansible.com/ansible/latest/user_guide/command_line_tools.html]
- run a single task 'playbook' against a set of hosts
@[https://docs.ansible.com/ansible/latest/cli/ansible.html]
- ansible-config view, edit, and manage ansible configuration
@[https://docs.ansible.com/ansible/latest/cli/ansible-config.html]
- ansible-console interactive console for executing ansible tasks
@[https://docs.ansible.com/ansible/latest/cli/ansible-console.html]
  - ansible-galaxy manage Ansible roles in shared repositories (defaults to [https://galaxy.ansible.com])
@[https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html]
- display/dump configured inventory:
@[https://docs.ansible.com/ansible/latest/cli/ansible-inventory.html]
@[https://docs.ansible.com/ansible/latest/cli/ansible-pull.html]
ansible-pull pulls playbooks from a VCS repo and executes them for the local host
@[https://docs.ansible.com/ansible/latest/cli/ansible-vault.html]
ansible-vault encryption/decryption utility for Ansible data files
Ansible Summary Bºansible-docº @[https://docs.ansible.com/ansible/latest/cli/ansible-doc.html] @[https://github.com/tldr-pages/tldr/blob/master/pages/common/ansible*] - Display information on modules installed in Ansible libraries. Display a terse listing of plugins and their short descriptions.$ ansible-doc --list \ ← List available action plugins (modules): --type $pluginType (optional) filter by type $ ansible-doc $plugName \ ← Show information for plugin $ --type $pluginType (optional) filter by type $ ansible-doc \ ← Show the playbook snippet for $ --snippet $plugName action plugin (modules) $ --jsonº (optional) dump as JSONBºansible-playbookº @[https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html] Execute tasks defined in playbook over SSH.$ ansible-playbook $playbook \ ← Run tasks in playbook: -i $inventory_file01 \ ← Optional. def /etc/ansible/hosts → ./hosts -i $inventory_file02 \ ← Optional. -e "$var1=val1 $var2=val2" \ ← Optional. Inject env.vars into task execution -e "@$variables.json" \ ← Optional. Inject env.vars into task execution from json --tags $tag1,tag2 \ ← Optional. Run tasks in playbook matching tags. --start-at $task_name \ ← Optional. Run tasks in playbook starting at task. --ask-vault-passº ← alt.1. Ask for secrets interatively (alt.B_1 --vault-password-fileºpassFileº) (alt.B_2 export ANSIBLE_VAULT_PASSWORD_FILE=...) See @[#ansible_handling_secrets] for more info on secret managementBºansible-galaxyº: Create and manage Ansible roles. @[https://docs.ansible.com/ansible/latest/cli/ansible-galaxy.html]$ ansible-galaxy install $username.$role_name ← Install a role $ ansible-galaxy remove $username.$role_name ← Remove a role $ ansible-galaxy list ← List installed roles $ ansible-galaxy search $role_name ← Search for a given role: $ ansible-galaxy init $role_name ← Create a new roleBºansibleº: Manage groups of computers (/etc/ansible/hosts) over SSH$ ansible $group --list-hosts ← List hosts belonging to a group $ ansible $group -m ping ← Ping host group $ ansible $group -m setup ← Display facts about host-group $ ansible $group -m command -a 'command' \ ← Execute a command on host-group $ --become \ ← (Optional) add admin privileges $ -i inventory_file ← (Optional) Use custom inventoryºlayout best practicesº ║ ºControllerº 1 ←→ N ┌─→ ºModuleº (Recommended, non─mandatory) ║ │ best practice file layout approach: ║ ºMachine º │ (community pre─packaged) ──────────────────────────────────────────────── ║ ^ │ ─ abstracts recurrent system task production # inventory file ║─ host with │ ─ Provide the real power of Ansible staging # inventory file ║ installed Ansible │ avoiding custom scripts ║ with modules │ ─ $ ansible─doc "module_name" group_vars/ # ← assign vars. ║ prepakaged ←─────────┘ ─ Ex: # to particular groups. ║ andºconfig.filesº user: name=deploy group=web all.yml # ← Ex: ║ └─┬────────┘ ^ ^ ^ │--- ║ 1) $ANSIBLE_CONFIG module ensure creation of'deploy' │ntp: ntp.ex1.com ║ 2) ./ansible.cfg name account in 'web' group │backup: bk.ex1.com ║ 3) ~/.ansible.cfg (executions are idempotent) ║ 4) /etc/ansible/ansible.cfg webservers.yml # ← Ex: ║ Ex: │--- ║ [defaults] │apacheMaxClients: 900 ║ inventory = hosts │apacheMaxRequestsPerChild: 3000 ║ remote_user = vagrant ║ private_key_file = ~/.ssh/private_key dbservers.yml # ← Ex: ║ host_key_checking = False │--- ║─ "host" inventory file │maxConnectionPool: 100 ║ listing target servers,groups │... 
║ ║ host_vars/ ║ Role N ←────→ 1 Playbook 1 ←─────→ N tasks hostname1.yml # ←assign variables ║ ^ ^ ^ hostname2.yml # to particular systems ║ Mechanism to ─ main yaml defining single proc. ║ share files/... task to be executed to execute library/ # (opt) custom modules ║ for reuse *2 ─ Created by DevOps team module_utils/ # (opt) custom module_utils ║@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html] # to support modules ║ filter_plugins/ # (opt) filter plugins ║ºRUN SEQUENCEº ║ |playbook| 1←→N |Play| 1 → apply to → N |Hosts| webservers.yml # ← Ex playbook: ║ ↑ │--- # Map ║ 1 │- hosts: webservers # ← webservers─group ║ └─(contains)→ N |Task| 1→1 |Module| │ # to ║ ┌────────────────────────────────┘ │ roles: # ← roles ║ └→ each task is run in parallel across hosts in order │ - common # ║ waiting until all hosts have completed the task before │ - webtier # ║ moving to the next.(default exec.strategy, can be switched to "free") ║ | - name: .... dbservers.yml # ← Ex playbook for db─tier ║ | hosts: groupTarget01 site.yml #ºmaster playbookº ║ | Oºserial:º # ← Alt1: serial schedule-tunning. │--- (whole infra) ║ | Oº - 1 # ← first in 1 host │# file: site.yml ║ | Oº - "10%" # ← if OK, runs 10% simultaneously │- import_playbook: webservers.yml ║ | Oº - 30 # ← finally 30 hosts in parallel │- import_playbook: dbservers.yml ║ | tasks: ... ║ |#Bºstrategy: freeº ← Alt2: Don't wait for other hosts ║ ºRole layoutº ║º|Playbook Play|º roles/ ║ INPUT ├ webtierRole/ # ← same layout that common ║ |Playbook| → Oºansible─playbookº → Gather ────→ exec tasks │ ... ║ ^ host facts │ ├ monitoringRole/ # ← same layout that common ║ exec tasks on (network, v │ ... ║ the target hostº*1º storage,...) async Handlers ├─common/ # ← Common Role. ║ └────┬────┘ use to: │ ├─tasks/ # ║ Ussually gathered facts service restart, │ │ └─ main.yml # ║ are used for ... │ ├─handlers/ # ║ OºConditionalºInclude. Ex: │ │ └─ main.yml # ║ ... │ ├─templates/ # ║ -Oºincludeº: Redhat.yml │ │ └─ ntp.conf.j2 # ← notice .j2 extension ║ Oºwhenº: ansible_os_family == 'Redhat' │ ├─files/ # ║ Reminder: │ │ ├─ bar.txt # ← input to copy─resource ║@[https://docs.ansible.com/ansible/2.4/playbooks_reuse_includes.html] │ │ └─ foo.sh # ← input to script─resource ║ "include" ← evaluated @ playbook parsing │ ├─vars/ # ║ "import" ← evaluated @ playbook execution │ │ └─ main.yml # ← role related vars ║ "import_playbook"← plays⅋tasks in each playbook │ ├─defaults/ # ║ "include_tasks" │ │ └─ main.yml # ← role related vars ║ "import_tasks" │ │ ← with lower priority ║ │ ├─meta/ # ║ºcommand moduleº │ │ └─ main.yml # ← role dependencies ║─ Ex: │ ├─library/ # (opt) custom modules ║. $ ansible server01 -m command -a uptime │ ├─module_utils/ # (opt) custom module_utils ║ ^^^^^^^^^^ │ └─lookup_plugins/# (opt) a given 'lookup_plugins'║ default module. Can be ommited │ is used ║ testserver │ success │ rc=0 ⅋⅋ ... ║ 17:14:07 up 1:16, 1 user, load average: 0.16, ... ═══════════════════════════════════════════════════╩════════════════════════════════════════════════════════════════ º*1:º@[https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html]
playbook-layout ºTASK vs ROLES PLAYBOOK LAYOUTº ──────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────────── PLAYBOOK YAML LAYOUT WITHºTASKSº │ PLAYBOOK YAML LAYOUT WITHºROLESº ──────────────────────────────────────────────────────────┼─────────────────────────────────────────────────────────────────── --- │ ºbased on a well known file structureº. - hosts: webservers ← targeted (ssh) servers │ --- connection: ssh ← :=ssh, localhost,. .. │ - name : my list of Task name │ hosts: database vars: ← yaml-file-scoped var.list │ vars_files: - myYmlVar01 : "myVal01" │ - secrets.yml │ enviroment: ← runtime-scoped env.var.list │ Bº# pre_tasks execute before roles º - myEnvVar01 : "myEnv01" │ Bºpre_tasksº: │ - name: update the apt cache tasks: ← ordered task list to │ apt: update_cache=yes be executed │ - name: install apache2 ← task1 │ roles: apt: | │ - role: BºdatabaseRoleº name=apache2 │ # next vars override those in (vars|defaults)/main.yml update_cache=yes │ database_name: " {{ myProject_ddbb_name }}" state=lates │ database_user: " {{ myProject_ddbb_user }}" notify: │ - { role: consumer, when: tag | default('provider') == 'consumer'} - ºrestart-apache2-idº │ - { role: provider, when: tag | default('provider') == 'provider'} - name: next_task_to_exec │ "module": ... │ │ Bº# post_tasks execute after roles º handlers: ← tasks triggered by events │ Bºpost_tasksº: - name: restart-apache2 ← ºname as a Unique-IDº │ - name: notify Slack service: name=apache2 state=restarted │ local_action: ˃ │ slack - hosts: localhost │ domain=acme.slack.com connection: local │ token={{ slack_token }} gather_facts: False │ msg="database {{ inventory_hostname }} configured" │ vars: │ ... ... │ =========================== │ roles search path: ./roles → /etc/ansible/roles │ role file layout: │ roles/B*databaseRole*/tasks/main.yml │ roles/B*databaseRole*/files/ │ roles/B*databaseRole*/templates/ │ roles/B*databaseRole*/handlers/main.yml │ roles/B*databaseRole*/vars/main.yml # should NO be overrriden │ roles/B*databaseRole*/defaults/main.yml # can be overrriden │ roles/B*databaseRole*/meta/main.yml # dependency info about role ──────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────── - hosts: web_servers tasks: - shell: /usr/bin/foo Oºregisterº:ºfoo_resultº ←OºSTDOUT exec ouput to ansible varº ignore_errors: True Json schema output depends on module STDOUT to .rc in case. - shell: /usr/bin/bar Use -v on each module to investigate when: ºfoo_resultº.rc == 5 Error Handling[qa] - default behavior: - take a host out of the play if a task fails and continue with the other hosts. -Oºserialº, Oºmax_fail_percentageº can be used to define a playbook-play as failed. @[https://docs.ansible.com/ansible/2.5/user_guide/playbooks_delegation.html#maximum-failure-percentage] - Using 'block' (task grouping) inside tasks: - hosts: app-servers Oºmax_fail_percentage:º"10%" ← abort if surpassed. tasks: - name: Take VM out of the load balancer - name: Create a VM snapshot before the app upgrade - block: ← scope error/recovery/rollback - name: Upgrade the application - name: Run smoke tests ºrescue:º - name: Revert a VM to the snapshot after a failed upgrade ºalways:º - name: Re-add webserver to the loadbalancer - name: Remove a VM snapshot
inventory file - Defaults to: /etc/ansible/hosts - if marked as executable (+x) it's executed and the json-output taken as effective-inventory. - script must then support '--host=' and '--list=' flags Ex: hosts inventory file ┌─→ Ex: test("ssh─ping") host in inventory ───────────────────────── │ using 'ping' module: Gºdevelopmentº ←─────────┘ $ ansible -i ./hostsº-m pingº Gºdevelopmentº Oºproductionº [all:vars] group patterns ntp_server=ntp.ubuntu.com Other patterns:A All hosts Oºallº [Oºproductionº:vars] All Oºº* db_primary_host=rhodeisland.example.com Union devOº:ºstaging db_replica_host=virginia.example.com Intersection stagingOº:⅋ºdatabase db_name=widget_production Exclusion devOº:!ºqueue rabbitmq_host=pennsylvania.example.com Wildcard Oºº*.example.com Range webOº[5:10]º [Gºdevelopmentº:vars] Regex O*~web\d+\.example\.(com|org)* db_primary_host=quebec.example.com db_name=widget_staging rabbitmq_host=quebec.example.com [Gºvagrantº:vars] db_primary_host=vagrant3 db_name=widget_vagrant rabbitmq_host=vagrant3 [Gºvagrantº] Gºvagrant1 ansible_host=127.0.0.1 ansible_port=2222º Gºvagrant2 ansible_host=127.0.0.1 ansible_port=2200º [web_group01] Oºgeorgia.example.comº Oºnewhampshire.example.comº Oºnewjersey.example.comº Gºvagrant1º [rabbitmq] Oºpennsylvania.example.comº Gºvagrant2º [django:children] ← Group of groups web_group01 rabbitmq [web_group02] web_group01[01:20].example.com ← ranges web-[a-t].example.com ←
variable "scopes" Playbook Variable Main Scopes -ºGlobal:ºset by config, ENV.VARS and cli -ºPlay :ºeach play and contained structures, vars|vars_files|vars_prompt entries role defaults -ºHost :ºdirectly associated to a host, like inventory, include_vars, facts or registered task outputs Variable scope Overrinding rules: - The more explicit you get in scope, the more precedence 1 command line values (eg “-u user”) º(SMALLEST PRECEDENCE)º 2 role defaults 3 *1 inventory file || script group vars 4 *2 inventory group_vars/all 5 *2 playbook group_vars/all 6 *2 inventory group_vars/* 7 *2 playbook group_vars/* 8 *1 inventory file or script host vars 9 *2 inventory host_vars/* 10 *2 playbook host_vars/* 11 *4 host facts || cached set_facts 12 play vars 13 play vars_prompt 14 play vars_files 15 role vars (defined in role/vars/main.yml) 16 block vars (only for tasks in block) 17 task vars (only for the task) 18 include_vars 19 set_facts || registered vars 20 role (and include_role) params 21 include params 22 (-e) extra vars º(BIGEST PRECEDENCE)º ↑ *1 Vars defined in inventory file or dynamic inventory *2 Includes vars added by ‘vars plugins’ as well as host_vars and group_vars which are added by the default vars plugin shipped with Ansible. *4 When created with set_facts’s cacheable option, variables will have the high precedence in the play, but will be the same as a host facts precedence when they come from the cache.
Ad-hoc command
@[https://www.howtoforge.com/ansible-guide-ad-hoc-command/]
  - Ad-hoc commands allow you to perform tasks without first creating a playbook,
    e.g. rebooting servers, managing services, editing a line in a config file,
    copying a file to a single host, or installing a single package.
  - An ad-hoc command only needs two parameters: the host group you want to run
    the task against and the Ansible module to run (see examples below).
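  Ex: (group names from the inventory example; packages/paths are illustrative)
    $ ansible web_group01 -m ping
    $ ansible web_group01 -m apt     -a "name=nginx state=latest" --become
    $ ansible rabbitmq    -m service -a "name=rabbitmq-server state=restarted" --become
    $ ansible all         -m copy    -a "src=./motd dest=/etc/motd" --become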
Must-know Modules
1) Package management
- module for major package managers (DNF, APT, ...)
- install, upgrade, downgrade, remove, and list packages.
- dnf_module
- yum_module (required for Python 2 compatibility)
- apt_module
- slackpkg_module
- Ex:
|- name: install Apache,MariaDB
| dnf: # ← dnf,yum,
| name:
| - httpd
| - mariadb-server
| state: latest # ← !=latest|present|...
2) 'service' module
- start, stop, and reload installed packages;
- Ex:
|- name: Start service foo, based on running process /usr/bin/foo
| service:
| name: foo
| pattern: /usr/bin/foo
| state: started # ← started|restarted|...
| args: arg0value
3) 'copy' module
- copies file: local_machine → remote_machine
|- name: Copy a new "ntp.conf file into place,
| copy:
| src: /mine/ntp.conf
| dest: /etc/ntp.conf
| owner: root
| group: root
| mode: '0644' # or u=rw,g=r,o=r
| backup: yes # back-up original if different to new
      4) 'debug' module (print values to STDOUT during execution)
        |- name: Display all variables/facts known for a host
        |  debug:
        |    var: hostvars[inventory_hostname]
        |    verbosity: 2              # ← optional. Only displayed when running with
        $ ansible-playbook demo.yaml º-vvº
5) 'file' module: manage file and its properties.
- set attributes of files, symlinks, or directories.
- removes files, symlinks, or directories.
- Ex:
|- name: Change file ownership/group/perm
| file:
| path: /etc/foo # ← create if needed
| owner: foo
| group: foo
| mode: '0644'
| state: file ← file*|directory|...
6) 'lineinfile' module
- ensures that particular line is in file
- replaces existing line using regex.
- Ex:
|- name: Ensure SELinux is set to enforcing mode
| lineinfile:
| path: /etc/selinux/config
        |    regexp: '^SELINUX='       # ← (optional) pattern of the line to replace
        |    line: SELINUX=enforcing   #   line is added if no match exists,
        |                              #   replaced otherwise (idempotent)
7) 'git' module
- manages git checkouts of repositories to deploy files or software.
- Ex: Create git archive from repo
|- git:
| repo: https://github.com/ansible/ansible-examples.git
| dest: /src/ansible-examples
| archive: /tmp/ansible-examples.zip
8) 'cli_config'
- platform-agnostic way of pushing text-based configurations
to network devices
- Ex1:
| - name: commit with comment
| cli_config:
| config: set system host-name foo
| commit_comment: this is a test
- Ex2:
set switch-hostname and exits with a commit message.
|- name: configurable backup path
| cli_config:
| config: "{{ lookup('template', 'basic/config.j2') }}"
| backup: yes
| backup_options:
| filename: backup.cfg
| dir_path: /home/user
9) 'archive' module
- create compressed archive of 1+ files.
- Ex:
|- name: Compress directory /path/to/foo/ into /path/to/foo.tgz
| archive:
| path:
| - /path/to/foo
| - /path/wong/foo
| dest: /path/to/foo.tar.bz2
| format: bz2
10) Command
- takes the command name followed by a list of space-delimited arguments.
Ex1:
- name: return motd to registered var
command: cat /etc/motd .. ..
become: yes # ← "sudo"
become_user: db_owner # ← effective user
register: mymotd # ← STDOUT to Ansible var mymotd
args: # (optional) command-module args
# (vs executed command arguments)
chdir: somedir/ # ← change to dir
          creates: /etc/a/b   # ← Execute command only if path doesn't exist
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html] host fact → Play Vars - UsingOºsetupº module at play/run-time. Ex: tasks: - ... - name: re-read facts after adding custom fact Bºsetup:ºfilter=ansible_local ← re-run Bºsetup moduleº $ ansible targetHost01 -m Oºsetupº (Output will be similar to) Next facts are available with: - hosts: ... Bºgather_facts: yesº ← Will execute the module "setup" { Bº"ansible_os_family": "Debian", º Bº"ansible_pkg_mgr": "apt", º Bº"ansible_architecture": "x86_64",º b*"ansible_nodename": "ubuntu2.example.com", "ansible_all_ipv4_addresses": [ "REDACTED IP ADDRESS" ], "ansible_all_ipv6_addresses": [ "REDACTED IPV6 ADDRESS" ], "ansible_bios_date": "09/20/2012", ... "ansible_date_time": { "date": "2013-10-02", ... }, Oº"ansible_default_ipv4": {º Oº ... º Oº}, º ... "ansible_devices": { "sda": { "partitions": { ... Oº"size": "19.00 GB",º }, ... }, ... }, ... "ansible_env": { "HOME": "/home/mdehaan", Oº"PWD": "/root/ansible",º Oº"SHELL": "/bin/bash",º ... }, Oº"ansible_fqdn": "ubuntu2.example.com",º Oº"ansible_hostname": "ubuntu2",º ... "ansible_processor_cores": 1, "ansible_ssh_host_key_dsa_public": ... ... } /etc/ansible/facts.d (Local provided facts, 1.3+) Way to provide "locally supplied user values" as opposed to "centrally supplied user values" or "locally dynamically determined values" If any files inside /etc/ansible/facts.d (@remotely managed host) ending in *.fact (JSON, INI, execs generating JSON, ...) can supply local facts Ex: /etc/ansible/facts.d/preferences.fact contains: [general] asdf=1 ← Will be available as {{ ansible_local.preferences.general.asdf }} bar=2 (keys are always converted to lowercase) To copy local facts and make the usable in current play: - hosts: webservers tasks: - name: create directory for ansible custom facts file: state=directory recurse=yes path=/etc/ansible/facts.d - name: install custom ipmi fact copy: src=ipmi.fact dest=/etc/ansible/facts.d ← Copy local facts - name: re-read facts after adding custom fact Bºsetup:ºfilter=ansible_local ← re-run Bºsetup moduleº to make ← locals facts available in current play
Lookups: Query ext.data: file sh KeyValDB .. @[https://docs.ansible.com/ansible/latest/user_guide/playbooks_lookups.html] ... vars: motd_value: "{{Oºlookupº(Bº'file'º, '/etc/motd') }}" ^^^^^^ ^^^^ Use lookup One of: modules - file - password - pipe STDOUT of local exec. - env ENV.VAR. - template j2 tpl evaluation - csvfile Entry in .csv file - dnstxt - redis_kv Redis key lookup - etcd etcd key lookup
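  Ex: (sketch) other lookups inside a playbook:
    - hosts: localhost
      vars:
        build_host: "{{ lookup('pipe', 'hostname') }}"   # STDOUT of a local command
        home_dir  : "{{ lookup('env' , 'HOME') }}"       # ENV.VAR of the control node
      tasks:
        - debug: msg="building on {{ build_host }}, HOME={{ home_dir }}"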
"Jinja2" template ex.
Bºnginx.conf.j2º
server {
listen 80 default_server;
listen [::]:80 default_server ipv6only=on;
listen 443 ssl;
root /usr/share/nginx/html;
index index.html index.htm;
server_name º{{º server_name º}}º;
ssl_certificate º{{º cert_file º}}º;
ssl_certificate_key º{{º key_file º}}º;
location / {
try_files $uri $uri/ =404;
}
}
Bºtemplates/default.conf.tplº
templates/000_default.conf.tpl
|˂VirtualHost *:80˃
| ServerAdmin webmaster@localhost
| DocumentRoot {{ doc_root }}
|
| ˂Directory {{ doc_root }}˃
| AllowOverride All
| Require all granted
| ˂/Directory˃
|˂/VirtualHost˃
Task:
| - name: Setup default virt.host
| template: src=templates/default.conf.tpl dest=/etc/apache2/sites-available/000-default.conf
Bº(j2) filtersº
Oº|º must be interpreted as the "pipe" (input) to filter, not the "or" symbol.
# default if undefined:
- ...
"HOST": "{{ database_host Oº| default('localhost')º }}"
# fail after some debugging
- ...
register: result
Oºignore_errors: Trueº
...
failed_when: resultOº| failedº
...
Oºfailed º True if registered value is a failed task
Oºchangedº True if registered value is a changed task
Oºsuccessº True if registered value is a succeeded task
Oºskippedº True if registered value is a skipped task
Bºpath filtersº
Oºbasename º
Oºdirname º
Oºexpanduserº '~' replaced by home dir.
Oºrealpath º resolves sym.links
Ex:
vars:
homepage: /usr/share/nginx/html/index.html
tasks:
- name: copy home page
    copy: ˃
src={{ homepage Oº| basenameº }}
dest={{ homepage }}
BºCustom filtersº
filter_plugins/surround_by_quotes.py
# From http://stackoverflow.com/a/15515929/742
def surround_by_quote(a_list):
return ['"%s"' % an_element for an_element in a_list]
class FilterModule(object):
def filters(self):
return {'surround_by_quote': surround_by_quote}
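Usage sketch (assuming the playbook lives next to the filter_plugins/ dir,
where Ansible picks up custom filters automatically):
  - debug:
      msg: "{{ ['a', 'b', 'c'] | surround_by_quote }}"   # → ['"a"', '"b"', '"c"']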
notify vs register
@[https://stackoverflow.com/questions/33931610/ansible-handler-notify-vs-register]
some tasks ... | some tasks ...
ºnotify:ºnginx_restart | ºregister:ºnginx_restart
|
# our handler | # do this after nginx_restart changes
- name: nginx_restart | ºwhen:ºnginx_restart|changed
^^^^^^^^^^^^^
- only fired when
tasks report changes
- only visible in playbook       ← With register the task is displayed as
  output if actually executed.     'skipped' when the 'when' condition is false.
- can be called from any
role.
- (by default) executed at
the end of the playbook.
  RºThis can be dangerous:º if the playbook
  fails midway, the handler is NOT
  notified. A second run can then skip
  the handler, since the task may no
  longer report a change. In practice it is
  RºNOT idempotentº (unless
  --force-handlers is set).
- To fire at specific point flush
all handlers by defining a task like:
- meta: flush_handlers
- called only once no matter how many
times it was notified.
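Minimal sketch of both approaches side by side (module arguments are illustrative):
| # alt.1: handler                          | # alt.2: register + when
| tasks:                                    | tasks:
|   - copy: src=a.conf dest=/etc/a.conf     |   - copy: src=a.conf dest=/etc/a.conf
|     notify: nginx_restart                 |     register: copy_result
| handlers:                                 |   - service: name=nginx state=restarted
|   - name: nginx_restart                   |     when: copy_result|changed
|     service: name=nginx state=restarted   |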
Handling secrets
Bºansible-vaultº: En/de-crypts values/data structures/files
@[https://github.com/tldr-pages/tldr/blob/master/pages/common/ansible-vault.md]
@[https://docs.ansible.com/ansible/latest/user_guide/vault.html#id17]
$º$ ansible-vault create $vault_file º ← Create new encrypted vault file with
a prompt for a password.
$º$ ansible-vault create \ º ← Create new encrypted vault file
$º --vault-password-file=$pass_file \ º using a vault key file to encrypt it
$º $vault_file º
$º$ ansible-vault encrypt \ º ← Encrypt existing file using optional
$º --vault-password-file=$pass_file \ º password file
$º $vault_file º
$º$ ansible-vault encrypt_string º ← Encrypt string using Ansible's encrypted
string format, interactively
$º$ ansible-vault view \ º ← View encrypted file, using pass.file
$º --vault-password-file={{password_file}} \º to decrypt
$º $vault_file º
$º$ ansible-vault rekey \ º ← Re-key already encrypted vault file
$º --vault-password-file=$old_password_file º with new password file
$º --new-vault-password-file=$new_pass_file º
$º $vault_file º
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_vault.html]
- Ansible vaults use symmetric-cipher encryption
INPUT          ENCRYPTING    OUTPUT               Usage
COMMAND (can be added to SCM) (Play.Execution)
────────────── ───────────── ──────────────────── ──────────────────
external pass─┐ ┌→ $ ansible─vault \ (alt1)→ protectedPB.yml ──┬→ $ ansible-playbook protectedPB.yml \ *1
│ │ create protectedPB.yml │ º--ask-vault-passº ← alt.A
secret needed─┤(alt1) │ º--vault-password-fileºpassFileº ← alt.B_1
at playbook └──┤ │ ^^^^^^^^
execution │ │ content/exec.STDOUT
│ │ should be a single-line-string
│ │ export ANSIBLE_VAULT_PASSWORD_FILE=... ← alt.B_2
(alt.2) │
│ │
│ │
└→ $ ansible-vault \ (alt2)→ yml to be embeded ──┘
encrypt_string into existing playbook
Ex:
→ mySecretToEncrypt
→ bla bla blah(Ctrl+D)→ !vault ← C⅋P to a yml file:
→ $ANSIBLE_VAULT;1.1;AES256 - vars:
→ 66386439653236336462... - secret01: !vault |
→ 64316265363035303763... $ANSIBLE_VAULT;1.1;AES256
→ ... 66386439653236336462...
*1: RºWARN:º Currently requires all files to be encrypted with same password
Ex: (yum/apt install) apache@localhost
┌ ---
│ # file: ansible.yml
│ - hosts: localhost
│   connection: local
│   gather_facts: False
│
│   vars:
│     var_yum_prerequisites: [ 'httpd24', 'vim', 'tmux' ]
│     var_apt_prerequisites: [ 'apache-server', 'vim', 'tmux' ]
│
│   vars_files:
│     - /vars/vars_not_in_git.yml  ← add to .gitignore to avoid sharing sensitive data.
│                                    /vars/vars_not_in_git.yml will look like:
│                                    password: !vault |
│                                       $ANSIBLE_VAULT;1.1;AES256
│                                       ...
│
│   tasks:
│     - name: install yum pre-requisites
│       when: ansible_os_family == "RedHat"
│       become: true
│       yum:
│         name: "{{ var_yum_prerequisites }}"
│         state: present
│       notify:
│         - restart-apache2
│
│     - name: install apt pre-requisites
│       when: ansible_os_family == "Debian"
│       become: true
│       apt:
│         name: "{{ var_apt_prerequisites }}"
│         state: latest
│       notify:
│         - restart-apache2
│
│   handlers:
│     - name: restart-apache2
└       service: name=httpd state=restarted
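Run it like this (sketch; the vars file above is vault-encrypted, so a vault password is needed):
$º$ ansible-playbook ansible.yml --ask-vault-pass º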
Ex: Installing nginx
┌ web-tls.yml
│ - name: wait in control host for ssh server to be running
│ local_action: wait_for port=22 host="{{ inventory_hostname }}"
│ search_regex=OpenSSH
│
│ - name: Configure nginx
│ ºhosts:º webservers
│ become: True
│ ºvars:º
│ Oºkey_fileº: /etc/nginx/ssl/nginx.key
│ Gºcert_fileº: /etc/nginx/ssl/nginx.crt
│ Bºconf_fileº: /etc/nginx/sites-available/default
│ server_name: localhost
│ ºtasks:º
│ - name: install nginx
│ ºaptº: ºnameº=nginx ºupdate_cacheº=yes
│
│ - name: create directories for ssl certificates
│ ºfileº: ºpathº=/etc/nginx/ssl ºstateº=directory
│
│ - name: copy TLS key
│ ºcopyº: ºsrcº=files/nginx.key ºdestº={{ Oºkey_fileº }} owner=root ºmodeº=0600
│ ºnotifyº: restart nginx
│
│ - name: copy TLS certificate
│ ºcopyº: ºsrcº=files/nginx.crt ºdestº={{ Gºcert_fileº }}
│ ºnotifyº: restart nginx
│
│ - name: copy config file
│     ºtemplateº: ºsrcº=files/nginx.confº.j2º ºdestº={{ Bºconf_fileº }}
│
│ - name: enable configuration
│ # set attributes of file, symlink or directory
│ ºfileº: ºdestº=/etc/nginx/sites-enabled/default ºsrcº={{ Bºconf_fileº }} state=link
│ - name: copy index.html
│ # template → new file → remote host
│ ºtemplateº: ºsrcº=templates/index.html.j2 ºdestº=/usr/share/nginx/html/index.html
│ mode=0644
│
│ - name: show a debug message
│ debug: "msg='Example debug message: conf_file {{ Bºconf_fileº }} included!'"
│
│ - name: Example to register new ansible variable
│ command: whoami
│ register: login
│ # (first debug helps to know who to write the second debug)
│ - debug: var=login
│ - debug: msg="Logged in as user {{ login.stdout }}"
│
│ - name: Example to ºignore errorsº
│ command: /opt/myprog
│ register: result
│ ignore_errors: ºTrueº
│ - debug: var=result
│
│ ºhandlers:º
│ - name: restart nginx
└ ºserviceº: ºnameº=nginx ºstateº=restarted
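Run it like this (sketch; the inventory file name 'hosts' is illustrative):
$º$ ansible-playbook -i hosts web-tls.yml º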
Insanely complete Ansible playbook
@[https://gist.github.com/marktheunissen/2979474]
--- ← YAML documents must begin with doc.separator "---"
####
#### descriptive comment at the top of my playbooks.
####
#
# Overview: Playbook to bootstrap a new host for configuration management.
# Applies to: production
# Description:
# Ensures that a host is configured for management with Ansible.
###########
#
# Note:
# RºYAML, like Python, cares about whitespaceº:BºIndent consistentlyº .
# Be aware! Unlike Python, YAML refuses to allow the tab character for
# indentation, so always use spaces.
#
# Two-space indents feel comfortable to me, but do whatever you like.
# vim:ff=unix ts=2 sw=2 ai expandtab
#
# If you're new to YAML, keep in mind that YAML documents, like XML
# documents, represent a tree-like structure of nodes and text. More
# familiar with JSON? Think of YAML as a strict and more flexible JSON
# with fewer significant characters (e.g., :, "", {}, [])
#
# The curious may read more about YAML at:
# http://www.yaml.org/spec/1.2/spec.html
#
###
# Notice the minus on the line below -- this starts the playbook's record
# in the YAML document. Only one playbook is allowed per YAML file. Indent
# the body of the playbook.
-
hosts: all
###########
# Playbook attribute: hosts
# Required: yes
# Description:
# The name of a host or group of hosts that this playbook should apply to.
#
## Example values:
# hosts: all -- applies to all hosts
# hosts: hostname -- apply ONLY to the host 'hostname'
# hosts: groupname -- apply to all hosts in groupname
# hosts: group1,group2 -- apply to hosts in group1 ⅋ group2
# hosts: group1,host1 -- mix and match hosts
# hosts: *.mars.nasa.gov wildcard matches work as expected
#
## Using a variable value for 'hosts'
#
# You can, in fact, set hosts to a variable, for example:
#
# hosts: $groups -- apply to all hosts specified in the variable $groups
#
# This is handy for testing playbooks, running the same playbook against a
# staging environment before running it against production, occasional
# maintenance tasks, and other cases where you want to run the playbook
# against just a few systems rather than a whole group.
#
# If you set hosts as shown above, then you can specify which hosts to
# apply the playbook to on each run as so:
#
# ansible-playbook playbook.yml --extra-vars="groups=staging"
#
# Use --extra-vars to set $groups to any combination of groups, hostnames,
# or wildcards just like the examples in the previous section.
#
sudo: True
###########
# Playbook attribute: sudo
# Default: False
# Required: no
# Description:
# If True, always use sudo to run this playbook, just like passing the
# --sudo (or -s) flag to ansible or ansible-playbook.
user: remoteuser
###########
# Playbook attribute: user
# Default: "root"
# Required: no
# Description
# Remote user to execute the playbook as
###########
# Playbook attribute: vars
# Default: none
# Required: no
# Description:
# Set configuration variables passed to templates ⅋ included playbooks
# and handlers. See below for examples.
vars:
color: brown
web:
memcache: 192.168.1.2
httpd: apache
# Tree-like structures work as expected, but be careful to surround
# the variable name with ${} when using.
#
# For this example, ${web.memcache} and ${web.apache} are both usable
# variables.
########
# The following works in Ansible 0.5 and later, and will set $config_path
# "/etc/ntpd.conf" as expected.
#
# In older versions, $config_path will be set to the string "/etc/$config"
#
config: ntpd.conf
config_path: /etc/$config
########
# Variables can be set conditionally. This is actually a tiny snippet
# of Python that will get filled in and evaluated during playbook execution.
# This expression should always evaluate to True or False.
#
# In this playbook, this will always evaluate to False, because 'color'
# is set to 'brown' above.
#
# When ansible interprets the following, it will first expand $color to
# 'brown' and then evaluate 'brown' == 'blue' as a Python expression.
is_color_blue: "'$color' == 'blue'"
#####
# Builtin Variables
#
# Everything that the 'setup' module provides can be used in the
# vars section. Ansible native, Facter, and Ohai facts can all be
# used.
#
# Run the setup module to see what else you can use:
# ansible -m setup -i /path/to/hosts.ini host1
main_vhost: ${ansible_fqdn}
public_ip: ${ansible_eth0.ipv4.address}
# vars_files is better suited for distro-specific settings, however...
is_ubuntu: "'${ansible_distribution}' == 'ubuntu'"
##########
# Playbook attribute: vars_files
# Required: no
# Description:
# Specifies a list of YAML files to load variables from.
#
# Always evaluated after the 'vars' section, no matter which section
# occurs first in the playbook. Examples are below.
#
# Example YAML for a file to be included by vars_files:
# ---
# monitored_by: phobos.mars.nasa.gov
# fish_sticks: "good with custard"
# # (END OF DOCUMENT)
#
# A 'vars' YAML file represents a list of variables. Don't use playbook
# YAML for a 'vars' file.
#
# Remove the indentation ⅋ comments of course, the '---' should be at
# the left margin in the variables file.
#
vars_files:
# Include a file from this absolute path
- /srv/ansible/vars/vars_file.yml
# Include a file from a path relative to this playbook
- vars/vars_file.yml
# By the way, variables set in 'vars' are available here.
- vars/$hostname.yml
# It's also possible to pass an array of files, in which case
# Ansible will loop over the array and include the first file that
# exists. If none exist, ansible-playbook will halt with an error.
#
# An excellent way to handle platform-specific differences.
- [ vars/$platform.yml, vars/default.yml ]
# Files in vars_files process in order, so later files can
# provide more specific configuration:
- [ vars/$host.yml ]
# Hey, but if you're doing host-specific variable files, you might
# consider setting the variable for a group in your hosts.ini and
# adding your host to that group. Just a thought.
##########
# Playbook attribute: vars_prompt
# Required: no
# Description:
# A list of variables that must be manually input each time this playbook
# runs. Used for sensitive data and also things like release numbers that
# vary on each deployment. Ansible always prompts for this value, even
# if it's passed in through the inventory or --extra-vars.
#
# The input won't be echoed back to the terminal. Ansible will always
# prompt for the variables in vars_prompt, even if they're passed in via
# --extra-vars or group variables.
#
# TODO: I think that the value is supposed to show as a prompt but this
# doesn't work in the latest devel
#
vars_prompt:
passphrase: "Please enter the passphrase for the SSL certificate"
# Not sensitive, but something that should vary on each playbook run.
release_version: "Please enter a release tag"
##########
# Playbook attribute: tasks
# Required: yes
# Description:
# A list of tasks to perform in this playbook.
tasks:
##########
# The simplest task
# Each task must have a name ⅋ action.
- name: Check that the server's alive
action: ping
##########
# Ansible modules do the work!
- name: Enforce permissions on /tmp/secret
action: file path=/tmp/secret mode=0600 owner=root group=root
#
# Format 'action' like above:
# modulename module_parameters
#
# Test your parameters using:
# ansible -m $module -a "$module_parameters"
#
# Documentation for the stock modules:
# http://ansible.github.com/modules.html
##########
# Use variables in the task!
#
# Variables expand in both name and action
- name: Paint the server $color
action: command echo $color
##########
# Trigger handlers when things change!
#
# Ansible detects when an action changes something. For example, the
# file permissions change, a file's content changed, a package was
# just installed (or removed), a user was created (or removed). When
# a change is detected, Ansible can optionally notify one or more
# Handlers. Handlers can take any action that a Task can. Most
# commonly they are used to restart a service when its configuration
# changes. See "Handlers" below for more about handlers.
#
# Handlers are called by their name, which is very human friendly.
# This will call the "Restart Apache" handler whenever 'copy' alters
# the remote httpd.conf.
- name: Update the Apache config
action: copy src=httpd.conf dest=/etc/httpd/httpd.conf
notify: Restart Apache
# Here's how to specify more than one handler
- name: Update our app's configuration
action: copy src=myapp.conf dest=/etc/myapp/production.conf
notify:
- Restart Apache
- Restart Redis
##########
# Include tasks from another file!
#
# Ansible can include a list of tasks from another file. The included file
# must represent a list of tasks, which is different than a playbook.
#
# Task list format:
# ---
# - name: create user
# action: user name=$user color=$color
#
# - name: add user to group
# action: user name=$user groups=$group append=true
# # (END OF DOCUMENT)
#
# A 'tasks' YAML file represents a list of tasks. Don't use playbook
# YAML for a 'tasks' file.
#
# Remove the indentation ⅋ comments of course, the '---' should be at
# the left margin in the variables file.
# In this example $user will be 'sklar'
# and $color will be 'red' inside new_user.yml
- include: tasks/new_user.yml user=sklar color=red
# In this example $user will be 'mosh'
# and $color will be 'mauve' inside new_user.yml
- include: tasks/new_user.yml user=mosh color=mauve
# Variables expand before the include is evaluated:
- include: tasks/new_user.yml user=chris color=$color
##########
# Run a task on each thing in a list!
#
# Ansible provides a simple loop facility. If 'with_items' is provided for
# a task, then the task will be run once for each item in the 'with_items'
# list. $item changes each time through the loop.
- name: Create a file named $item in /tmp
action: command touch /tmp/$item
with_items:
- tangerine
- lemon
##########
# Choose between files or templates!
#
# Sometimes you want to choose between local files depending on the
# value of the variable. first_available_file checks for each file
# and, if the file exists calls the action with $item={filename}.
#
# Mostly useful for 'template' and 'copy' actions. Only examines local
# files.
#
- name: Template a file
action: template src=$item dest=/etc/myapp/foo.conf
first_available_file:
# ansible_distribution will be "ubuntu", "debian", "rhel5", etc.
- templates/myapp/${ansible_distribution}.conf
# If we couldn't find a distribution-specific file, use default.conf:
- templates/myapp/default.conf
##########
# Conditionally execute tasks!
#
# Sometimes you only want to run an action under certain conditions.
# Ansible will evaluate 'only_if' as a Python expression and will only run the
# action when the expression evaluates to True.
#
# If you're trying to run a task only when a value changes,
# consider rewriting the task as a handler and using 'notify' (see below).
#
- name: "shutdown all ubuntu"
action: command /sbin/shutdown -t now
only_if: "$is_ubuntu"
- name: "shutdown the government"
action: command /sbin/shutdown -t now
only_if: "'$ansible_hostname' == 'the_government'"
##########
# Notify handlers when things change!
#
# Each task can optionally have one or more handlers that get called
# when the task changes something -- creates a user, updates a file,
# etc.
#
# Handlers have human-readable names and are defined in the 'handlers'
# section of a playbook. See below for the definitions of 'Restart nginx'
# and 'Restart application'
- name: update nginx config
action: file src=nginx.conf dest=/etc/nginx/nginx.conf
notify: Restart nginx
- name: roll out new code
action: git repo=git://codeserver/myapp.git dest=/srv/myapp version=HEAD branch=release
notify:
- Restart nginx
- Restart application
##########
# Run things as other users!
#
# Each task has an optional 'user' and 'sudo' flag to indicate which
# user a task should run as and whether or not to use 'sudo' to switch
# to that user.
- name: dump all postgres databases
action: pg_dumpall -w -f /tmp/backup.psql
user: postgres
sudo: False
##########
# Run things locally!
#
# Each task also has a 'connection' setting to control whether a local
# or remote connection is used. The only valid options now are 'local'
# or 'paramiko'. 'paramiko' is assumed by the command line tools.
#
# This can also be set at the top level of the playbook.
- name: create tempfile
action: dd if=/dev/urandom of=/tmp/random.txt count=100
connection: local
##########
# Playbook attribute: handlers
# Required: no
# Description:
# Handlers are tasks that run when another task has changed something.
# See above for examples. The format is exactly the same as for tasks.
# Note that if multiple tasks notify the same handler in a playbook run
# that handler will only run once.
#
# Handlers are referred to by name. They will be run in the order declared
# in the playbook. For example: if a task were to notify the
# handlers in reverse order like so:
#
# - task: touch a file
# action: file name=/tmp/lock.txt
# notify:
# - Restart application
# - Restart nginx
#
# The "Restart nginx" handler will still run before the "Restart
# application" handler because it is declared first in this playbook.
handlers:
- name: Restart nginx
action: service name=nginx state=restarted
# Any module can be used for the handler action
- name: Restart application
action: command /srv/myapp/restart.sh
# It's also possible to include handlers from another file. Structure is
# the same as a tasks file, see the tasks section above for an example.
- include: handlers/site.yml
Troubleshooting
Problem ex:
'django_manage' module always returns 'changed: False' for
some "external" database commands.
(ºnonºidempotent task)
Solution:
Oº'changed_when'/'failed_when'º provides hints to Ansible at play time:
- name: init-database
django_manage:
command: createdb --noinput --nodata
app_path: "{{ proj_path }}"
virtualenv: "{{ venv_path }}"
    Oºfailed_whenº: False  # ← avoid stopping execution
register:Gºresultº
Oºchanged_when:º Gºresult.outº is defined and '"Creating tables" in Gºresult.outº'
- debug: var=result
- fail:
Dynamic Inventory
@[https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html]
(EC2, OpenStack,...)
Fact Caching
@[https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#fact-caching]
- To benefit from cached facts you will set gather_facts to False in most plays.
- Ansible ships with two persistent cache plugins: redis and jsonfile.
- To configure fact caching using redis, enable it in ansible.cfg as follows:
[defaults]
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
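  Alternative sketch using the jsonfile cache plugin (the cache path is illustrative):
    [defaults]
    gathering = smart
    fact_caching = jsonfile
    fact_caching_connection = /tmp/ansible_fact_cache
    fact_caching_timeout = 86400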
AWX GUI
@[https://www.howtoforge.com/ansible-awx-guide-basic-usage-and-configuration/]
- AWX is an open-source web application that provides a user interface, REST API,
  and task engine for Ansible. It is the open-source upstream of Ansible Tower.
  AWX lets you manage Ansible playbooks and inventories, and schedule jobs to run
  from the web interface.
- How to Run and Schedule Ansible Playbook Using AWX GUI
@[https://www.linuxtechi.com/run-schedule-ansible-playbook-awx-gui/]
Puppet (Ansible Alternative)
Puppet 101
REF:
@[https://blogs.sequoiainc.com/puppet-101-part-1/]
@[https://blogs.sequoiainc.com/puppet-101-part-2/]
master/agent architecture:
- PuppetºMasterº: - server holding all the configuration.
- PuppetºAgent º: - Installed on each "target" server, ºAgent Certificateº: ← - signed Master's CA.
runs @ regular intervals: ───────────────── - Used for secure network
- Query desired state and if needed -ºnode-nameº communic between Master←→Agent
(configuration drift) update state. ^
|
Ex. web01.myDomain.com (wildcards allowed)
Assigning/managing node names Rºcan be trickyº
in the cloud since DNS names change frequently.
┌─────────────────────────────────────────────────────────┬────────────────────────────────────────────────────────────┐
│OºRESOURCEº │ BºCLASSESº │
│(Concrete resource that must be present in server) │ - A Class is a group of Resources that │
│ ┌─── user/file/package/...that must be present in │ belong together conceptually, │
│ │ server (or custom resource) │ fulfilling a given instalation- │
│ v │ Ex: │ -requirement role. │
│OºTYPEº{ TITLE ← must unique │Oºuserº{ 'jbar': │ - variables can be defined to customize │
│ ATTRIBUTE, per Node │ ensure =˃ present, │ target environments. │
│ ATTRIBUTE, │ home =˃ '/home/jbar',│ (test,acceptance,pre,pro,..) │
│ ATTRIBUTE, │ shell =˃ '/bin/bash', │ - inheritance is allowed to save │
│ ... │ } │ duplicated definition │
│ } ^ ^ ^ │ │
│ | | | │ │ Ex: │
│ key =˃ value key value │ class BºCLASS_NAMEº { │ class Bºusersº { │
│ │ RESOURCE │ user { 'tomcat': │
│$ puppet resource Oºuserº │ RESOURCE │ ensure =˃ present, │
│ ^^^^^^^^^^^^^^^ │ } │ home =˃ '/home/jbauer',│
│ Returns all users │ │ shell =˃ '/bin/bash', │
│ (not just those configured/installed by Puppet) │ │ } │
│ (same behaviour applies to any other resource) │ │ user { 'nginx': │
│ │ │ ... │
│ │ │ } │
│ │ │ ... │
│ │ │ } │
│ │ │ include Bºusersº │
│ │ │ ^^^^^^^^^^^^^^^^ │
│ │ │ ºDon't forgetº. Otherwise class is │
│ │ ignored │
├─────────────────────────────────────────────────────────┼────────────────────────────────────────────────────────────┤
│QºNODE ("Server")º │ │
│- bundle of: [ class1, class2, .. , resource1, ...] │ QºNODEº 1 ←─────────→ NBºClassº │
│ │ 1 1 │
│ must match Agent-Certificate.name│ \ / │
│ SYNTAX │Ex: ┌───────┴────────┐ │ \ / │
│node Q"NAME" { │node Qº"web01.myDomain.com"º {│ N N │
│ include BºCLASS01º │ │ OºResourceº │
│ include BºCLASS02º │ include Bºtomcatº │ │
│ include Bº...º │ include Bºusersº │ │
│ include OºRESOURCE01º │ │ YºMANIFESTº: 0+QºNODEsº, 0+BºClassesº, 0+OºResourcesº │
│ include OºRESOURCE02º │ Oºfileº{ '/etc/app.conf' │ │
│ include Oº...º │ ... │ GºMODULEº: 1+Manifests, 0+supporting artifacts │
│} │ } │ ^ │
│ │} │ ($PUPPET/environments/$ENV/"module"/manifest/init.pp ) │
│ │ │
│The special name Qº"default"º will be applied to any │ ºSITE MANIFESTº: Separated Manifests forming the catalog │
│server (used for example to apply common security, │ (Read by the Puppet Agent) │
│packages,...) │ │
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────┬─────────────────────────────────────────────────────────────┐
│YºMANIFESTSº: │GºMODULESº │
│*.pp file defining OºResourcesº, BºClassesº and QºNodesº │- reusable bundle of [ 1+ Manifests , "support file list" ] │
│ │- installed on Puppet Master │
│ ┌──────────────────────────────────────────────── │ (can also be installed from a central repository using │
│ │example.pp Manifest: │ $ puppet module ... ) │
│ │ // variable declarations, logic constructs, ... │- referenced by name in other Modules or in Manifests. │
│ │ │- Layout of │
│ │ │ ${PUPPET}/environments/${ENVIRONMENT}/modules/ºmodule01º │
│ │Oºuser{ 'jbauer':º │ name must │
│ │ ensure =˃ present, │ ºmodule01º ←───────────────── match │
│ │ home =˃ '/home/jbauer', │ ├─ manifests vvvvvvvv │
│ │ shell =˃ '/bin/bash', │ │ ├ºinit.ppº ←········ classºmodule01º{ │ │
│ │ } │ │ │ ... │ │
│ │ │ │ │ } │ │
│ │Bºclass 'security'º{ │ │ │ │ │
│ │ ... │ │ ├ class01.pp (opt)←· class class01 { │ │
│ │ } │ │ │ ... │
│ │ │ │ │ } │
│ │ include security │ │ └ ... ^ │
│ │ │ │ module01@init.pp can be used as │
│ │Bºclass 'tomcat'º{ │ ├─ files (opt) include module01 │
│ │ } │ ├─ templates (opt) class01@class01.pp can be used as │
│ │ │ ├─ lib (opt) include module01::class01 │
│ │Qºnodeº'web01.example.com' { │ ├─ facts.d (opt) Retrieve storage,CPU,...before ←─┐│
│ │ includeBºtomcatº │ │ exec. the catalog ││
│ │ ... │ │@[https://puppet.com/docs/puppet/latest/core_facts.html] ││
│ │ │ ├─ examples (opt) ││
│ │ } │ └─ spec (opt) ││
└─────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────┼┘
┌───────────────────────────────────────┘
┌───────────────────────────────────────────────────────────────────────────┐ Example custom "facter":
│YºSITE (MAIN) MANIFESTº │ $ cat ./modules/basic/facts/lib/facter/common.rb
│- area of Puppet configurationºseparated from Modulesº. │ → Facter.add("hostnamePart01") do
│- By default, all Manifests contained in │ → setcode do
│ º${PUPPET}/environments/${ENVIRONMENT}/manifestsº │ → h = Facter.value(:hostname)
│ (vs ${PUPPET}/environments/${ENVIRONMENT}/modules/mod1... │ → h_a = h.split("-")[0].tr("0-9", "").chomp
│ ${PUPPET}/environments/${ENVIRONMENT}/modules/mod2...) │ → end
│- Its content is concatenated and executed as the Site Manifest. │ → end
│-ºstarting pointºfor calculating the ºPUPPET catalogº , │ → ...
│ i.e., the "sum total of applicable configuration" for a node. │ →
│- This is the information queried by the Puppet Agent instaled on each │
│ "satellite" server. │
│ - any standalone Resource or Class declarations is automatically applied │
│ - matching Nodes (Node_name vs Agent Certificate Name) are also applied │
└───────────────────────────────────────────────────────────────────────────┘
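Quick way to try a Resource locally, without a Master (sketch; the user name is illustrative):
$º$ puppet apply -e "user { 'jbauer': ensure =˃ present, shell =˃ '/bin/bash' }" º
$º$ puppet agent -t º ← on an Agent node: force an immediate catalog run against the Master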
ºADVANCED TOPICSº (TODO)
- controlling the Resources order execution
- transient cloud servers
- auto-signing and node name wildcards
- ...
Puppet Bolt
"Agentless" version of Puppet follwing Ansible approach.
It can be installed on a local workstation and connects
directly to remote targets with SSH or WinRM, so you are
not required to install any agent software.
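Minimal sketch of an ad-hoc Bolt run (flag names may vary slightly across Bolt versions):
$º$ bolt command run 'systemctl status nginx' --targets web01.example.com --user root º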
Vagrant (VMs as code)
Bº##################º
Bº# External Links #º
Bº##################º
- @[https://www.vagrantup.com/docs/index.html]
- @[https://www.vagrantup.com/docs/cli/]                          CLI Reference
- @[https://www.vagrantup.com/intro/getting-started/index.html]
- @[https://www.vagrantup.com/docs/providers/]                    Providers list
- @[https://app.vagrantup.com/boxes/search]                       ºBoxes Searchº
- @[https://www.vagrantup.com/docs/networking/]                   Networking

Bº#################º
Bº# Vagrant Boxes #º
Bº#################º
- Pre-built VMs avoiding a slow and tedious install process.
- Can be used as base image to (quickly) clone an existing virtual machine.
- Specifying the box to use for your Vagrant environment is always the first
  step after creating a new Vagrantfile.

Bº#################º
Bº# Vagrant Share #º
Bº#################º
@[https://www.vagrantup.com/intro/getting-started/share.html]
@[https://www.vagrantup.com/docs/share]
• $º$ vagrant share º ← Quick how-to: share a Vagrant environment with anyone in the world.
• Three primary modes or features (not mutually exclusive, they can be combined):
  ·ºshareable URLº pointing to the Vagrant environment.
    BºThe URL "consumer" does not need Vagrant installed, so it can be sharedº
    Bºwith anyone. Useful for testing webhooks, demos with clients, ...      º
  ·ºinstant SSH accessº
    $º$ vagrant connect --ssh º ← (local/remote) client
                                  (pair programming, debugging ops problems, etc.)
  · General sharing by exposing a TCP port, then:
    $º$ vagrant connect º       ← (local/remote) client

Bº################º
Bº# Command List #º
Bº################º
$º$ vagrant "COMMAND" -h  º
$º$ vagrant list-commands º ← outputs all available subcommands, even non-primary ones

# Most frequently used commands                      # Other commands
box            manages boxes: installation,          | cap           checks and executes capability
               removal, etc.                         | docker-exec   attach to an already-running docker container
destroy        stops and deletes all traces of       | docker-logs   outputs the logs from the Docker container
               the vagrant machine                   | docker-run    run a one-off command in the context of a container
global-status  outputs status of Vagrant             | list-commands outputs all available Vagrant subcommands,
               environments for this user            |               even non-primary ones
halt           stops the vagrant machine             | provider      show provider for this environment
help           shows the help for a subcommand       | rsync         syncs rsync synced folders to remote machine
init           initializes new environment           | rsync-auto    syncs rsync synced folders automatically
               (new Vagrantfile)                     |               when files change
login          log in to HashiCorp's Vagrant Cloud
package        packages a running vagrant environment into a box
plugin         manages plugins: install, uninstall, update, etc.
port           displays information about guest port mappings
powershell     connects to machine via powershell remoting
provision      provisions the vagrant machine
push           deploys environment code → (configured) destination
rdp            connects to machine via RDP
reload         restart Vagrant VM, load new Vagrantfile config
resume         resume a suspended vagrant machine
snapshot       manages snapshots: saving, restoring, etc.
ssh            connects to machine via SSH
ssh-config     outputs OpenSSH connection config.
status         outputs status of the vagrant machine
suspend        suspends the machine
up             starts and provisions the vagrant environment
validate       validates the Vagrantfile
version        prints current and latest Vagrant version

Bº################º
Bº# Quick How-To #º
Bº################º
$º$ mkdir vagrant_getting_started º
$º$ cd vagrant_getting_started    º
$º$ vagrant init                  º ← creates new Vagrantfile.
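Typical first session after 'vagrant init' (sketch; the box name is illustrative):
$º$ vagrant init ubuntu/xenial64 º ← or edit Vagrantfile: config.vm.box = "ubuntu/xenial64"
$º$ vagrant up                   º ← download box (first run) and boot the VM
$º$ vagrant ssh                  º ← open a shell inside the VM
$º$ vagrant halt                 º ← stop it ('vagrant destroy' removes it completely)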
Bº####################################º
Bº# Advanced Vagrantfile Example     #º
Bº# 3 VM's Cluster using Virtual Box #º
Bº####################################º
┌──────────────────────────────────────────────────────────────────────────
│ # -º- mode: ruby -º-
│ # vi: set ft=ruby :
│
│ VAGRANTFILE_API_VERSION = "2"
│
│ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
│   # Use the same key for each machine
│   config.ssh.insert_key = false
│
│   config.vm.define "vagrant1" do |vagrant1|
│     vagrant1.vm.box = "ubuntu/xenial64"
│     vagrant1.vm.provider :virtualbox do |v|
│       v.customize ["modifyvm", :id, "--memory", 1024]
│     end
│     vagrant1.vm.network "forwarded_port", guest: 80 , host: 8080
│     vagrant1.vm.network "forwarded_port", guest: 443, host: 8443
│     vagrant1.vm.network "private_network", ip: "192.168.0.1"
│     # Provision through custom bootstrap.sh script
│     config.vm.provision :shell, path: "bootstrap.sh"
│   end
│   config.vm.define "vagrant2" do |vagrant2|
│     vagrant2.vm.box = "ubuntu/xenial64"
│     vagrant2.vm.provider :virtualbox do |v|
│       v.customize ["modifyvm", :id, "--memory", 2048]
│     end
│     vagrant2.vm.network "forwarded_port", guest: 80 , host: 8081
│     vagrant2.vm.network "forwarded_port", guest: 443, host: 8444
│     vagrant2.vm.network "private_network", ip: "192.168.0.2"
│   end
│   config.vm.define "vagrant3" do |vagrant3|
│     vagrant3.vm.box = "ubuntu/xenial64"
│     vagrant3.vm.provider :virtualbox do |v|
│       v.customize ["modifyvm", :id, "--memory", 2048]
│     end
│     vagrant3.vm.network "forwarded_port", guest: 80 , host: 8082
│     vagrant3.vm.network "forwarded_port", guest: 443, host: 8445
│     vagrant3.vm.network "private_network", ip: "192.168.0.3"
│   end
│ end
└──────────────────────────────────────────────────────────────────────────
Yaml References
@[http://docs.ansible.com/ansible/YAMLSyntax.html]
YAML JSON
--- {
key1: val1 "key1": "val1",
key2: "key2": [
- "thing1" "thing1",
- "thing2" "thing2"
# I am a comment ]
}
Bº Anchors, references and extensions
---
key1:º⅋anchorº ← Defines {
K1: "One" the anchor "key1": {
K2: "Two" "K1": "One",
"K2": "Two"
key2:º*anchorº ← References/ },
uses the anch. "key2": {
key3: "K1": "One",
º˂˂: *anchorº ← Extends anch. "K2": "Two"
K2: "I Changed" }
K3: "Three" "key3": {
"K1": "One",
"K2": "I Changed",
"K3": "Three"
}
}
RºWARNº: Many NodeJS parsers break on the '˂˂' extend (merge) key.
BºExtend Inlineº
- take only SOME sub-keys from key1 to inject into key2
--- {
key1: "key1": {
˂˂:º⅋anchorº ← Inject into "K1": "One",
K1: "One" key1 and save "K2": "Two"
K2: "Two" as anchor },
"bar": {
bar: "K1": "One",
º˂˂: *anchorº "K3": "Three"
K3: "Three" }
}
BºBash Aliasº
(To be added to .bashrc | .profile | ...)
alias yaml2js="python -c 'import sys, yaml, json; \
json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=4)'"
$º$ cat in.yaml | yaml2js º(json output)
RºWARN:º - Unfortunately there is no way to override or
extend lists to append new elements to existing ones;
only maps/dictionaries work with the º˂˂ operatorº:
º˂˂º "inserts" the values of the referenced map into
the current one being defined.
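Ex (sketch): an anchored list can only be reused verbatim, never appended to:
  base_pkgs: º⅋pkgsº     ← anchor on a list
    - vim
    - tmux
  web_pkgs: º*pkgsº      ← exact copy: ["vim", "tmux"]
                           (no way to "extend" it into ["vim", "tmux", "nginx"])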
Nexus Repository Management
@[https://blog.sonatype.com/using-nexus-3-as-your-repository-part-1-maven-artifacts]
@[https://blog.sonatype.com/using-nexus-3-as-your-repository-part-2-npm-packages]
@[https://blog.sonatype.com/using-nexus-3-as-your-repository-part-3-docker-images]
• See also: Artifactory by JFrog
Reproducible Builds
@[https://reproducible-builds.org/]
- Reproducible builds are a set of software development practices
that create an independently-verifiable path from source to binary
code.
Dockerfile
Container Standards
OCI Spec.
@[https://www.opencontainers.org/faq]
OCI mission: promote a set of common, minimal, open standards
and specifications around container technology, focused on creating
a formal specification for container image formats and runtimes.
- values: (mostly adopted from the appc founding values)
- Composable: All tools for downloading, installing, and running containers should be well integrated, but independent and composable.
- Portable: runtime standard should be usable across different hardware,
operating systems, and cloud environments.
- Secure: Isolation should be pluggable, and the cryptographic primitives
for strong trust, image auditing and application identity should be solid.
- Decentralized: Discovery of container images should be simple and
facilitate a federated namespace and distributed retrieval.
- Open: format and runtime should be well-specified and developed by
a community.
- Code leads spec, rather than vice-versa.
    - Minimalist: do a few things well, be minimal and stable.
    - Backward compatible.
- Docker donated both a draft specification and a runtime and code
associated with a reference implementation of that specification:
BºIt includes entire contents of the libcontainer project, includingº
Bº"nsinit" and all modifications needed to make it run independentlyº
Bºof Docker. . This codebase, called runc, can be found at º
Bºhttps://github.com/opencontainers/runc º
- the responsibilities of the Technical Oversight Board (TOB)
  can be followed at https://github.com/opencontainers/tob:
- Serving as a source of appeal if the project technical leadership
is not fulfilling its duties or is operating in a manner that is
clearly biased by the commercial concerns of the technical
leadership’s employers.
- Reviewing the tests established by the technical leadership for
adherence to specification
- Reviewing any policies or procedures established by the technical leadership.
- The OCI seeks rough consensus and running code first.
What is the OCI’s perspective on the difference between a standard and a specification?
- v1.0.0 of the specifications was released on 2017-07-19.
- Adopted by:
- Cloud Foundry community by embedding runc via Garden
- Kubernetes is incubating a new Container Runtime Interface (CRI)
    that adopts OCI components via implementations like CRI-O and rktlet.
- rkt community is adopting OCI technology already and is planning
to leverage the reference OCI container runtime runc in 2017.
- Apache Mesos.
- AWS announced OCI image format in its Amazon EC2 Container Registry (ECR).
- Will the runtime and image format specs support multiple platforms?
- How does OCI integrate with CNCF?
A container runtime is just one component of the cloud native
technical architecture but the container runtime itself is out of
initial scope of CNCF (as a CNCF project), see the charter Schedule A
for more information.
Cont.Ecosystem
• Image creation:
  Alt 1) Dockerfile → docker build
  Alt 2) buildah: Similar to docker build, it also allows adding image layers
         manually from the host command line, removing the need for a Dockerfile.
         (RedHat rootless 'podman' is based on buildah)
         (see the buildah sketch after this list)
  Alt 3) Java source code → jib → OCI image
  Alt 4) Google Kaniko
  ...
• Runtimes: @[#runtimes_summary]
  Alt 1) runC       (OOSS, Go-based, maintained by Docker and others)
  Alt 2) crun       (OOSS, C-based, faster than runC)
  Alt 3) containerd (OOSS, maintained by IBM and others)
  Alt 4) CRI-O      (very lightweight alternative for k8s)
  Alt 5) rktlet
• Registries and Repositories
  repository: "storage" for OCI binary images.
  registry  : index of 1+ repositories (usually its own repo)
• Container Orchestration == Kubernetes
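Buildah sketch (package and image names are illustrative):
$º$ ctr=$(buildah from fedora)                           º ← new working container from a base image
$º$ buildah run $ctr -- dnf install -y nginx             º ← add a layer "by hand", no Dockerfile
$º$ buildah config --cmd "nginx -g 'daemon off;'" $ctr   º
$º$ buildah commit $ctr my-nginx-image                   º ← produce an OCI image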
runc
@[https://github.com/opencontainers/runc]
- Reference runtime and CLI tool, donated by Docker, for spawning and running
  containers according to the OCI specification:
  @[https://www.opencontainers.org/]
- Based on Go.
- BºIt reads a runtime specification and configures the Linux kernel.º
  Eventually it creates and starts container processes.
  RºGo might not have been the best programming language for this task, º
  Rºsince it does not have good support for the fork/exec model:        º
  Rº- Go's threading model expects programs to fork a second process    º
  Rº  and then to exec immediately.                                     º
  Rº- However, an OCI container runtime is expected to fork off the     º
  Rº  first process in the container. It may then do some additional    º
  Rº  configuration, including potentially executing hook programs,     º
  Rº  before exec-ing the container process. The runc developers have   º
  Rº  added a lot of clever hacks to make this work but are still       º
  Rº  constrained by Go's limitations.                                  º
  Bºcrun, C based, solved those problems.º
- Reference implementation of the OCI runtime specification.
crun
@[https://github.com/containers/crun/issues]
@[https://www.redhat.com/sysadmin/introduction-crun]
- Fast, low-memory-footprint container runtime by Giuseppe Scrivano (RedHat).
- C based: unlike Go, C is not multi-threaded by default, and was built and
  designed around the fork/exec model, so it handles the fork/exec OCI runtime
  requirements in a much cleaner fashion than 'runc'. C also interacts very well
  with the Linux kernel. It is also lightweight, with much smaller size and memory
  use than runc (Go): compiled with -Os, the 'crun' binary is ~300k (vs ~15M 'runc'),
  Bºor 50 times smallerº, and up to Bºtwice as fastº.
  ""We have experimented running a container with just a Bº250K limit setº.""
- cgroups v2 ("==" upstream kernel, Fedora 31+) compliant from scratch, while
  runc -Docker/K8s/...- Rºgot "stuck" on cgroups v1º
  (experimental v2 support in 'runc' as of v1.0.0-rc91, thanks to Kolyshkin
  and Akihiro Suda).
- Feature-compatible with "runc", plus extra experimental features.
- Given the same Podman CLI/k8s YAML we get the same containers "almost always",
  since Bºthe OCI runtime's job is to instrument the kernel toº
  Bºcontrol how PID 1 of the container runs.º
  BºIt is up to higher-level tools like conmon or the container engine toº
  Bºmonitor the container.º
- Sometimes users want to limit the number of PIDs in containers to just one.
  With 'runc' the PIDs limit can not be set too low, because the Go runtime
  spawns several threads. 'crun', written in C, does not have that problem. Ex:
  $º$ RUNC="/usr/bin/runc" ; CRUN="/usr/bin/crun"                          º
  $º$ podman --runtime $RUNC run --rm --pids-limit 5 fedora echo it works  º
    → RºError: container create failed (no logs from conmon): EOFº
  $º$ podman --runtime $CRUN run --rm --pids-limit 1 fedora echo it works  º
    → Bºit worksº
- OCI hooks supported, allowing the execution of specific programs at different
  stages of the container's lifecycle.
- runc/crun comparison:
  $º$ CMD_RUNC="for i in {1..100}; do runc run foo ˂ /dev/null; done" º
  $º$ CMD_CRUN="for i in {1..100}; do crun run foo ˂ /dev/null; done" º
  $º$ time -v sh -c "$CMD_RUNC" º
  → User time (seconds): 2.16
  → System time (seconds): 4.60
  → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:06.89
  → Maximum resident set size (kbytes): 15120
  → ...
  $º$ time -v sh -c "$CMD_CRUN" º
  → ...
  → User time (seconds): 0.53
  → System time (seconds): 1.87
  → Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.86
  → Maximum resident set size (kbytes): 3752
  → ...
- Experimental features:
  - Redirecting hooks STDOUT/STDERR via annotations
    (controlling stdout and stderr of OCI hooks).
    Debugging hooks can be quite tricky because, by default, it is not possible
    to get the hook's stdout and stderr:
    - Getting the error or debug messages may require some yoga.
    - Common trick: log to syslog to access hook logs via journalctl.
      (Not always possible)
    - With 'crun' + 'Podman':
      $º$ podman run --annotation run.oci.hooks.stdout=/tmp/hook.stdout ...º
      executed hooks will write: STDOUT → /tmp/hook.stdout
                                 STDERR → /tmp/hook.stderr
      Bº(proposed for the OCI runtime spec)º
  - crun supports running older versions of systemd on cgroup v2 using
    --annotation run.oci.systemd.force_cgroup_v1.
    This forces a cgroup v1 mount inside the container for the name=systemd
    hierarchy, which is enough for systemd to work. Useful to run older container
    images, such as RHEL7, on a cgroup v2-enabled system.
    Ex:
    $º$ podman run --annotation run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup \ º
    $º    centos:7 /usr/lib/systemd/systemd                                      º
- crun as a library:
  "We are considering integrating it with Bºconmon, the container monitor used byº
  BºPodman and CRI-O, rather than executing an OCI runtime."º
- 'crun' extensibility:
  """... easy to use all the kernel features, including syscalls not enabled in Go."""
  - Ex: the openat2 syscall protects against link path attacks
    (already supported by crun).
- 'crun' is more portable: Ex: Risc-V.
Container Network Interface (CNI)
@[https://github.com/containernetworking/cni]
- Specification and libraries for writing plugins to configure network interfaces
  in Linux containers, along with a number of supported plugins.
- CNI concerns itself only with the network connectivity of containers and with
  removing the allocated resources when containers are deleted.
- CNI Spec:
  - libcni: a CNI runtime implementation
  - skel  : a reference plugin implementation
    (github.com/containernetworking/cni)
- Set of reference and example plugins:
  - Interface plugins: ptp, bridge, macvlan, ...
  - "Chained" plugins: portmap, bandwidth, tuning, ...
    (github.com/containernetworking/plugins)
  NOTE: Plugins are executable programs using STDIN/STDOUT as the interface:

  ┌ Network                JSON config + command         ┌──────────────┐
  │ Runtime       ─────────(ADD/DEL/CHECK/VERSION)──────→│ CNI plugin   │
  ···──┤                         (STDIN)                 │ (executable) │
  └ Container(or Pod)                                    └──────┬───────┘
            ^                                                   │
            └──────────────── JSON result ←───(STDOUT)──────────┘

  ºRuntimesº                      º3rd party pluginsº
  K8s, Mesos, podman,             Calico, Weave, Cilium,
  CRI-O, AWS ECS, ...             ECS CNI, Bonding CNI, ...

- The idea of CNI is to provide a common interface between the runtime and the
  CNI (executable) plugins through standardised JSON messages.

  Example cli tool executing a CNI config:
  @[https://github.com/containernetworking/cni/tree/master/cnitool]
  INPUT_JSON
  {
    "cniVersion":"0.4.0",            ← Standard attribute
    "name":Bº"myptp"º,
    "type":"ptp",
    "ipMasq":true,
    "ipam": {                        ← Plugin specific attribute
      "type":"host-local",
      "subnet":"172.16.29.0/24",
      "routes":[{"dst":"0.0.0.0/0"}]
    }
  }

  $ echo $INPUT_JSON | \                   ← Create network config. It can be stored
    sudo tee /etc/cni/net.d/10-myptp.conf    on the file-system or in runtime
                                             artifacts (k8s etcd, ...)

  $ sudo ip netns add testing              ← Create network namespace.

  $ sudo CNI_PATH=./bin \                  ← Add container to network
    cnitool add Bºmyptpº \
    /var/run/netns/testing

  $ sudo CNI_PATH=./bin \                  ← Check config
    cnitool check myptp \
    /var/run/netns/testing

  $ sudo ip -n testing addr                ← Test
  $ sudo ip netns exec testing \
    ping -c 1 4.2.2.2

  $ sudo CNI_PATH=./bin \                  ← Clean up
    cnitool del myptp \
    /var/run/netns/testing
  $ sudo ip netns del testing

BºMaintainers (2020):º
- Bruce Ma (Alibaba)
- Bryan Boreham (Weaveworks)
- Casey Callendrello (IBM Red Hat)
- Dan Williams (IBM Red Hat)
- Gabe Rosenhouse (Pivotal)
- Matt Dupre (Tigera)
- Piotr Skamruk (CodiLime)
- "CONTRIBUTORS"
BºChat channelsº
- https://slack.cncf.io , topic #cni
Portainer UI
(See also LazyDocker)
• Portainer is an open-source management interface used to manage
  Docker hosts, Swarm and k8s clusters.
• It's used by software engineers and DevOps teams to simplify and
speed up software deployments.
Available on LINUX, WINDOWS ⅋ OSX
$ docker container run -d \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
External Links
- @[https://docs.docker.com/]
- @[https://github.com/jdeiviz/docker-training]      D.Peman@github
- @[https://github.com/jpetazzo/container.training]  container.training@Github
- @[http://container.training/]
Docker API
- @[https://docs.docker.com/engine/api/]
- @[https://godoc.org/github.com/docker/docker/api]
- @[https://godoc.org/github.com/docker/docker/api/types]
DockerD summary
- dockerD can listen for Engine API requests via:
- IPC socket: default /var/run/docker.sock
- tcp : WARN: default setup un-encrypted/un-authenticated
- fd : Systemd based systems only.
dockerd -H fd://.
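  Multiple -H listeners can be combined. Minimal sketch (the tcp listener below is
  RºUN-ENCRYPTEDº; protect it or add TLS before exposing it):
  $º# dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 º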
Bº################################º
Bº# Daemon configuration Options #º
Bº################################º
@[https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file]
└ - In the official Docker install, options must be set in the file:
    º/lib/systemd/system/docker.serviceº, appending them to the ExecStart= line.
- After editing the file, systemd must reload the service:
$º$ sudo systemctl stop docker.serviceº
$º$ sudo systemctl daemon-reload º
$º$ sudo systemctl start docker.serviceº
- Options include:
--config-file string default "/etc/docker/daemon.json"
-D, --debug Enable debug mode
--experimental Enable experimental features
--icc Enable inter-container communication (default true)
--log-driver string default "json-file"
-l, --log-level string default "info"
--mtu int Set the containers network MTU
--network-control-plane-mtu int Network Control plane MTU (default 1500)
--rootless Enable rootless mode; typically used with RootlessKit (experimental)
Bº################º
Bº# HOST STORAGE #º [storage.host]
Bº################º
Oº--data-root def:"/var/lib/docker"º ← NOTE: Softlinks supported for:
Oº--exec-root def:"/var/run/docker"º - (default) /var/lib/docker data directory
- (default) /var/lib/docker/tmp temp.directory
--storage-driver def: overlay2
--storage-opt "..."
ºENVIRONMENT VARIABLESº
DOCKER_DRIVER The graph driver to use.
DOCKER_RAMDISK If set this will disable "pivot_root".
BºDOCKER_TMPDIR Location for temporary Docker files.º
MOBY_DISABLE_PIGZ Do not use unpigz to decompress layers in parallel
when pulling images, even if it is installed.
DOCKER_NOWARN_KERNEL_VERSION Prevent warnings that your Linux kernel is
unsuitable for Docker.
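  Equivalent settings can usually be kept in /etc/docker/daemon.json
  (sketch; the values are illustrative):
  {
    "data-root"     : "/var/lib/docker",
    "storage-driver": "overlay2",
    "log-driver"    : "json-file",
    "log-opts"      : { "max-size": "50m", "max-file": "5" }
  }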
Bº########################º
Bº# Daemon storage-driver#º
Bº########################º
See also: @[https://docs.docker.com/storage/storagedriver/]
Docker daemon support next storage drivers:
└ aufs :Rºoldest (linux kernel patch unlikely to be merged)º
· BºIt allows containers to share executable and shared library memory, º
· Bº→ useful choice when running thousands of repeated containersº
└ devicemapper:
· thin provisioning and Copy on Write (CoW) snapshots.
· - For each devicemapper graph location - /var/lib/docker/devicemapper -
· a thin pool is created based on two block devices:
· - data : loopback mount of automatically created sparse file
· - metadata: loopback mount of automatically created sparse file
·
└ btrfs :
· -Bºvery fastº
· -Rºdoes not share executable memory between devicesº
· -$º# dockerd -s btrfs -g /mnt/btrfs_partition º
·
└ zfs :
· -Rºnot as fast as btrfsº
· -Bºlonger track record on stabilityº.
· -BºSingle Copy ARC shared blocks between clones allowsº
· Bºto cache just onceº
· -$º# dockerd -s zfsº ← select a different zfs filesystem by setting
· set zfs.fsname option
·
└ overlay :
· -Bºvery fast union filesystemº.
· -Bºmerged in the main Linux kernel 3.18+º
· -Bºsupport for page cache sharingº
· (multiple containers accessing the same file
· can share a single page cache entry/ies)
· -$º# dockerd -s overlay º
· -RºIt can cause excessive inode consumptionº
·
└ overlay2 :
-Bºsame fast union filesystem of overlayº
-BºIt takes advantage of additional features in Linux kernel 4.0+
Bºto avoid excessive inode consumption.º
-$º#Call dockerd -s overlay2 º
-Rºshould only be used over ext4 partitions (vs Copy on Write FS like btrfs)º
@[https://www.infoq.com/news/2015/02/under-hood-containers]
└ Vfs: a no-frills, no-magic storage driver, and one of the few
· that can run Docker in Docker.
└ Aufs: fast, memory hungry, not upstreamed driver, which is only
· present in the Ubuntu Kernel. If the system has the aufs utilities
· installed, Docker would use it. It eats a lot of memory in cases
· where there are a lot of start/stop container events, and has issues
· in some edge cases, which may be difficult to debug.
·
└ "... Diffs are a big performance area because the storage driver needs to
calculate differences between the layers, and it is particular to
each driver. Btrfs is fast because it does some of the diff
operations natively..."
- The Docker portable image format is composed of tar archives that
are largely for transit:
- Committing container to image with commit.
- Docker push and save.
- Docker build to add context to existing image.
- When creating an image, Docker will diff each layer and create a
tar archive of just the differences. When pulling, it will expand the
tar in the filesystem. If you pull and push again, the tarball will
change, because it went through a mutation process, permissions, file
attributes or timestamps may have changed.
- Signing images is very challenging, because, despite images being
mounted as read only, the image layer is reassembled every time. Can
be done externally with docker save to create a tarball and using gpg
to sign the archive.
Bº####################################º
Bº# Docker runtime execution options #º
Bº####################################º
└ The daemon relies on a OCI compliant runtime (invoked via the
containerd daemon) as its interface to the Linux kernel namespaces,
cgroups, and SELinux. More info at:
- @[/DevOps/linux_administration_summary.html?id=selinux_summary].
└ By default,Bºdockerd automatically starts containerdº.
- to control/tune containerd startup, manually start
containerd and pass the path to the containerd socket
using the --containerd flag. For example:
$º# dockerd --containerd /var/run/dev/docker-containerd.sockº
Bº#######################º
Bº# Insecure registries #º [image.registry]
Bº#######################º
└ Docker considers a private registry either:
- secure
- It uses TLS.
- CA cert exists in /etc/docker/certs.d/myregistry:5000/ca.crt.
- insecure
- not TLS used or/and
- CA-certificate unknown.
  -º--insecure-registry myRegistry:5000º requires modifying the docker daemon
   config file. The config path can vary depending on the system;
   it will be similar to the next one on a SystemD-enabled OS:
º/etc/systemd/system/docker.service.d/docker-options.confº
[Service]
Environment="DOCKER_OPTS= --iptables=false \
--data-root=/var/lib/docker \
--log-opt max-size=50m --log-opt max-file=5 \
                 --insecure-registry nexus.mycompany.com:10114"
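     The same can be expressed in /etc/docker/daemon.json (sketch):
     {
       "insecure-registries": ["nexus.mycompany.com:10114"]
     }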
Bº#################################º
Bº# Daemon user namespace options #º
Bº#################################º
- The Linux kernel user namespace support provides additional security
by enabling a process, and therefore a container, to have a unique
range of user and group IDs which are outside the traditional user
and group range utilized by the host system. Potentially the most
important security improvement is that, by default, container
☞Bºprocesses running as the root user will have expected administrativeº
Bºprivilege (with some restrictions) inside the container but willº
Bºeffectively be mapped to an unprivileged uid on the host.º
More info at:
@[https://docs.docker.com/engine/security/userns-remap/]
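  Enabling it usually amounts to a single daemon.json entry (sketch;
  "default" makes dockerd create and use the 'dockremap' user/group):
  {
    "userns-remap": "default"
  }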
$ docker help
Usage: docker COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default "/root/.docker")
-D, --debug Enable debug mode
-H, --host list Daemon socket(s) to connect to
-l, --log-level string Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
--tls Use TLS; implied by --tlsverify
--tlscacert string Trust certs signed only by this CA (default "/root/.docker/ca.pem")
--tlscert string Path to TLS certificate file (default "/root/.docker/cert.pem")
--tlskey string Path to TLS key file (default "/root/.docker/key.pem")
--tlsverify Use TLS and verify the remote
-v, --version Print version information and quit
Management Commands: | Commands:
Manage ... | attach Attach local STDIN/OUT/ERR streams to a running container
config Docker configs | build Build an image from a Dockerfile
container containers | commit Create a new image from a container's changes
image images | cp Copy files/folders between a container and the local filesystem
network networks | create Create a new container
node Swarm nodes | diff Inspect changes to files or directories on a container's filesystem
plugin plugins | events Get real time events from the server
secret Docker secrets | exec Run a command in a running container
service services | export Export a container's filesystem as a tar archive
swarm Swarm | history Show the history of an image
system Docker | images List images
trust trust on | import Import the contents from a tarball to create a filesystem image
Docker images | info Display system-wide information
volume volumes | inspect Return low-level information on Docker objects
| kill Kill one or more running containers
| load Load an image from a tar archive or STDIN
| login Log in to a Docker registry
| logout Log out from a Docker registry
| logs Fetch the logs of a container
| pause Pause all processes within one or more containers
| port List port mappings or a specific mapping for the container
| ps List containers
| pull Pull an image or a repository from a registry
| push Push an image or a repository to a registry
| rename Rename a container
| restart Restart one or more containers
| rm Remove one or more containers
| rmi Remove one or more images
| run Run a command in a new container
| save Save one or more images to a tar archive (streamed to STDOUT by default)
| search Search the Docker Hub for images
| start Start one or more stopped containers
| stats Display a live stream of container(s) resource usage statistics
| ("top" summary for all existing containers)
| stop Stop one or more running containers
| tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
| top Display the running processes of a container
| RºWARN:º"docker stats" is really what most people want
| when searching for a tool similar to UNIX "top".
| unpause Unpause all processes within one or more containers
| update Update configuration of one or more containers
| version Show the Docker version information
| wait Block until one or more containers stop, then print their exit codes
Install ⅋ setup
Proxy settings
  To configure Docker to work with an HTTP or HTTPS proxy server,
  follow the instructions for your OS:
  - Windows: "Get Started with Docker for Windows"
  - macOS  : "Get Started with Docker for Mac"
  - Linux  : "Control ⅋ config. Docker with Systemd"
docker global info (system setup, running/paused/stopped containers, ...)
  $ sudo docker info
  Containers: 23
   Running: 10
   Paused: 0
   Stopped: 1
  Images: 36
  Server Version: 17.03.2-ce
 ºStorage Driver: devicemapperº
   Pool Name: docker-8:0-128954-pool
   Pool Blocksize: 65.54 kB
   Base Device Size: 10.74 GB
   Backing Filesystem: ext4
   Data file: /dev/loop0
   Metadata file: /dev/loop1
  ºData Space Used: 3.014 GBº
  ºData Space Total: 107.4 GBº
  ºData Space Available: 16.11 GBº
  ºMetadata Space Used: 4.289 MBº
  ºMetadata Space Total: 2.147 GBº
  ºMetadata Space Available: 2.143 GBº
  ºThin Pool Minimum Free Space: 10.74 GBº
   Udev Sync Supported: true
   Deferred Removal Enabled: false
   Deferred Deletion Enabled: false
   Deferred Deleted Device Count: 0
  ºData loop file: /var/lib/docker/devicemapper/devicemapper/dataº
  ºMetadata loop file: /var/lib/docker/devicemapper/devicemapper/metadataº
   Library Version: 1.02.137 (2016-11-30)
 ºLogging Driver: json-fileº
 ºCgroup Driver: cgroupfsº
  Plugins:
   Volume: local
   Network: bridge host macvlan null overlay
  Swarm: inactive
  Runtimes: runc
  Default Runtime: runc
  Init Binary: docker-init
  containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
  runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
  init version: 949e6fa
 ºSecurity Options:º
 º seccompº
 º  Profile: defaultº
  Kernel Version: 4.17.17-x86_64-linode116
  Operating System: Debian GNU/Linux 9 (stretch)
  OSType: linux
  Architecture: x86_64
  CPUs: 2
  Total Memory: 3.838 GiB
  Name: 24x7
  ID: ZGYA:L4MN:CDCP:DANS:IEHQ:XYLD:C5KG:SUL4:3XLQ:ZO6M:3RSY:V6VB
 ºDocker Root Dir: /var/lib/dockerº
 ºDebug Mode (client): falseº
 ºDebug Mode (server): falseº
  *Registry: https://index.docker.io/v1/*
  Experimental: false
  Insecure Registries:
   127.0.0.0/8
  Live Restore Enabled: false
/var/run/docker.sock
@[https://medium.com/better-programming/about-var-run-docker-sock-3bfd276e12fd]
- Unix socket the Docker daemon listens on by default, used to
  communicate with the daemon from within a container.
- Can be mounted on containers to allow them to control Docker:
  $ docker run º-v /var/run/docker.sock:/var/run/docker.sockº ....
  USAGE EXAMPLE:
  # STEP 1. Create new container
  $ curl -XPOST º--unix-socket /var/run/docker.sockº \
      -d '{"Image":"nginx"}' \
      -H 'Content-Type: application/json' \
      http://localhost/containers/create
  Returns something similar to:
  → {"Id":"fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65","Warnings":null}
  # STEP 2. Use /containers/<id>/start to start the newly created container.
  $ curl -XPOST º--unix-socket /var/run/docker.sockº \
      http://localhost/containers/fcb6...7d65/start
  # STEP 3: Verify it's running:
  $ docker container ls
  CONTAINER ID  IMAGE  COMMAND               CREATED        STATUS        PORTS            NAMES
  fcb65c6147ef  nginx  "nginx -g 'daemon …"  5 minutes ago  Up 5 seconds  80/tcp, 443/tcp  ecstatic_kirch
  ...
ºStreaming events from the Docker daemonº
- The Docker API also exposes the º/eventsº endpoint:
  $ curl º--unix-socket /var/run/docker.sockº http://localhost/events
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    The command hangs, waiting for new events from the daemon.
    Each new event will then be streamed from the daemon.
Docker Networks
Create new network and use it in containers:
$ docker ºnetwork createº OºredisNetworkº
$ docker run --rm --name redis-server --network OºredisNetworkº -d redis
$ docker run --rm --network OºredisNetworkº -it redis redis-cli -h redis-server -p 6379
List networks:
$ docker network ls
Disconnect and connect a container to the network:
$ docker network disconnect OºredisNetworkº redis-server
$ docker network connect --alias db OºredisNetworkº redis-server
- kompose (TODO):
@[https://github.com/tldr-pages/tldr/blob/master/pages/common/kompose.md]
A tool to convert docker-compose applications to Kubernetes. More
information: https://github.com/kubernetes/kompose
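  Minimal usage sketch (assuming a docker-compose.yml in the current directory):
    $ kompose convert -f docker-compose.yml  ← writes one *-deployment.yaml
                                               (and *-service.yaml) per compose service
    $ kubectl apply -f .                     ← apply the generated manifests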
Volumes
REUSE VOLUME FROM CONTAINER:
STEP 0: Create new container with volume
host-mach $ docker run -it Oº--name alphaº º-v "hostPath":/var/logº ubuntu bash
container $ date > /var/log/now
STEP 1: Create new container using volume from previous container:
host-mach $ docker run --volumes-from Oºalphaº ubuntu
container $ cat /var/log/now
CREATE VOLUME FOR REUSE IN DIFFERENT CONTAINERS
STEP 0: Create Volume
host-mach $ docker volume create --name=OºwebsiteVolumeº
STEP 1: Use volume in new container
host-mach $ docker run -d -p 8888:80 \
              -v OºwebsiteVolumeº:/usr/share/nginx/html \
              -v logs:/var/log/nginx nginx
host-mach $ docker run \
              -v OºwebsiteVolumeº:/website \
              -w /website \
              -it alpine vi index.html
Ex.: Update redis version without losing data:
host-mach $ docker network create dbNetwork
host-mach $ docker run -d --network dbNetwork \
--network-alias redis \
--name redis28 redis:2.8
host-mach $ docker run -it --network dbNetwork \
alpine telnet redis 6379
→ SET counter 42
→ INFO server
→ SAVE
→ QUIT
host-mach $ docker stop redis28
host-mach $ docker run -d --network dbNetwork \
--network-alias redis \
--name redis30 \
--volumes-from redis28 \
redis:3.0
host-mach $ docker run -it --network dbNetwork \
alpine telnet redis 6379
→ GET counter
→ INFO server
→ QUIT
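A common (non-normative) pattern to back up / restore a named volume is to
mount it into a throwaway container together with a host directory
(volume and archive names below are just examples):
host-mach $ docker run --rm \
              -v OºwebsiteVolumeº:/data \
              -v "$(pwd)":/backup \
              alpine tar czf /backup/websiteVolume.tar.gz -C /data .
host-mach $ docker run --rm \
              -v OºwebsiteVolumeº:/data \
              -v "$(pwd)":/backup \
              alpine tar xzf /backup/websiteVolume.tar.gz -C /data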
docker-compose - YAML file defining services, networks and volumes. Full ref: @[https://docs.docker.com/compose/compose-file/] Best Patterns: @[https://docs.docker.com/compose/production/] BºExample 1º C⅋P from https://github.com/bcgov/moh-prime/blob/develop/docker-compose.yml version: "3" services: ######################################################### Database # postgres: restart: always container_name: primedb Bºimage: postgres:10.6º # ← use pre-built image environment: POSTGRES_PASSWORD: postgres ... ports: - "5432:5432" volumes: - local_postgres_data:/var/lib/postgresql/data Oºnetworks:º # ← Networks to connect to Oº - primenetº ########################################################## MongoDB # mongo: restart: always container_name: primemongodb image: mongo:3 environment: MONGO_INITDB_ROOT_USERNAME: root ... ports: - 8081:8081 volumes: - local_mongodb_data:/var/lib/mongodb/data Oºnetworks:º Oº - primenetº ############################################################## API # dotnet-webapi: container_name: primeapi restart: always ºbuild:º # ← use Dockerfile to build image context: prime-dotnet-webapi/ RºWARNº: remember to rebuild image and recreate app’s containers like: ┌───────────────────────────────────────────────┐ │ $ docker-compose build dotnet-webapi │ │ │ │ $ docker-compose up \ ← stop,destroy,recreate │ │ --no-deps ← prevents from also │ │ -d dotnet-webapi recreating any service│ │ primeapi depends on. │ └───────────────────────────────────────────────┘ command: "..." environment: ... Oºports: º ← Exposed ports outside private "primenet" network Oº - "5000:8080" º ← Map internal port (right) to "external" port Oº - "5001:5001" º Oºexpose:º ← Expose ports without publishing to host machine Oº - "5001"º (only accessible to linked services). Use internal port. Oºnetworks:º Oº - primenetº depends_on: - postgres ##################################################### Web Frontend # nginx-angular: build: context: prime-angular-frontend/ ... ################################################ Local SMTP Server # mailhog: container_name: mailhog restart: always image: mailhog/mailhog:latest ports: - 25:1025 - 1025:1025 - 8025:8025 # visit localhost:8025 to see the list of captured emails ... ########################################################### Backup # backup: ... restart: on-failure volumes: Oº- db_backup_data:/opt/backupº ... volumes: local_postgres_data: local_mongodb_data: db_backup_data: Oºnetworks:º primenet: driver: bridge BºExample 2º --- version: '3.6' x-besu-bootnode-def: ⅋besu-bootnode-def restart: "on-failure" image: hyperledger/besu:${BESU_VERSION:-latest} environment: - LOG4J_CONFIGURATION_FILE=/config/log-config.xml entrypoint: - /bin/bash - -c - | /opt/besu/bin/besu public-key export --to=/tmp/bootnode_pubkey; /opt/besu/bin/besu \ --config-file=/config/config.toml \ --p2p-host=$$(hostname -i) \ --genesis-file=/config/genesis.json \ --node-private-key-file=/opt/besu/keys/key \ --min-gas-price=0 \ --rpc-http-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} \ --rpc-ws-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} ; x-besu-def: ⅋besu-def restart: "on-failure" image: hyperledger/besu:${BESU_VERSION:-latest} environment: - LOG4J_CONFIGURATION_FILE=/config/log-config.xml entrypoint: - /bin/bash - -c - | while [ ! 
-f "/opt/besu/public-keys/bootnode_pubkey" ]; do sleep 5; done ; /opt/besu/bin/besu \ --config-file=/config/config.toml \ --p2p-host=$$(hostname -i) \ --genesis-file=/config/genesis.json \ --node-private-key-file=/opt/besu/keys/key \ --min-gas-price=0 \ --rpc-http-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} \ --rpc-ws-api=EEA,WEB3,ETH,NET,PERM,${BESU_CONS_API:-IBFT} ; x-ethsignerProxy-def: ⅋ethsignerProxy-def image: consensys/quorum-ethsigner:${QUORUM_ETHSIGNER_VERSION:-latest} command: [ "--chain-id=2018", "--http-listen-host=0.0.0.0", "--downstream-http-port=8545", "--downstream-http-host=rpcnode", "file-based-signer", "-k", "/opt/ethsigner/keyfile", "-p", "/opt/ethsigner/passwordfile" ] ports: - 8545 services: validator1: ˂˂ : *besu-bootnode-def volumes: - public-keys:/tmp/ - ./config/besu/config.toml:/config/config.toml - ./config/besu/permissions_config.toml:/config/permissions_config.toml - ./config/besu/log-config.xml:/config/log-config.xml - ./logs/besu:/var/log/ - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json - ./config/besu/networkFiles/validator1/keys:/opt/besu/keys networks: quorum-dev-quickstart: ipv4_address: 172.16.239.11 validator2: ˂˂ : *besu-def volumes: - public-keys:/opt/besu/public-keys/ - ./config/besu/config.toml:/config/config.toml - ./config/besu/permissions_config.toml:/config/permissions_config.toml - ./config/besu/log-config.xml:/config/log-config.xml - ./logs/besu:/var/log/ - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json - ./config/besu/networkFiles/validator2/keys:/opt/besu/keys depends_on: - validator1 networks: quorum-dev-quickstart: ipv4_address: 172.16.239.12 validator3: ˂˂ : *besu-def volumes: - public-keys:/opt/besu/public-keys/ - ./config/besu/config.toml:/config/config.toml - ./config/besu/permissions_config.toml:/config/permissions_config.toml - ./config/besu/log-config.xml:/config/log-config.xml - ./logs/besu:/var/log/ - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json - ./config/besu/networkFiles/validator3/keys:/opt/besu/keys depends_on: - validator1 networks: quorum-dev-quickstart: ipv4_address: 172.16.239.13 validator4: ˂˂ : *besu-def volumes: - public-keys:/opt/besu/public-keys/ - ./config/besu/config.toml:/config/config.toml - ./config/besu/permissions_config.toml:/config/permissions_config.toml - ./config/besu/log-config.xml:/config/log-config.xml - ./logs/besu:/var/log/ - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json - ./config/besu/networkFiles/validator4/keys:/opt/besu/keys depends_on: - validator1 networks: quorum-dev-quickstart: ipv4_address: 172.16.239.14 rpcnode: ˂˂ : *besu-def volumes: - public-keys:/opt/besu/public-keys/ - ./config/besu/config.toml:/config/config.toml - ./config/besu/permissions_config.toml:/config/permissions_config.toml - ./config/besu/log-config.xml:/config/log-config.xml - ./logs/besu:/var/log/ - ./config/besu/${BESU_CONS_ALGO:-ibft2}Genesis.json:/config/genesis.json - ./config/besu/networkFiles/rpcnode/keys:/opt/besu/keys depends_on: - validator1 ports: - 8545:8545/tcp - 8546:8546/tcp networks: quorum-dev-quickstart: ipv4_address: 172.16.239.15 ethsignerProxy: ˂˂ : *ethsignerProxy-def volumes: - ./config/ethsigner/password:/opt/ethsigner/passwordfile - ./config/ethsigner/key:/opt/ethsigner/keyfile depends_on: - validator1 - rpcnode ports: - 18545:8545/tcp networks: quorum-dev-quickstart: ipv4_address: 172.16.239.40 explorer: build: block-explorer-light/. 
image: quorum-dev-quickstart/block-explorer-light:develop depends_on: - rpcnode ports: - 25000:80/tcp networks: quorum-dev-quickstart: ipv4_address: 172.16.239.31 prometheus: image: "prom/prometheus" volumes: - ./config/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml - prometheus:/prometheus command: - --config.file=/etc/prometheus/prometheus.yml ports: - 9090:9090/tcp networks: quorum-dev-quickstart: ipv4_address: 172.16.239.32 grafana: image: "grafana/grafana" environment: - GF_AUTH_ANONYMOUS_ENABLED=true volumes: - ./config/grafana/provisioning/:/etc/grafana/provisioning/ - grafana:/var/lib/grafana ports: - 3000:3000/tcp networks: quorum-dev-quickstart: ipv4_address: 172.16.239.33 volumes: public-keys: prometheus: grafana: Oºnetworks: º Oº quorum-dev-quickstart: º Oº driver: bridge º Oº ipam: º Oº config: º Oº - subnet: 172.16.239.0/24 º
SystemD Integration
REF: https://gist.github.com/Luzifer/7c54c8b0b61da450d10258f0abd3c917
- /etc/compose/docker-compose.yml
- /etc/systemd/system/docker-compose.service
  (Service unit to start and manage docker compose)
  [Unit]
  Description=Docker Compose container starter
  After=docker.service network-online.target
  Requires=docker.service network-online.target

  [Service]
  WorkingDirectory=/etc/compose
  Type=oneshot
  RemainAfterExit=yes

  ExecStartPre=-/usr/local/bin/docker-compose pull --quiet
  ExecStart=/usr/local/bin/docker-compose up -d
  ExecStop=/usr/local/bin/docker-compose down

  ExecReload=/usr/local/bin/docker-compose pull --quiet
  ExecReload=/usr/local/bin/docker-compose up -d

  [Install]
  WantedBy=multi-user.target
- /etc/systemd/system/docker-compose-reload.service
  (Executing unit to trigger reload on docker-compose.service)
  [Unit]
  Description=Refresh images and update containers

  [Service]
  Type=oneshot
  ExecStart=/bin/systemctl reload-or-restart docker-compose.service
- /etc/systemd/system/docker-compose-reload.timer
  (Timer unit to plan the reloads)
  [Unit]
  Description=Refresh images and update containers
  Requires=docker-compose.service
  After=docker-compose.service

  [Timer]
  OnCalendar=*:0/15

  [Install]
  WantedBy=timers.target
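- Once the units above are in place, a typical activation sequence (sketch) is:
    $ sudo systemctl daemon-reload
    $ sudo systemctl enable --now docker-compose.service       ← pull + "up -d" at boot
    $ sudo systemctl enable --now docker-compose-reload.timer  ← re-pull every 15 min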
Registry ("Image repository")
@[https://docs.docker.com/registry/#what-it-is]
@[https://docs.docker.com/registry/introduction/]
BºSummaryº
$º$ docker run -d -p 5000:5000 \ º ← Start registry
$º --restart=always º
$º --name registry registry:2 º
$º$ docker pull ubuntu º ← Pull (example) image
$º$ docker image tag ubuntu \ º ← Tag the image to "point"
$º localhost:5000/myfirstimage º to local registry
$º$ docker push \ º ← Push to local registry
$º localhost:5000/myfirstimage º
$º$ docker pull \ º ← final Check
$º localhost:5000/myfirstimage º
NOTE: clean setup testing like:
$º$ docker container stop registry º
$º$ docker container rm -v registry º
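  Non-normative sketch to persist images on the host and protect the registry
  with basic auth (paths and credentials are just examples):
    $ mkdir auth
    $ docker run --rm --entrypoint htpasswd httpd:2 \     ← generate a bcrypt htpasswd entry
        -Bbn user01 changeMe > auth/htpasswd
    $ docker run -d -p 5000:5000 --restart=always --name registry \
        -v "$(pwd)"/registry-data:/var/lib/registry \     ← keep image layers on the host
        -v "$(pwd)"/auth:/auth \
        -e REGISTRY_AUTH=htpasswd \
        -e REGISTRY_AUTH_HTPASSWD_REALM="Registry Realm" \
        -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
        registry:2
    $ docker login localhost:5000                          ← then push/pull as above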
Dockerize
@[https://github.com/jwilder/dockerize]
- utility to simplify running applications in docker containers.
BºIt allows you to:º
Bº- generate app config. files at container startup timeº
Bº from templates and container environment variablesº
Bº- Tail multiple log files to stdout and/or stderrº
Bº- Wait for other services to be available using TCP, HTTP(S),º
Bº unix before starting the main process.º
typical use case:
- an application that has one or more configuration files where
  you would like to control some of the values using environment variables.
- dockerize lets you set an environment variable and update the config file
  before starting the containerized application.
- other use case: forward logs from hardcoded files on the filesystem to stdout/stderr
  (Ex: nginx logs to /var/log/nginx/access.log and /var/log/nginx/error.log by default)
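- Non-normative usage sketch inside a Dockerfile (template/service names are
  just examples; check the project README for the flags of your version):
    CMD dockerize \
        -template /etc/nginx/nginx.tmpl:/etc/nginx/nginx.conf \  ← render config from ENV vars
        -stdout /var/log/nginx/access.log \                      ← tail log files to stdout
        -stderr /var/log/nginx/error.log \
        -wait tcp://db:5432 -timeout 60s \                       ← wait for a dependency
        nginx -g 'daemon off;'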
Managing Containers
Boot-up/run container:
  $ docker run \                │  $ docker run \
    --rm \        ← remove      │    --rm \
    --name clock \   on exit    │    --name clock \
   º-dº \         ← daemon      │   º-tiº \         ← interactive
    jdeiviz/clock    mode       │    jdeiviz/clock     mode
Show container logs:
  $ docker logs clock
  $ docker logs --tail 3 clock
  $ docker logs --tail 1 --follow clock
Stop container:
  $ docker stop clock    # waits up to 10s before killing
  $ docker kill clock
Prune stopped containers:
  $ docker container prune
Container help:
  $ docker container
Monitoring running containers
Monitoring (Basic)
List containers instances:
$ docker ps # only running
$ docker ps -a # also finished, but not yet removed (docker rm ...)
$ docker ps -lq # ID of the most recently created container
"top" containers showing Net IO read/writes, Disk read/writes:
$ docker stats
| CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
| c420875107a1 postgres_trinity_cache 0.00% 11.66MiB / 6.796GiB 0.17% 22.5MB / 19.7MB 309MB / 257kB 16
| fdf2396e5c72 stupefied_haibt 0.10% 21.94MiB / 6.796GiB 0.32% 356MB / 693MB 144MB / 394MB 39
$ docker top 'containerID'
| UID PID PPID C STIME TTY TIME CMD
| systemd+ 26779 121423 0 06:11 ? 00:00:00 postgres: ddbbName cache 172.17.0.1(35678) idle
| ...
| systemd+ 121423 121407 0 Jul06 pts/0 00:00:44 postgres
| systemd+ 121465 121423 0 Jul06 ? 00:00:01 postgres: checkpointer process
| systemd+ 121466 121423 0 Jul06 ? 00:00:26 postgres: writer process
| systemd+ 121467 121423 0 Jul06 ? 00:00:25 postgres: wal writer process
| systemd+ 121468 121423 0 Jul06 ? 00:00:27 postgres: autovacuum launcher process
| systemd+ 121469 121423 0 Jul06 ? 00:00:57 postgres: stats collector process
SysDig
Container-focused Linux troubleshooting and monitoring tool.
Once Sysdig is installed as a process (or container) on the server,
it sees every process, every network action, and every file action
on the host. You can use Sysdig "live" or view any amount of historical
data via a system capture file.
Example: take a look at the total CPU usage of each running container:
$ sudo sysdig -c topcontainers_cpu
| CPU% container.name
| ----------------------------------------------------
| 80.10% postgres
| 0.14% httpd
| ...
|
Example: Capture historical data:
$ sudo sysdig -w historical.scap
Example: "Zoom into a client":
$ sudo sysdig -pc -c topprocs_cpu container.name=client
| CPU% Process container.name
| ----------------------------------------------
| 02.69% bash client
| 31.04% curl client
| 0.74% sleep client
Dockviz
@[https://github.com/justone/dockviz]
Show a graph of running container dependencies and
image dependencies.
Other options:
$ºdockviz images -tº
└─511136ea3c5a Virtual Size: 0.0 B
├─f10ebce2c0e1 Virtual Size: 103.7 MB
│ └─82cdea7ab5b5 Virtual Size: 103.9 MB
│ └─5dbd9cb5a02f Virtual Size: 103.9 MB
│ └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
├─ef519c9ee91a Virtual Size: 100.9 MB
└─02dae1c13f51 Virtual Size: 98.3 MB
└─e7206bfc66aa Virtual Size: 98.5 MB
└─cb12405ee8fa Virtual Size: 98.5 MB
└─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring
$ºdockviz images -t -l º← show only labelled images
└─511136ea3c5a Virtual Size: 0.0 B
├─f10ebce2c0e1 Virtual Size: 103.7 MB
│ └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
├─ef519c9ee91a Virtual Size: 100.9 MB
│ └─a7cf8ae4e998 Virtual Size: 171.3 MB Tags: ubuntu:12.10, ubuntu:quantal
│ ├─5c0d04fba9df Virtual Size: 513.7 MB Tags: nate/mongodb:latest
│ └─f832a63e87a4 Virtual Size: 243.6 MB Tags: redis:latest
└─02dae1c13f51 Virtual Size: 98.3 MB
└─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring
$ºdockviz images -tº-i º ← Show incremental size rather than cumulative
└─511136ea3c5a Virtual Size: 0.0 B
├─f10ebce2c0e1 Virtual Size: 103.7 MB
│ └─82cdea7ab5b5 Virtual Size: 255.5 KB
│ └─5dbd9cb5a02f Virtual Size: 1.9 KB
│ └─74fe38d11401 Virtual Size: 105.7 MB Tags: ubuntu:12.04, ubuntu:precise
└─02dae1c13f51 Virtual Size: 98.3 MB
└─e7206bfc66aa Virtual Size: 190.0 KB
└─cb12405ee8fa Virtual Size: 1.9 KB
└─316b678ddf48 Virtual Size: 70.8 MB Tags: ubuntu:13.04, ubuntu:raring
Weave cAdvisor+Prometheus+Grafana
@[https://blog.couchbase.com/monitoring-docker-containers-docker-stats-cadvisor-universal-control-plane/]
@[https://dzone.com/refcardz/intro-to-docker-monitoring?chapter=6]
@[https://github.com/google/cadvisor/blob/master/docs/running.md#standalone]
Managing images
(List all image related commands with: $ docker image)
$ docker images                       # ← List local ("downloaded/installed") images
$ docker search redis # ← Search remote images @ Docker Hub:
$ docker rmi ${IMG_NAME}:${IMG_VER}   # ← remove (local) image
$ docker image prune # ← removeºallºnon used images
-ºPUSH/PULL Images from Private Registry:º
-ºPRE-SETUP:º (Optional and opinionated, but recommended)
Define ENV. VARS. in BºENVIRONMENTº file
$ catBºENVIRONMENTº
# COMMON ENV. PARAMS for PRIVATE/PUBLIC REGISTRY: {{
USER=user01
IMG_NAME="postgres_custom"
IMG_VER="1.0" # ← Defaults to 'latest'
# }}
# PRIVATE REGISTRY ENV. PARAMS ONLY : {{
SESSION_TOKEN="dAhYK9Z8..." # ← Updated Each 'N' hours
REGISTRY=docker_registry.myCompany.com
# }}
-ºUPLOAD IMAGEº
ºALT1: UPLOAD TO PRIVATE REGISTRY:º │ ºALT2: UPLOAD TO DOCKER HUB:º
$ cat push_image_to_private_registry.sh │ $ cat push_image_to_dockerhub_registry.sh
#!/bin/bash │ #!/bin/bash
set -e # ← stop on first error │ set -e # ← stop on first error
.BºENVIRONMENTº │ .BºENVIRONMENTº
│
sudo dockerºloginº\ │ sudo dockerºloginº\
-u ${LOGIN_USER} \ │ -u ${LOGIN_USER} \
-p ${SESSION_TOKEN} \ │
${REGISTRY} │
│
sudo dockerºpushº \ │ sudo dockerºpushº \
${REGISTRY}/${USER}/\ │ /\
/${IMG_NAME}:${IMG_VER} │ /${IMG_NAME}:${IMG_VER}
-ºDOWNLOAD IMAGEº
ºALT1: DOWNLOAD FROM PRIVATE REGISTRY:º │ ºALT2: DOWNLOAD FROM DOCKER HUB:º
$ docker pull \ │ $ docker pull \
${REGISTRY}/${USER}/\ │ \
${IMG_NAME}:${IMG_VER} │ ${IMG_NAME}:${IMG_VER}
Build image
72.7 MB layer ←→ FROM registry.redhat.io/ubi7/ubi             Put the most frequently changed
40.0 MB layer ←→ COPY target/dependencies /app/dependencies   content in the last layer(s), so
 9.0 MB layer ←→ COPY target/resources    /app/resources      that when pushing a new image only
 0.5 MB layer ←→ COPY target/classes      /app/classes      ← that layer is uploaded. Conveniently,
                                                              the most frequently changed layer is
                                                              usually also the smallest one.
ENTRYPOINT java -cp \
/app/dependencies/*:/app/resources:/app/classes \
my.app.Main
$ docker build \
--build-arg http_proxy=http://...:8080 \
--build-arg https_proxy=https://..:8080 \
-t figlet .
$ cat ./Dockerfile
FROM ubuntu
RUN apt-get update
# Instalar figlet
ENTRYPOINT ["figlet", "-f", "script"]
Note: Unless you tell Docker otherwise, it will do as little work as possible when
building an image. It caches the result of each build step of a Dockerfile that
it has executed before and uses the result for each new build.
RºWARN:º
If a new version of the base image you’re using becomes available that
conflicts with your app, however, you won’t notice that when running the tests in
a container using an image that is built upon the older, cached version of the base image.
BºYou can force the build to look for newer versions of the base image with the "--pull" flagº.
Because new base images are only available once in a while, it’s not really
wasteful to use this argument all the time when building images.
(--no-cache can also be useful)
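Ex. (sketch) forcing a fully fresh build of the image above:
  $ docker build --pull --no-cache -t figlet .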
Image tags
adding a tag to an image essentially adds an alias.
A tag consists of:
'registry_server'/'user_name'/'image_name':'tag'
^^^^^^^^^^^^^^^^^
default one if not
indicated
Tag image:
$ docker tag jdeiviz/clock /clock:1.0
Future Improvements
@[https://lwn.net/Articles/788282/]
- "Rethinking container image delivery"
Container images today are mostly delivered via container registries,
like Docker Hub for public access, or an internal registry deployment
within an organization. Crosby explained that Docker images are
identified with a name, which is basically a pointer to content in a
given container registry. Every container image comes down to a
digest, which is a content address hash for the JSON files and layers
contained in the image. Rather than relying on a centralized registry
to distribute images, what Crosby and Docker are now thinking about
is an approach whereby container images can also be accessed and
shared via some form of peer-to-peer (P2P) transfer approach across
nodes.
Crosby explained that a registry would still be needed to handle the
naming of images, but the content address blobs could be transferred
from one machine to another without the need to directly interact
with the registry. In the P2P model for image delivery, a registry
could send a container image to one node, and then users could share
and distribute images using something like BitTorrent sync. Crosby
said that, while container development has matured a whole lot since
2013, there is still work to be done. "From where we've been over the
past few years to where we are now, I think we'll see a lot of the
same type of things and we'll still focus on stability and
performance," he said.
Dockerfile (Image creation)
Dockerfile ARG vs ENV @[https://vsupalov.com/docker-arg-vs-env/] ARG : Vaules are consumed/used at build time. Not available at runtime. ENV : Values are consumed/used at runtime by final app. ARG and ENV can be combined to provide default ENV (Runtime) values to apps like: ARG buildAppParam1=default_value ENV appParam1=$buildAppParam1 ENTRYPOINT vs COMMAND Extracted from: https://stackoverflow.com/questions/21553353/what-is-the-difference-between-cmd-and-entrypoint-in-a-dockerfile - Docker default entrypoint is /bin/sh -c. - ENTRYPOINT allows to override the default. - $ docker --entrypoint allows to override effective entrypoint. (ºENTRYPOINT is (purposely) more difficult to overrideº) - ENTRYPOINT is similar to the "init" process in Linux. It is the first command to be executed. Command are the params passed to the ENTRYPOINT. - There is no default command (to be executed by the entrypoint). It must be indicated either as: $ docker run -i -t ubuntu bash └─┬─┘ /bin/sh -c bash will be executed. └───┬─────┘ Or non-default entrypoint BºAs everything is passed to the entrypoint, very nice behavior appearsº: They will act as binary executables: Ex. If using ENTRYPOINT ["/bin/cat"] then $ ALIAS CAT="docker run myImage" $ CAT /etc/passwd └┬┘ will effectively execute next command on container image: ┌──┴────┐ $ /bin/cat /etc/passwd Ex. If using ENTRYPOINT ["redis", "-H", "something", "-u", "toto"] will be equivalent to executing redis with default params $ docker run redisimg get key 101 ┌ Dockerfile.base ──────────────────┐ │ FROM node:7.10-alpine │ │ │ │ RUN mkdir /src │ │ WORKDIR /src │ │ COPY package.json /src ← *1 │ RUN npm install ← *2 │ │ ┌ Dockerfile.child ─┐ │ ONBUILD ARG NODE_ENV ← *4 │ FROM node-base │ │ ONBUILD ENV NODE_ENV $NODE_ENV │ │ │ │ │ │ EXPOSE 8000 │ │ CMD [ "npm", "start" ] │ │ COPY . /src ← *3 └───────────────────────────────────┘ └───────────────────┘ # STEP 1: Compile base image # STEP 2: Compile child image ←º*4º $º$ docker build -t node-base \ º $º$ docker build -t node-child \ º $º -f Dockerfile.base . º $º -f Dockerfile.child \ º $º --build-arg NODE_ENV=... . º # STEP 3: Test $º$ docker run -p 8000:8000 -d node-child º º*1º Modifications in package.json will force rebuild from there triggering a new npm install on next step. RºWARN:ºIf the package.json is put after npm install then no npm install will be executed since Docker will not detect any change. º*2º slow process that doesn't change put before "moving parts" to avoid (but after copying any file that indicates that a new npm install must be triggered - package.json, package-lock.json, maybe "other") º*3º source code, images, CSS, ... will change frequently during development. Put in last position (top layer in image) so that new modification triggers just rebuild of last layer. º*4º Modify base image adding "ONBUILD" in places that are executed just during build in the image extending base image
MultiStage
@[https://docs.docker.com/develop/develop-images/multistage-build/]
- Example 1: Go multistage build:
┌─ Dockerfile.multistage ───────────────┐ Stage 1:
│ FROM ºgolang-1.14:alpineº AS ºbuildº ← Base Image with compiler, utility libs, ...
│ ADD . /src │ ( Maybe "hundreds" of MBs)
│ RUN cd /src ; go build Oº-o appº ← Let's Build final Oºexecutableº
│ │
│ │ Stage 2:
│ FROM ºalpine:1.14º ← Clean minimal image (maybe just ~4MB).
│ WORKDIR /app │
│ COPYº--from=buildº Oº/src/appº /app/ ← Copy Oºexecutableº to final image
│ ENTRYPOINT ./app │
└───────────────────────────────────────┘
$º$ docker build . -f Dockerfile.multistage \ º Build image from multistage Dockerfile
$º -t ${IMAGE_TAG} º
$º$ docker run --rm -ti ${IMAGE_TAG} º Finally Test it.
- Ex 2: Multi-stage NodeJS Build
• PRESETUP:
- Check with $º$ npm list --depth 3 º duplicated or similar dependencies.
Try to fix manually in package.json
- npm audit (See also online services like https://snyk.io,...)
- Avoid leaking dev ENV.VARs/secrets/...:
Alt 1: Alt 2: (Safer)
┌─ .dockerignore ────────┐ ┌─ .dockerignore ────────┐
│ + node_modules/ │ │ # ignore by default │
│ + .npmrc ← º*1º│ │ * ← Now it's safe to just:
│ + .env │ │ │ COPY . /my/workDir
│ + ....                │          │ !/docker-entrypoint.sh ← Explicitly mark what we want to copy.
└────────────────────────┘ │ !/another/file.txt ←
└────────────────────────┘
┌─────────────────────────────────────────┐ STAGE 1:
│ FROM node:14.2.0-alpine3.11 AS build   ← DON'T use non-deterministic versions (e.g.: node, node:14-alpine, ...)
│                                        │ a sha256 digest can also be used to lock to a precise version:
│ │ node:lts-alpine@sha256:aFc342a...
│ │
│ ADD . / app01_src/ │
│ │ @[https://docs.npmjs.com/cli/v7/commands/npm-ci]
│ RUN npm ci --only=production           ← ci: similar to npm install but optimized for Continuous Integration
│ │ Significantly faster when:
│ │ - There is a package-lock.json | npm-shrinkwrap.json file.
│ │ - node_modules/ folder is missing|empty.
│ │ --only=production: Skip non production dependencies (testing,..)
│ │ WARN: Avoid npm install (yarn install)
│ │
│ FROM node:16.10.0-alpine3.13 ← We can not just use FROM:alpine:3.13. In node we still need the
│ │ "big" image, since output artifacts are not self executables.
│ RUN mkdir /app │
│ WORKDIR /app │ We can still save some space removing un-needed sources.
│ USER node ← Avoid root
│ COPY º--from=buildº --chown=node:node \ ← Forget source, ... Keep only Oº"dist/"º executable and
│   /app01_src/dist /app                 │  (unfortunately) also keep the big (tens/hundreds of MBs)
│ COPY º--from=buildº --chown=node:node \ │ node_modules/ folder, still needed in production.
│ /app01_src/node_modules \ │
│ /app/node_modules │
│ │
│ ENV NODE_ENV production                ← Some libs only enable production optimizations when the var. is set
│ │
│ │
│ ENTRYPOINT ["node", "/app/dist/cli.js"] ← TODO: Check "dumb-init" alternative.
└─────────────────────────────────────────┘
NOTE: to handle OS signals add some code like:
async function handleSigInt(signal) {
await fastify.close()
process.exit()
}
process.on('SIGINT', handleSigInt)
º*1º: .npmrc can sometimes contain secrets (e.g. auth tokens); keep it out of the image.
Distroless
- "Distroless" images contain only your application and its runtime
  dependencies. (no package managers, shells, ...)
  Notice: In Kubernetes we can also use init containers with non-light
  images containing a full set of tools (sed, grep, ...) for pre-setup,
  avoiding any need to include them in the final image.
  Stable:                     Experimental (2019-06):
  gcr.io/distroless/static    gcr.io/distroless/python2.7
  gcr.io/distroless/base      gcr.io/distroless/python3
  gcr.io/distroless/java      gcr.io/distroless/nodejs
  gcr.io/distroless/cc        gcr.io/distroless/java/jetty
                              gcr.io/distroless/dotnet
  Ex.: Java Multi-stage Dockerfile:
  @[https://github.com/GoogleContainerTools/distroless/blob/master/examples/java/Dockerfile]
  ºFROMº openjdk:11-jdk-slim AS Oºbuild-envº
  ADD . /app/examples
  WORKDIR /app
  RUN javac examples/*.java
  RUN jar cfe main.jar examples.HelloJava examples/*.class

  FROM gcr.io/distroless/java:11
  COPY --from=Oºbuild-envº /app /app
  WORKDIR /app
  CMD ["main.jar"]
CNCF Buildpacks.io
• Build OCI images directly from source code.
• Used, among others, by Spring (Spring Boot 2.3+) with [JAVA]
  plugins for Gradle and Maven. e.g.: SpringBoot gradle integration:
  bootBuildImage {
      imageName = "${docker.username}/${project.name}:${project.version}"
      environment = ["BP_JVM_VERSION" : "11.*"]
  }
• Promotes best practices in terms of security.
• Defining CPU and memory limits for JVM containers is critical because
  they will be used to properly size items like JVM thread pools, heap
  memory and non-heap memory. Tuning manually is challenging.
  Fortunately, if using the Paketo implementation of Cloud Native
  Buildpacks (included for example in Spring Boot), the Java Memory
  Calculator is included automatically and will configure JVM memory
  based on the resource limits assigned to the container.
  Otherwise, results are unpredictable.
• No need to write a Dockerfile.
• Highly modular and customizable.
rootless Buildah
@[https://opensource.com/article/19/3/tips-tricks-rootless-buildah]
• Building containers in unprivileged environments.
• library+tool for building OCI images.
• complementary to Podman.
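• Minimal sketch of building an image with Buildah without a Dockerfile
  (package and image names are just examples):
    $ ctr=$(buildah from docker.io/library/alpine:latest)
    $ buildah run "$ctr" -- apk add --no-cache python3
    $ buildah config --entrypoint '["python3"]' "$ctr"
    $ buildah commit "$ctr" my-python-alpine
    $ buildah rm "$ctr"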
Build speed
@[https://www.redhat.com/sysadmin/speeding-container-buildah]
• The article addresses build-speed problems when using dnf/yum commands
  inside containers. It uses the name dnf (the upstream name) instead of
  what some downstreams use (yum); the comments apply to both dnf and yum.
Appsody (prebuilt images)
@[https://appsody.dev/docs]
- pre-configured application stacks for rapid development
of quality microservice-based applications.
- Stacks include language runtimes, frameworks, and any additional
libraries and tools needed for local development, providing
consistency and best practices.
- It consists of:
-ºbase-container-imageº:
- local development
- It defines the environment and specifies the stack behavior
during the development lifecycle of the application.
-ºProject templatesº
- starting point ('Hello World')
- They can be customized/shared.
- Stack layout example, my-stack:
my-stack
├── README.md # describes stack and how to use it
├── stack.yaml # different attributes and which template
├── image/ # to use by default
| ├── config/
| | └── app-deploy.yaml # deploy config using Appsody Operator
| ├── project/
| | ├── php/java/...stack artifacts
| | └── Dockerfile # Final (run) image ("appsody build")
│ ├── Dockerfile-stack # Initial (dev) image and ENV.VARs
| └── LICENSE # for local dev.cycle. It is independent
└── templates/ # of Dockerfile
├── my-template-1/
| └── "hello world"
└── my-template-2/
└── "complex application"
BºGenerated filesº
-º".appsody-config.yaml"º. Generated by $º$ appsody initº
It specifies the stack image used and can be overridden
for testing purposes to point to a locally built stack.
Bºstability levels:
-ºExperimentalº ("proof of concept")
- Support appsody init|run|build
-ºIncubatorº: not production-ready.
- active contributions and reviews by maintainers
- Support appsody init|run|build|test|deploy
- Limitations described in README.md
-ºStableº: production-ready.
- Support all Appsody CLI commands
- Pass appsody stack 'validate' and 'integration' tests
on all three operating systems that are supported by Appsody
without errors.
Example:
- stack must not bind mount individual files as it is
not supported on Windows.
- Specify the minimum Appsody, Docker, and Buildah versions
required in the stack.yaml
- Support appsody build command with Buildah
- Prevent creation of local files that cannot be removed
(i.e. files owned by root or other users)
- Specify explicit versions for all required Docker images
- Do not introduce any version changes to the content
provided by the parent container images
(No yum upgrade, apt-get dist-upgrade, npm audit fix).
- If package contained in the parent image is out of date,
contact its maintainers or update it individually.
- Tag stack with major version (at least 1.0.0)
- Follow Docker best practices, including:
- Minimise the size of production images
- Use the official base images
- Images must not have any major security vulnerabilities
- Containers must be run by non-root users
- Include a detailed README.md, documenting:
- short description
- prerequisites/setup required
- How to access any endpoints provided
- How users with existing projects can migrate to
using the stack
- How users can include additional dependencies
needed by their application
BºOfficial Appsody Repositories:º
https://github.com/appsody/stacks/releases/latest/download/stable-index.yaml
https://github.com/appsody/stacks/releases/latest/download/incubator-index.yaml
https://github.com/appsody/stacks/releases/latest/download/experimental-index.yaml
- By default, Appsody comes with the incubator and experimental repositories
(RºWARNº: Not stable by default). Repositories can be added by running :
$º$ appsody repoº
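  Non-normative workflow sketch (the stack name is just an example taken
  from the incubator repository):
    $ appsody repo list                      ← see configured repositories
    $ appsody list                           ← see available stacks
    $ appsody init incubator/nodejs-express  ← scaffold a new project from a template
    $ appsody run                            ← local dev loop inside the stack container
    $ appsody build                          ← produce the final (run) image
    $ appsody deploy                         ← deploy (e.g. via the Appsody Operator)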
alpine how-to
Next image (golang) is justº6Mbytesºin size:
@[https://hub.docker.com/r/ethereum/solc/dockerfile]
Dockerfile:
01 FROM alpine
02 MAINTAINER chriseth
03
04 RUN \
05   apk --no-cache --update add build-base cmake boost-dev git ⅋⅋ \
06   sed -i -E -e 's/include ˂sys\/poll.h˃/include ˂poll.h˃/' /usr/include/boost/asio/detail/socket_types.hpp ⅋⅋ \
07   git clone --depth 1 --recursive -b release https://github.com/ethereum/solidity ⅋⅋ \
08   cd /solidity ⅋⅋ cmake -DCMAKE_BUILD_TYPE=Release -DTESTS=0 -DSTATIC_LINKING=1 ⅋⅋ \
09   cd /solidity ⅋⅋ make solc ⅋⅋ install -s solc/solc /usr/bin ⅋⅋ \
10   cd / ⅋⅋ rm -rf solidity ⅋⅋ \
11   apk del sed build-base git make cmake gcc g++ musl-dev curl-dev boost-dev ⅋⅋ \
12   rm -rf /var/cache/apk/*
Notes:
- line 07: º--depth 1º: faster cloning (just last commit)
- line 07: the cloned repo contains nextº.dockerignoreº:
  01 # out-of-tree builds usually go here. This helps improving performance of uploading
  02 # the build context to the docker image build server
  03 */build*
  04
  05 # in-tree builds
  06 */deps*
Troubleshooting
Bº/var/lib/docker/devicemapper/devicemapper/data consumes too much spaceº
  $º$ sudo du -sch /var/lib/docker/devicemapper/devicemapper/dataº
  $º14G  /var/lib/docker/devicemapper/devicemapper/data          º
  [REF@StackOverflow]
BºDNS works on host, fails on containers:º
  Try to launch with the --network host flag. Ex.:
  ...
  DOCKER_OPTS="${DOCKER_OPTS} º--network hostº"
  SCRIPT="wget https://repo.maven.apache.org/maven2"  # ← DNS can fail with bridge
  echo "${SCRIPT}" | docker run ${DOCKER_OPTS} ${SCRIPT}
BºInspecting Linux namespaces of a running containerº
  Use nsenter (Bºutil-linuxº package) to "enter" into the container
  (network, filesystem, IPC, ...) namespace.
  $ cat enterNetworkNamespace.sh
  #!/bin/bash
  # REF: man nsenter
  # Run shell with network namespace of container.
  # Allows to use ping, ss/netstat, wget, trace, ... in
  # the context of the container.
  # Useful to check that the network setup is the appropriate one.
  CONT_PID=$( sudo docker inspect -f '{{.State.Pid}}' $1 )
  shift 1
  sudo ºnsenterº -t ${CONT_PID} º-nº
                                 ^^
                                 Use network namespace of container
  Ex. Usage:
  $ ./enterNetworkNamespace.sh myWebContainer01
  $ netstat -ntlp                 ← netstat installed on host (vs container)
  Active Internet connections (only servers)
  Proto Recv-Q Send-Q Local Address  Foreign Address  State
  tcp   0      0      0.0.0.0:80     0.0.0.0:*        LISTEN
Live Restore
@[https://docs.docker.com/config/containers/live-restore/]
Keep containers alive during daemon downtime
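Minimal sketch: enable it in /etc/docker/daemon.json and reload the daemon
configuration (running containers keep running while dockerd restarts):
  $ cat /etc/docker/daemon.json
  { "live-restore": true }
  $ sudo systemctl reload docker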
weave
@[https://github.com/weaveworks/weave]
- Weaveworks is the company that delivers the most productive way for
developers to connect, observe and control Docker containers.
- This repository contains Weave Net, the first product developed by
Weaveworks, with over 8 million downloads to date. Weave Net enables
you to get started with Docker clusters and portable apps in a
fraction of the time required by other solutions.
- Weave Net
- Quickly, easily, and securely network and cluster containers
across any environment. Whether on premises, in the cloud, or hybrid,
there’s no code or configuration.
- Build an ‘invisible infrastructure’
- powerful cloud native networking toolkit. It creates a virtual network
that connects Docker containers across multiple hosts and enables their
automatic discovery. Set up subsystems and sub-projects that provide
DNS, IPAM, a distributed virtual firewall and more.
- Weave Scope:
- Understand your application quickly by seeing it in a real time
interactive display. Pick open source or cloud hosted options.
- Zero configuration or integration required — just launch and go.
- automatically detects processes, containers, hosts.
No kernel modules, agents, special libraries or coding.
- Seamless integration with Docker, Kubernetes, DCOS and AWS ECS.
- Cortex: horizontally scalable, highly available, multi-tenant,
long term storage for Prometheus.
- Flux:
- Flux is the operator that Bºmakes GitOps happen in your clusterº.
It ensures that the cluster config matches the one in git and
automates your deployments.
- continuous delivery of container images, using version control
for each step to ensure deployment is reproducible,
auditable and revertible. Deploy code as fast as your team creates
it, confident that you can easily revert if required.
Learn more about GitOps.
@[https://www.weave.works/technologies/gitops/]
Clair
@[https://coreos.com/clair/docs/latest/]
open source project for the static analysis of vulnerabilities in
appc and docker containers.
Vulnerability data is continuously imported from a known set of sources and
correlated with the indexed contents of container images in order to produce
lists of vulnerabilities that threaten a container. When vulnerability data
changes upstream, the previous state and new state of the vulnerability along
with the images they affect can be sent via webhook to a configured endpoint.
All major components can be customized programmatically at compile-time
without forking the project.
Skopeo
@[https://github.com/containers/skopeo]
@[https://www.redhat.com/en/blog/skopeo-10-released]
- Command line utility for moving/copying container images between different types
of container storages. (docker.io, quay.io, internal container registry
local storage repository or even directly into a Docker daemon).
- It does not require root permissions (for most of its operations)
or even a docker daemon.
- Compatible with OCI images (standards) and original Docker v2 images.
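- Minimal usage sketch (image names are just examples):
    $ skopeo inspect docker://docker.io/library/alpine:latest  ← read remote manifest, no pull
    $ skopeo copy \
        docker://docker.io/library/nginx:latest \
        docker://quay.io/myuser/nginx:latest                   ← registry→registry copy, no daemon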
Security Tuning
@[https://opensource.com/business/15/3/docker-security-tuning]
LazyDocker
@[https://github.com/jesseduffield/lazydocker]
A simple terminal UI for both docker and docker-compose, written in
Go with the gocui library.
Convoy (Volume Driver for backups)
@[https://rancher.com/introducing-convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data/]
Introducing Convoy a Docker Storage Driver for Backup and Recovery of Volumes
Podman
- No system daemon required.
- rootless containers
- Podman is set to be the default container engine for the single-node
use case in Red Hat Enterprise Linux 8.
(CRI-O for OpenShift clusters)
- easy to use and intuitive.
- Most users can simply alias Docker to Podman (alias docker=podman)
-$º$ podman generate kubeº creates a Pod that can then be exported as Kubernetes-compatible YAML.
- enables users to run different containers in different user namespaces
- Runs at native Linux speeds.
(no daemon getting in the way of handling client/server requests)
- OCI compliant Container Runtime (runc, crun, runv, etc)
to interface with the OS.
- Podman libpod library manages container ecosystem:
- pods.
- containers.
- container images (pulling, tagging, ...)
- container volumes.
Introduction
$º$ podman search busybox º
→ INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED
→ docker.io docker.io/library/busybox Busybox base image. 1882 [OK]
→ docker.io docker.io/radial/busyboxplus Full-chain, Internet... 30 [OK]
→ ...
$º$ podman run -it docker.io/library/busybox º
$º/ # º
$º$ URL="https://raw.githubusercontent.com/nginxinc/docker-nginx"º
$º$ URL="${URL}/594ce7a8bc26c85af88495ac94d5cd0096b306f7/ "º
$º$ URL="${URL}/mainline/buster/Dockerfile "º
$º$ podman build -t nginx ${URL} º ← build Nginx web server using
└─┬─┘ official Nginx Dockerfile
└────────┐
┌─┴─┐
$º$ podman run -d -p 8080:80 nginx º ← run new image from local cache
└─┬─┘└┘
│ ^Port Declared @ Dockerfile
Effective
(Real)port
- To make it public, push it to any registry compatible with the
  BºOpen Containers Initiative (OCI) formatº. The options are:
  - Private Registry:
  - Public Registry:
- quay.io
- docker.io
$º$ podman login quay.io º ← Login into quay.io
$º$ podman tag localhost/nginx quay.io/${USER}/nginxº ← re-tag the image
$º$ podman push quay.io/${USER}/nginx º ← push the image
→ Getting image source signatures
→ Copying blob 38c40d6c2c85 done
→ ..
→ Writing manifest to image destination
→ Copying config 7f3589c0b8 done
→ Writing manifest to image destination
→ Storing signatures
$º$ podman inspect quay.io/${USER}/nginx º ← Inspect image
→ [
→ {
→ "Id": "7f3589c0b8849a9e1ff52ceb0fcea2390e2731db9d1a7358c2f5fad216a48263",
→ "Digest": "sha256:7822b5ba4c2eaabdd0ff3812277cfafa8a25527d1e234be028ed381a43ad5498",
→ "RepoTags": [
→ "quay.io/USERNAME/nginx:latest",
→ ...
Podman commands
@[https://podman.readthedocs.io/en/latest/Commands.html]
BºImage Management:º
build Build an image using instructions from Containerfiles
commit Create new image based on the changed container
history Show history of a specified image
image
└ build Build an image using instructions from Containerfiles
exists Check if an image exists in local storage
history Show history of a specified image
prune Remove unused images
rm Removes one or more images from local storage
sign Sign an image
tag Add an additional name to a local image
tree Prints layer hierarchy of an image in a tree format
trust Manage container image trust policy
images List images in local storage ( == image list)
inspect Display the configuration of a container or image ( == image inspect)
pull Pull an image from a registry (== image pull)
push Push an image to a specified destination (== image push)
rmi Removes one or more images from local storage
search Search registry for image
tag Add an additional name to a local image
BºImage Archive/Backups:º
import Import a tarball to create a filesystem image (== image import)
load Load an image from container archive ( == image load)
save Save image to an archive ( == image save)
BºPod Control:º
attach Attach to a running container ( == container attach)
containers Management
└ cleanup Cleanup network and mountpoints of one or more containers
commit Create new image based on the changed container
exists Check if a container exists in local storage
inspect Display the configuration of a container or image
list List containers
prune Remove all stopped containers
runlabel Execute the command described by an image label
BºPod Checkpoint/Live Migration:º
container checkpoint Checkpoints one or more containers
container restore Restores one or more containers from a checkpoint
$º$ podman container checkpoint $container_id\ º← Checkpoint and prepareºmigration archiveº
$º -e /tmp/checkpoint.tar.gz º
$º$ podman container restore \ º← Restore from archive at new server
$º -i /tmp/checkpoint.tar.gz º
create Create but do not start a container ( == container create)
events Show podman events
exec Run a process in a running container ( == container exec)
healthcheck Manage Healthcheck
info Display podman system information
init Initialize one or more containers ( == container init)
kill Kill one or more running containers with a specific signal ( == container kill)
login Login to a container registry
logout Logout of a container registry
logs Fetch the logs of a container ( == container logs)
network Manage Networks
pause Pause all the processes in one or more containers ( == container pause)
play Play a pod
pod Manage pods
port List port mappings or a specific mapping for the container ( == container port)
ps List containers
restart Restart one or more containers ( == container restart)
rm Remove one or more containers ( == container rm)
run Run a command in a new container ( == container run)
start Start one or more containers ( == container start)
stats Display a live stream of container resource usage statistics (== container stats)
stop Stop one or more containers ( == container stop)
system Manage podman
top Display the running processes of a container ( == container top)
unpause Unpause the processes in one or more containers ( == container unpause)
unshare Run a command in a modified user namespace
version Display the Podman Version Information
volume Manage volumes
wait Block on one or more containers ( == container wait)
BºPod Control: File systemº
cp Copy files/folders container ←→ filesystem (== container cp)
diff Inspect changes on container’s file systems ( == container diff)
export Export container’s filesystem contents as a tar archive ( == container export )
mount Mount a working container’s root filesystem ( == container mount)
umount Unmounts working container’s root filesystem ( == container mount)
BºPod Integrationº
generate Generated structured data
kube kube Generate Kubernetes pod YAML from a container or pod
systemd systemd Generate a BºSystemD unit fileº for a Podman container
SystemD Integration
https://www.redhat.com/sysadmin/improved-systemd-podman
- auto-updates help to make managing containers even more straightforward.
- SystemD is used in Linux to manage services (background long-running jobs listening for client requests) and their dependencies.
BºPodman running SystemD inside a containerº
└ /run ← tmpfs
/run/lock ← tmpfs
/tmp ← tmpfs
/var/log/journald ← tmpfs
/sys/fs/cgroup (configuration)(depends also on system running cgroup V1/V2 mode).
└───────┬───────┘
Podman automatically mounts the file-systems above in the container when:
- entry point of the container is either º/usr/sbin/init or /usr/sbin/systemdº
-º--systemd=alwaysºflag is used
BºPodman running inside SystemD servicesº
- SystemD needs to know which processes are part of a service so it
can manage them, track their health, and properly handle dependencies.
- This is problematic in Docker (according to Red Hat, a Docker rival) due to
  the client-server architecture of Docker:
- It's practically impossible to track container processes, and
pull-requests to improve the situation have been rejected.
- Podman implements a more traditional architecture by forking processes:
- Each container is a descendant process of Podman.
- Features like sd-notify and socket activation make this integration
even more important.
- sd-notify service manager allows a service to notify SystemD that
the process is ready to receive connections
- socket activation permits SystemD to launch the containerized process
only when a packet arrives from a monitored socket.
- Compatible with the audit subsystem (tracks/records user actions).
- the forking architecture allows systemd to track processes in a
container and hence opens the door for seamless integration of
Podman and systemd.
$º$ podman generate systemd --new $containerº ← Auto-generate containerized systemd units.
                            └─┬─┘
                    Without it, the generated unit references an existing
                    container and is tied to the creating host.
- Pods are also supported in Podman 2.0: container units that are part
  of a pod can now be restarted, which is especially helpful for auto-updates.
BºPodman auto-update (1.9+)º
- To use auto-updates:
- containers must be created with :
--label "io.containers.autoupdate=image"
- run in a SystemD unit generated by
$ podman generate systemd --new.
$º$ podman auto-update º ← Podman first looks up running containers with the
                           "io.containers.autoupdate" label set to "image" and then
                           queries the container registry for new images.
                          $ºIf a newer image is found, Podman restarts the        º
                          $ºcorresponding SystemD unit to stop the old container  º
                          $ºand create a new one with the updated image.          º
(still marked as experimental while collecting user feedback)
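     Non-normative end-to-end sketch (rootless; names are just examples):
       $ podman create --name myweb -p 8080:80 \
           --label "io.containers.autoupdate=image" \
           docker.io/library/nginx:latest
       $ podman generate systemd --new --name myweb \
           > ~/.config/systemd/user/container-myweb.service
       $ systemctl --user daemon-reload
       $ systemctl --user enable --now container-myweb.service
       $ podman auto-update         ← pulls a newer image (if any) and restarts the unit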
Setup Insec. HTTP registry
@[https://www.projectatomic.io/blog/2018/05/podman-tls/]
/etc/containers/registries.conf.
# This is a system-wide configuration file used to
# keep track of registries for various container backends.
# It adheres to TOML format and does not support recursive
# lists of registries.
[registries.search]
registries = ['docker.io', 'registry.fedoraproject.org', 'registry.access.redhat.com']
# If you need to access insecure registries, add the registry's fully-qualified name.
# An insecure registry is one that does not have a valid SSL certificate or only does HTTP.
[registries.insecure]
Bºregistries = ['localhost:5000']º
Protecting against Doki malware
https://containerjournal.com/topics/container-security/protecting-containers-against-doki-malware/
2+Millions images with Critical Sec.Holes
https://www.infoq.com/news/2020/12/dockerhub-image-vulnerabilities/
OpenSCAP: Scanning Vulnerabilities
- Scanning Containers for Vulnerabilities on RHEL 8.2 With OpenSCAP and Podman:
@[https://www.youtube.com/watch?v=nQmIcK1vvYc]
Container Networking
@[https://jvns.ca/blog/2016/12/22/container-networking/]
By Julia Evans
""" There are a lot of different ways you can network containers
together, and the documentation on the internet about how it works is
often pretty bad. I got really confused about all of this, so I'm
going to try to explain what it all is in laymen's terms. """
Bºwhat even is container networking?º
When running a program in a container, you have two main options:
- run app in host network namespace. (normal networking)
"host_ip":"app_port"
- run the program in its ownºnetwork namespaceº:
RºIt turns out that this problem of how to connect º
Rºtwo programs in containers together has a ton of º
Rºdifferent solutions. º
- "every container gets an IP". (k8s requirement)
"172.16.0.1:8080" // Tomcat continer app 1
"172.16.0.2:5432" // PostgreSQL container app1
"172.17.0.1:8080" // Tomcat continer app 2
...
└───────┬───────┘
any other program in the cluster will target those IP:port
Instead of single-IP:"many ports" we have "many IPs":"some ports"
Q: How to get many IPs in a single host?
- Host IP: 172.9.9.9
- Container private IP: 10.4.4.4
- To route from 10.4.4.4 to 172.9.9.9:
- Alt1: Configure Linux routing tables
$º$ sudo ip route add 10.4.4.0/24 via 172.23.1.1 dev eth0º
- Alt2: Use AWS VPC Route tables
- Alt3: Use Azure ...
BºEncapsulating to other networks:º
LOCAL NETWORK REMOTE NETWORK
(encapsulation)
IP: 10.4.4.4 IP: 172.9.9.9
TCP stuff (extra wrapper stuff)
HTTP stuff IP: 10.4.4.4
TCP stuff
HTTP stuff
- 2 different ways of doing encapsulation:
- "ip-in-ip": add extra IP-header on top "current" IP header.
MAC: 11:11:11:11:11:11
IP: 172.9.9.9
IP: 10.4.4.4
TCP stuff
HTTP stuff
Ex:
$º$ sudo ip tunnel add mytun mode ipip \ º ← Create tunnel "mytun"
$º remote 172.9.9.9 local 10.4.4.4 ttl 255 º
$º sudo ifconfig mytun 10.42.1.1 º
$º$ sudo route add -net 10.42.2.0/24 dev mytun º ← set up a route table
$º$ ip route list                              º ← verify the new route
- "vxlan": take whole packet
(including the MAC address) and wrap
it inside a UDP packet. Ex:
MAC address: 11:11:11:11:11:11
IP: 172.9.9.9
UDP port 8472 (the "vxlan port")
MAC address: ab:cd:ef:12:34:56
IP: 10.4.4.4
TCP port 80
HTTP stuff
-BºEvery container networking "thing" runs some kind of daemon program º
 Bºon every box which is in charge of adding routes to the route table,º
 Bºproviding automatic route configuration.º
- Alt1: routes are in etcd cluster, and program talks to the
etcd cluster to figure out which routes to set.
- Alt2: use BGP protocol to gossip to each other about routes,
and a daemon (BIRD) that listens for BGP messages on
every box.
BºQ: How does that packet actually end up getting to your container program?º
A: bridge networking
- Docker/... creates fake (virtual) network interfaces for every
single one of your containers with a given IP address.
- The fake interfaces are bridges to a real one.
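  Sketch: inspecting the default "bridge" network and its virtual interfaces
  on a Docker host (interface names vary per host):
    $ docker network inspect bridge \
        -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}'  ← e.g. 172.17.0.0/16
    $ ip addr show docker0                             ← the Linux bridge itself
    $ ip link | grep veth                              ← one veth pair end per container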
BºFlannel:º
- Supports vxlan (encapsulate all packets) and
host-gw (just set route table entries, no encapsulation)
- The daemon that sets the routes gets them ºfrom an etcd clusterº.
BºCalico:º
- Supports ip-in-ip encapsulation and
"regular" mode, (just set route table entries, no encaps.)
- The daemon that sets the routes gets them ºusing BGP messagesº
from other hosts. (etcd is not used for distributing routes).
CRI-O
CRI-O: container runtime for K8s / OpenShift.
OCI compliant Container Runtime Engines:
- Docker
- CRI-O
- containerd
Kaniko (rootless builds)
☞ NOTE: To build ºJAVA imagesº see also @[/JAVA/java_map.html?query=jib]
@[https://github.com/GoogleContainerTools/kaniko]
- tool to build container images inside an unprivileged container or
Kubernetes cluster.
- Although kaniko builds the image from a supplied Dockerfile, it does
not depend on a Docker daemon, and instead executes each command completely
in userspace and snapshots the resulting filesystem changes.
- The majority of Dockerfile commands can be executed with kaniko, with
the current exception of SHELL, HEALTHCHECK, STOPSIGNAL, and ARG.
Multi-Stage Dockerfiles are also unsupported currently. The kaniko team
have stated that work is underway on both of these current limitations.
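- Non-normative sketch of a local, daemon-less build with the kaniko executor
  image (flags follow the kaniko README; --no-push just discards the result,
  use --destination=registry/repo:tag to push instead):
  $º$ docker run --rm \                            º ← any unprivileged container runtime works here
  $º      -v "$PWD":/workspace \                   º ← build context mounted into the container
  $º      gcr.io/kaniko-project/executor:latest \  º
  $º      --dockerfile=/workspace/Dockerfile \     º
  $º      --context=dir:///workspace \             º
  $º      --no-push                                º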
Testcontainers
@[https://www.testcontainers.org/#who-is-using-testcontainers]
- Testcontainers is a Java library that supports JUnit tests,
providing lightweight, throwaway instances of common databases,
Selenium web browsers, or anything else that can run in a Docker
container.
- Testcontainers make the following kinds of tests easier:
- Data access layer integration tests: use a containerized instance
of a MySQL, PostgreSQL or Oracle database to test your data access
layer code for complete compatibility, but without requiring complex
setup on developers' machines and safe in the knowledge that your
tests will always start with a known DB state. Any other database
type that can be containerized can also be used.
- Application integration tests: for running your application in a
short-lived test mode with dependencies, such as databases, message
queues or web servers.
- UI/Acceptance tests: use containerized web browsers, compatible
with Selenium, for conducting automated UI tests. Each test can get a
fresh instance of the browser, with no browser state, plugin
variations or automated browser upgrades to worry about. And you get
a video recording of each test session, or just each session where
tests failed.
- Much more!
Testing Modules
- Databases
JDBC, R2DBC, Cassandra, CockroachDB, Couchbase, Clickhouse, DB2, Dynalite, InfluxDB, MariaDB, MongoDB,
MS SQL Server, MySQL, Neo4j, Oracle-XE, OrientDB, Postgres, Presto
- Docker Compose Module
- Elasticsearch container
- Kafka Containers
- Localstack Module
- Mockserver Module
- Nginx Module
- Apache Pulsar Module
- RabbitMQ Module
- Solr Container
- Toxiproxy Module
- Hashicorp Vault Module
- Webdriver Containers
Who is using Testcontainers?
- ZeroTurnaround - Testing of the Java Agents, micro-services, Selenium browser automation
- Zipkin - MySQL and Cassandra testing
- Apache Gora - CouchDB testing
- Apache James - LDAP and Cassandra integration testing
- StreamSets - LDAP, MySQL Vault, MongoDB, Redis integration testing
- Playtika - Kafka, Couchbase, MariaDB, Redis, Neo4j, Aerospike, MemSQL
- JetBrains - Testing of the TeamCity plugin for HashiCorp Vault
- Plumbr - Integration testing of data processing pipeline micro-services
 - Streamlio - Integration and Chaos Testing of our fast data platform based on Apache Pulsar, Apache BookKeeper and Apache Heron.
- Spring Session - Redis, PostgreSQL, MySQL and MariaDB integration testing
- Apache Camel - Testing Camel against native services such as Consul, Etcd and so on
- Infinispan - Testing the Infinispan Server as well as integration tests with databases, LDAP and KeyCloak
- Instana - Testing agents and stream processing backends
- eBay Marketing - Testing for MySQL, Cassandra, Redis, Couchbase, Kafka, etc.
- Skyscanner - Integration testing against HTTP service mocks and various data stores
- Neo4j-OGM - Testing new, reactive client implementations
- Lightbend - Testing Alpakka Kafka and support in Alpakka Kafka Testkit
- Zalando SE - Testing core business services
- Europace AG - Integration testing for databases and micro services
- Micronaut Data - Testing of Micronaut Data JDBC, a database access toolkit
- Vert.x SQL Client - Testing with PostgreSQL, MySQL, MariaDB, SQL Server, etc.
- JHipster - Couchbase and Cassandra integration testing
- wescale - Integration testing against HTTP service mocks and various data stores
- Marquez - PostgreSQL integration testing
- Transferwise - Integration testing for different RDBMS, kafka and micro services
- XWiki - Testing XWiki under all supported configurations
- Apache SkyWalking - End-to-end testing of the Apache SkyWalking,
and plugin tests of its subproject, Apache SkyWalking Python, and of
its eco-system built by the community, like SkyAPM NodeJS Agent
- jOOQ - Integration testing all of jOOQ with a variety of RDBMS
docker-compose: dev vs pro
https://stackoverflow.com/questions/60604539/how-to-use-docker-in-the-development-phase-of-a-devops-life-cycle/60780840#60780840
- See also the Docker docs section "Modify your Compose file for production".
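- The usual pattern (a sketch; file and service names below are illustrative,
  assuming the base file already defines a "web" service) is a base
  docker-compose.yml plus a small production override merged with repeated -f flags:
    # docker-compose.prod.yml  (only the production deltas)
    services:
      web:
        restart: always
        environment:
          - DEBUG=0
  $º$ docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d º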
CRIU.org: Container Live Migration
@[https://criu.org/Main_Page]
- CRIU (Checkpoint/Restore In Userspace, pronounced kree-oo, IPA: /krɪʊ/,
  Russian: криу) is a Linux project implementing checkpoint/restore
  functionality: it can freeze a running container (or an individual
  application) and checkpoint its state to disk. The saved data can later be
  used to restore the application and run it exactly as it was at the time of
  the freeze. This enables application/container live migration, snapshots,
  remote debugging, and many other things.
- Used for example to bootstrap JVMs in millisecs (vs secs):
  @[/JAVA/java_map.html#?jvm_app_checkpoint]
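- Non-normative sketch using Podman (which drives CRIU under the hood; it
  assumes CRIU is installed and root access, and the container name "web" is
  just an example):
  $º$ sudo podman run -d --name web nginx                   º ← start a container
  $º$ sudo podman container checkpoint web                  º ← freeze it + dump its state to disk
  $º$ sudo podman container restore web                     º ← resume it exactly where it stopped
  $º$ sudo podman container checkpoint -e /tmp/web.tar web  º ← export the checkpoint ...
  $º$ sudo podman container restore  -i /tmp/web.tar        º ← ... and restore it (possibly on another host)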
Avoid huge log dumps
https://devops.stackexchange.com/questions/12944/any-way-to-limit-docker-logs-output-by-default/12970#12970
- Problem Context:
  - Containers can output huge logs (maybe gigabytes).
  - $ docker logs 'container' can knock the host server down when the output is processed.
- To limit docker logs, specify limits in docker daemon's config file like:
/etc/docker/daemon.json
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
}
}
(then restart docker daemon after edit)
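  - The same limits can also be set per container at run time instead of
    daemon-wide (standard docker run logging options; 'nginx' is just an example image):
    $º$ docker run -d --log-driver json-file \  º
    $º      --log-opt max-size=10m \            º ← rotate the log file at 10 MiB
    $º      --log-opt max-file=3 \              º ← keep at most 3 rotated files
    $º      nginx                               º
    $º$ docker logs --tail 100 'container'      º ← and/or only read the last N lines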
NOTE: maybe ulimit can fix it at global (Linux OS) scope.
ContainerCoreInterceptor
https://github.com/AmadeusITGroup/ContainerCoreInterceptor
- ContainerCoreInterceptor (core_interceptor) can be used to handle core
  dumps in a dockerized environment. It listens on the local docker daemon
  socket for events; when it receives a "die" event it checks whether the
  dead container produced any core dump or Java heap dump.
KVM Kata containers
@[https://katacontainers.io/]
- Security: Runs in a dedicated kernel, providing isolation of
network, I/O and memory and can utilize hardware-enforced isolation
with virtualization VT extensions.
- Compatibility: Supports industry standards including OCI container
format, Kubernetes CRI interface, as well as legacy virtualization
technologies.
- Performance: Delivers consistent performance as standard Linux
containers; increased isolation without the performance tax of
standard virtual machines.
- Simplicity: Eliminates the requirement for nesting containers
inside full blown virtual machines; standard interfaces make it easy
to plug in and get started.
avoid "sudo" docker
$º$ sudo usermod -a -G docker "myUser" º ← add the user to the docker group
$º$ newgrp docker                      º ← take the new group without re-login
test images in 0.5 secs
@[https://medium.com/@aelsabbahy/tutorial-how-to-test-your-docker-image-in-half-a-second-bbd13e06a4a9]
...When you’re done with this tutorial you’ll have a small YAML
file that describes your docker image’s desired state. This will
allow you to test this:
$ docker run -p 8080:80 nginx
With this:
$ dgoss run -p 8080:80 nginx
- Goss is a YAML based serverspec alternative tool for validating a
server’s configuration. It eases the process of writing tests by
allowing the user to generate tests from the current system state.
Once the test suite is written they can be executed, waited-on, or
served as a health endpoint.
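- A minimal sketch of what such a goss.yaml could look like for the nginx
  image above (assertion keys follow goss's documented YAML syntax):
    # goss.yaml
    port:
      tcp:80:
        listening: true
    process:
      nginx:
        running: true
  $º$ dgoss edit -p 8080:80 nginx º ← optional: opens a shell in the container so assertions
                                      can be generated with "goss add ..." and saved on exit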