External Links
Linux From Scratch
[Man Pages],
[Command-line Tools Summary],
[Linux Standard Base],
Fedora System Administrator's Guide
Red Hat Enterprise Linux official documentation
Who-is-Who
  (Necessarily incomplete but still quite pertinent list of core people and companies)

- Linus Torvalds:
  - He loves C++ and microkernels; author of a Unix-like
    hobby project for x86. Nothing serious.
  @[http://www.linux-schule.net/linux/text34.html]
  @[https://en.wikipedia.org/wiki/Linus_Torvalds]
- Alan Cox
@[https://en.wikipedia.org/wiki/Alan_Cox]
- Ingo Molnár
  @[https://en.wikipedia.org/wiki/Ingo_Moln%C3%A1r]
  - Completely Fair Scheduler 
  - in-kernel TUX HTTP / FTP server.
  - thread handling enhancements
  - real-time preemption patch set
    (with Thomas Gleixner and others)
- Patrick Volkerding, creator of Slackware, the oldest Linux
  distribution still maintained.
@[https://en.wikipedia.org/wiki/Patrick_Volkerding]
- Marc Ewing, creator of Red Hat
@[https://en.wikipedia.org/wiki/Marc_Ewing]
- Robert Love
@[https://en.wikipedia.org/wiki/Robert_Love]
  Ximian (Linux Desktop Group), later Novell/SuSE; worked on GNOME.
  In 2007 he joined Google to work on Android, where he engineered several
  kernel and system-level solutions such as its novel shared-memory
  subsystem, ashmem. Since 2014 Love has worked at Google
  as Director of Engineering for Search Infrastructure.
- Andries Brouwer
@[https://en.wikipedia.org/wiki/Andries_Brouwer#Linux_kernel]
- Brendan D. Gregg: author of many great books and posts on
    Linux observability and monitoring
@[http://www.brendangregg.com/]
- Rusty Russell
@[https://en.wikipedia.org/wiki/Rusty_Russell]
  Worked on the Linux kernel's networking subsystem
  (netfilter, iptables) and on the Filesystem Hierarchy Standard.
- Many, many others.
Linux Basics
- Linux itself is just an OS kernel in charge of sharing the (limited) hardware
  resources amongst potentially many running tasks and users working simultaneously
  on the system. More precisely, the kernel's main control tasks are:

  - Orchestrate (schedule) how much time each running task is allowed to
    run on each CPU before such task is put on stand-by to let
    another task profit from that CPU.

  - Decide which process has access to the (limited)
    RAM on the system and move RAM data used by stand-by processes
    to secondary storage (disks) in case of RAM shortage.

  - Provide support for users and permissions, so that different users are
    able to isolate and protect their data from other users.

  Kernel control is transparent to running tasks or processes: user-space tasks
  run with the perception that they run alone on their own CPU and with
  all available memory for themselves. When the kernel puts them on hold
  such tasks are frozen, and once restarted they will NOT notice any
  change to the state they had before being put on hold. Only the kernel is aware
  of the complex trickery needed to make tasks run in parallel and isolated
  from each other.

  Other important services offered by the kernel are:

 - Abstract the running hardware into standardized interfaces, so
   user applications will not need to work differently with different hardware.
   For example an application will ask the kernel to write data to the disk and
   the kernel will take care of the internal differences between the myriad
   of different disk hardware technologies.

 - Provide an easy-to-use file-system to work with data on disk, so that apps
   can organize data in terms of files and directories, instead of just a
   bunch of bytes on the hard disk.

 - Provide network communication and support for standard network protocols
   like TCP/IP, Bluetooth, WiFi, ... so that each app does not need to reimplement
   them.

 - Provide mechanisms to allow two running tasks to communicate with each other
   at will.

ºKernel mode vs User modeº
- When the CPU is executing kernel code it's running with elevated privileges.
  The software has full control of the hardware and can do "anything" on
  the system and access all RAM, disk, network resources at will.

- Standard applications run in user mode and have access only to the
  RAM assigned to them by the kernel. They cannot inspect the memory of
  other processes running on the system; in fact they are not even aware
  of such memory, due to the many tricks done by the kernel to isolate
  each task.
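
  Ex (sketch): the user↔kernel boundary is crossed through system calls;
  strace makes them visible:

  $ strace -c ls /tmp               # ← run 'ls' and print a summary of the system
                                        calls (openat, read, write, ...) it issued
  $ strace -e trace=openat cat /etc/hostname
                                    # ← show only the 'openat' system calls issued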

ºFiles, files and more filesº
- Any running process needs some incoming data to work with and
  produces new data that must be stored somewhere.
   This data can be provided by some storage system (hard-disk,
  usb, tape, ...), arrive (continuously) from the network,
  or be generated by another concurrent process.
   Linux (UNIX actually) treats all input data sources and
  output data sinks as º"file devices"º.
  Internally there can be many differences (block devices with
  random access vs char devices with just sequential access), but
  running processes almost always use the file metaphor to access
  all of them.
   Any running process will have 3 devices available "for free":
  - STDIN : The standard input  file.
  - STDOUT: The standard output file
  - STDERR: The standard error  file
   The standard shell provides many utilities to juggle with those
  three standard files. In particular it makes it easy to forward
  the STDOUT output of a running process to the STDIN input of
  another running process using the "|" pipe syntax:
  $ command1 º|ºcommand2  # ← Send STDOUT output from command1 to
                                        STDIN   input   of command2

  STDOUT and STDERR are by default assigned to the terminal associated
  with the running process (the console where the user is logged in).
  The shell also allows redirecting STDOUT/STDERR to any other
  file in our file system. Ex:
  $ command1 1˃output.log         2˃error.log
             ^^^^^^^^^^^^         ^^^^^^^^^^^
             redirects STDOUT(1)  redirects STDERR
             to output.log        to error.log

  $ command1 1˃output.log         2˃⅋1
             ^^^^^^^^^^^^         ^^^^
             redirects STDOUT(1)  redirects STDERR
             to output.log        to STDOUT (⅋1, aka output.log)
Process model
- Linux follows a parent-child process model.

- Once loaded and initialized during the boot process,
  the kernel will launch an initial user-space process in
  charge of reading the system configuration and (re)starting
  all other user-space processes that build a running system.

- Normally this initial process is systemd in modern
  systems (or the classic SysV init in older or embedded ones).

- Each process can optionally launch new children processes
  up to the resource limits established on the running system.

- By default a child-process inherits the same user (and so, permissions)
  as its parent process. Some processes like the remote
  login "sshd" service (ssh is an acronym for secure-shell) will
  change the child-process user/permissions to a less privileged
  account.

- A simplified process-tree of a running-system will look like:

    PROCESS                                       USER    PROCESS   PARENT-ID
                                                         UNIQUE-ID
    systemd·······································root       1         0
          └─crond·································root      23         1
          |-cupsd·································root      70         1
          |-rtkit-daemon··························rtkit    100         1
          |-sshd··································root      10         1
          |    └─sshd·····························mike     300        10
          |         └─bash························mike     301       300
          |              └─firefox················mike     302       301
          |-systemd·······························alice    705         1
          |       └─at-spi-bus-laun···············alice    706       705
          |       |···············└─dbus-daemon···alice    707       706
          |       |-gnome-terminal················alice    883       705
          |                       ─bash-+·········alice    884       883
          |                             └─top·····alice    885       884
          |-systemd-journal·······················root      10         1
          ...
    Notice for example that the same program "bash" runs as one user or another
    (Gºmikeº or Qºaliceº) depending on the "path"
    followed until the process is executed.

    - The initial sshd, running as the root user, will spawn a new sshd child process
      with restricted Gº"mike"º privileges/permissions once the user
      has introduced the correct user and password in the remote ssh session, and
      from there on, all children will just be able to run with Gº"mike"º
      privileges/permissions.

    - Similarly the root systemd process will spawn a new child process with
      restricted Qº"alice"º privileges/permissions once she has logged in at
      the local console, and from there on, all children will just be able to
      run with Qº"alice"º privileges/permissions.
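
    A similar tree can be printed on a live system with:
    $ ps -eo pid,ppid,user,cmd --forest  # ← -e: all processes, -o: choose columns,
                                             --forest: draw the parent/child hierarchy
    $ pstree -p -u                       # ← alternative: compact tree with PIDs (-p)
                                             and user-id transitions (-u)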
executable file vs in-memory process
- Applications are stored on disk drives as files or "bunch-of-instructions and initial data".

- When the kernel executes an application it will read the executable file, load
  the "bunch-of-instructions" into RAM, set up the initial data, assign
  restricted privileges and finally allow the program-in-memory to be executed by
  any free/available CPU on the system.
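
  Ex (sketch): the on-disk file and the in-memory process can be inspected separately:
  $ file /usr/bin/sleep           # ← the executable file on disk (an ELF binary)
  $ sleep 100 &                   # ← execute it: the kernel creates a new process
  $ ls -l /proc/$!/exe            # ← the running process ($!: PID of the last
                                      background job) keeps a link to its executable
  $ head -5 /proc/$!/status       # ← per-process state maintained by the kernel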
Basic file permissions:
Standard file permissions allow assigning different access permissions to
the owner of the file, the group owner of the file and anyone else.

$ ls -l myFileOfInterest.db

-rw?r-?--?  john accountDepartment ....  myFileOfInterest.db
│└┼┘└┼┘└┼┘  └┬─┘ └───────┬───────┘
│ │  │  │    │           └─ group owner
│ │  │  │    └───────────── user  owner
│ │  │  │
│ │  │  └────────────────── permissions allowed to others: no access
│ │  │                      ('?' slot shows 't'/'T' if the sticky bit is set)
│ │  └───────────────────── permissions allowed to group : read access
│ │                         ('?' slot shows 's'/'S' if the SGID bit is set)
│ └──────────────────────── permissions allowed to user  : read+write access
│                           ('?' slot shows 's'/'S' if the SUID bit is set)
└────────────────────────── file type ('-' regular file, 'd' directory, ...)

Previous line can be read as:
"""Allow read and write permissions to file-owner "john",
       read permissions to group-owner "accountDepartment"
   and no   permissions to anyone-else """

            ┌──────┬─────────────────────────────┬─────────────────────────────┐
Permissions │Symbol│      FILE                   │     DIRECTORY               │
┌───────────┼──────┼─────────────────────────────┼─────────────────────────────┤
│       read│  r   │ Allows to read the content  │ Allows to list the files and│
│           │      │ of the file                 │ file-attributes in the      │
│           │      │                             │ directory                   │
├───────────┼──────┼─────────────────────────────┼─────────────────────────────┤
│      write│  w   │ Allows to write, modify,    │ Allows to add and delete    │
│           │      │ append or delete the file   │ files into the directory and│
│           │      │ content.                    │ modify metadata (access     │
│           │      │                             │ or modification time, ...)  │
├───────────┼──────┼─────────────────────────────┼─────────────────────────────┤
│    execute│  x   │ Allows to execute the       │ Allows to enter into the    │
│           │      │ program or script           │ directory                   │
├───────────┼──────┼─────────────────────────────┴─────────────────────────────┤
│    sticky │  T   │ Inside a sticky directory, only the owner of a file can   │
│           │      │ delete or rename it, even if other users have write       │
│           │      │ permission on the directory.                              │
│           │      │ turn on: $ chmod +t someFileOrDir                         │
│           │      │ Normally /tmp (temporary user files) is a sticky directory│
├───────────┼──────┼───────────────────────────────────────────────────────────┤
│       suid│  S   │ Allow SUID/SGID (set user/group ID). When executed, the   │
│           │      │ program runs with the user/group of the file owner,       │
│           │      │ instead of the current user's.                            │
│           │      │ turn on: $ chmod +s someFileOrDir                         │
└───────────┴──────┴───────────────────────────────────────────────────────────┘
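
Ex: changing the permission bits with symbolic or octal syntax:
$ chmod u+x  myScript.sh           # ← add execute permission for the user owner
$ chmod g-w,o-rwx myFile           # ← remove write from group, everything from others
$ chmod 640  myFileOfInterest.db   # ← octal: user=rw(6), group=r(4), others=none(0)
$ chown john:accountDepartment myFileOfInterest.db  # ← change user and group owner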
NTP-Clock how-to
   # REF:@[https://www.server-world.info/en/note?os=Fedora_25&p=ntp&f=2]
01 $ sudo dnf -y install ntp     # ← Install ntp client
02 $ vim /etc/ntp.conf
   → # line 18: add the network range you allow to receive requests
   →
   → restrict 10.0.0.0 mask 255.255.255.0 nomodify notrap
   →   server 0.fedora.pool.ntp.org iburst
   → # server 1.fedora.pool.ntp.org iburst
   → # server 2.fedora.pool.ntp.org iburst
   → # server 3.fedora.pool.ntp.org iburst
   → # server ntp.nict.jp iburst
   → # server ntp1.jst.mfeed.ad.jp iburst
   → # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   → # Enable/disable NTP servers "at will"
03 $ sudo systemctl start ntpd
04 $ sudo systemctl enable ntpd
05 $ ntpq -p  # check

   # REF:@[https://www.digitalocean.com/community/tutorials/how-to-set-up-timezone-and-ntp-synchronization-on-ubuntu-14-04-quickstart]
06 $ sudo timedatectl set-timezone Europe/London
User Mng
Create new user
$ useradd [options] LOGIN : creates new user with default+specified values

$ useradd -D  #  display default values
(ex.output)
  GROUP=100
  HOME=/home
  INACTIVE=-1
  EXPIRE=
  SHELL=/bin/bash
  SKEL=/etc/skel
  CREATE_MAIL_SPOOL=yes

$ useradd -D [options] # update default values
----- next opts have defaults if not indicated ----------
  --base-dir BASE_DIR  :  (default to /home) Ignored if --home-dir set
  --expiredate EXPIRE_DATE
  --inactive INACTIVE  :  day # after pass.expiration before disabling
  --gid GROUP: existing group name or ID for initial group (when --no-user-group used)
  --shell SHELL
--------------------------------------------------------
  --groups group1,group2,... supplementary groups
  --skel SKEL_DIR :  skel. dir. to be copied in the user's home directory
  --key KEY=VALUE : Overrides /etc/login.defs defaults
                    (UID_MIN, UID_MAX, UMASK, PASS_MAX_DAYS and others).
                    Example: -K PASS_MAX_DAYS=-1 can be used when creating
                      system account to turn off password ageing, even though
                      system account has no password at all.
  --no-log-init   : Do not add user to lastlog and faillog databases
  --create-home   : Create the user's home directory if it does not exist.
                    By default no home directories are created
  --no-create-home: Do not create the user's home directory if enabled in defaults
  --no-user-group : Do not create a group. Initial group indicated by --gid
  --non-unique    : Allow duplicate (non-unique) existing UID in --uid
  --password PASS : (disabled by default)
  --system        : Create system account (no aging, uid chosen in
                     SYS_UID_MIN-SYS_UID_MAX range)
  --root CHROOTDIR: Apply changes in chrooted directory
  --uid UID       : numerical value for user's ID.
  --user-group    : Create group with the same name as user, and use as initial group
  --selinux-user SEUSER : SELinux user for the user's login
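
Ex (sketch; user and group names are just placeholders):
$ sudo useradd --create-home --shell /bin/bash \
               --user-group --groups wheel \
               alice                 # ← create 'alice' with a home dir, her own
                                         group and supplementary group 'wheel'
$ sudo passwd alice                  # ← set the initial password
$ id alice                           # ← verify uid, gid and groups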
su/sudo: Switch user
- su and sudo are mostly used to grant standard users temporary
  root/superuser access for administrative tasks like installing new
  applications, re-configuring the network, ...

- sudo is usually considered safer than su. Ubuntu was one of the first
  distributions to default to a sudo-only setup. Other distributions are
  also moving to sudo-only as time passes.

- sudo also offers a pluggable architecture, not offered by su,
  to provide different authentication and audit mechanisms.
  REF:
  - Sudo Home page
  - sudo Third-party plugins.

man 1 su
man 8 sudo
Ex. usage:
$ sudo vim /etc/passwd # edit /etc/passwd as root
 $ su  # Change to root user
 #

ACLs
@[https://wiki.archlinux.org/index.php/Access_Control_Lists]
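Ex (sketch; user/group/file names are placeholders): ACLs extend the basic
owner/group/others model with per-user and per-group entries:
$ setfacl -m u:john:rw  report.db    # ← grant read+write to user 'john'
$ setfacl -m g:audit:r  report.db    # ← grant read       to group 'audit'
$ getfacl report.db                  # ← list all ACL entries of the file
$ setfacl -x u:john     report.db    # ← remove john's entry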
Job/Process control
Scheduling
Tasks
ºcronº
ºatº
ºanacronº
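Ex (sketch; paths and service names are placeholders):
$ crontab -e                         # ← edit the current user's cron table
  # ┌min ┌hour ┌day-of-month ┌month ┌day-of-week
    30   2     *             *      *    /home/myUser/bin/backup.sh
  # ← run backup.sh every day at 02:30
$ crontab -l                         # ← list the installed cron table
$ echo "systemctl restart myApp" | at 04:00  # ← one-shot job today/tomorrow at 04:00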
basic process monit/control
$ ºpsº   ← shows the list of running processes.
         Without options: processes belonging to the current user and with a controlling terminal
  Ex. options include:
   -a: all processes from all users
   -u: add user names, %cpu usage, and %mem usage,...
   -x: add also processes without controlling terminals
   -l: add information including UID and nice value
   --forest: show process hierarchy.

$ ºpstreeº ← show parent/children process tree (-p flag show pid)

$ ºtop -n 1º ← Display top processes (by CPU) once and finish
$ ºtopº      ← real-time display of processes ordered by CPU/memory/... usage

  Z,B,E,e   Global: 'Z' colors; 'B' bold; 'E'/'e' summary/task memory scale
  l,t,m     Toggle Summary: 'l' load avg; 't' task/cpu stats; 'm' memory info
  0,1,2,3,I Toggle: '0' zeros; '1/2/3' cpus or numa node views; 'I' Irix mode
  f,F,X     Fields: 'f'/'F' add/remove/order/sort; 'X' increase fixed-width

  L,⅋,<,> . Locate: 'L'/'⅋' find/again; Move sort column: '<'/'>' left/right
  R,H,V,J . Toggle: 'R' Sort; 'H' Threads; 'V' Forest view; 'J' Num justify
  c,i,S,j . Toggle: 'c' Cmd name/line; 'i' Idle; 'S' Time; 'j' Str justify
  x,y     . Toggle highlights: 'x' sort field; 'y' running tasks
  z,b     . Toggle: 'z' color/mono; 'b' bold/reverse (only if 'x' or 'y')
  u,U,o,O . Filter by: 'u'/'U' effective/any user; 'o'/'O' other criteria
  n,#,^O  . Set: 'n'/'#' max tasks displayed; Show: Ctrl+'O' other filter(s)
  C,...   . Toggle scroll coordinates msg for: up,down,left,right,home,end

  k,r       Manipulate tasks: 'k' kill; 'r' renice
  d or s    Set update interval
  W,Y       Write configuration file 'W'; Inspect other output 'Y'
  q         Quit

$ ºiotopº    # ← Simple top-like I/O monitor
@[https://linux.die.net/man/1/iotop]

$ ºkill -lº   ← Display existing signals (kill defaults to SIGTERM, which most of the time
              will just terminate the process "cleanly")
→  1) SIGHUP   2) SIGINT   3) SIGQUIT  4) SIGILL   5) SIGTRAP
→  6) SIGABRT  7) SIGBUS   8) SIGFPE   9) SIGKILL 10) SIGUSR1
→ 11) SIGSEGV 12) SIGUSR2 13) SIGPIPE 14) SIGALRM 15) SIGTERM
→ 16) SIGSTKFLT   17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
→ 21) SIGTTIN 22) SIGTTOU 23) SIGURG  24) SIGXCPU 25) SIGXFSZ
→ 26) SIGVTALRM   27) SIGPROF 28) SIGWINCH    29) SIGIO   30) SIGPWR
→ 31) SIGSYS  34) SIGRTMIN    35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3
→ 38) SIGRTMIN+4  39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
→ 43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12 47) SIGRTMIN+13
→ 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14 51) SIGRTMAX-13 52) SIGRTMAX-12
→ 53) SIGRTMAX-11 54) SIGRTMAX-10 55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7
→ 58) SIGRTMAX-6  59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
→ 63) SIGRTMAX-1  64) SIGRTMAX

$ ºkill [ -s (signal name)] 'process_id'º ← Send signal to process. kill -9 kills unconditionally
$ killall "process_name"  ← send signal to all processes matching full name
$ pkill "process_name"    ← send signal to all processes matching part of the name
$ skill  ← send a particular signal to command/username/tty.
        -L --- list the various signals that can be sent
        -u --- specify a username;
        -p --- process id (followed by the process id)
        -c --- command name (this is the same as killall)
        -t --- (tty number)
        -v --- verbose mode
        -i --- interactive mode.

PAUSE AND CONTINUE A PROCESS:
$ ºkill -STOP "pid"º # Pauses
$ ºkill -CONT "pid"º # Continues

$ ºnice -n -20 makeº ← Runs make with niceness -20.
                     -20 is the maximum priority (negative values only allowed to root),
                      19 is the minimum priority.
$ ºrenice 10 "pid"º  ← Changes the priority of a running process.
Job control++
nice
STOP/CONT processes
threads
chroot
cgroups
...
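Ex (sketch; on systemd-based systems): an ad-hoc cgroup can be created with
systemd-run to cap the resources of a single command:
$ sudo systemd-run --scope \              # ← run 'make' inside a transient cgroup
    -p CPUQuota=50% \                         limited to half a CPU and 500MB of RAM
    -p MemoryMax=500M \                       (MemoryMax requires cgroup v2; on
    make -j4                                   cgroup v1 use MemoryLimit instead)
$ systemd-cgtop                           # ← top-like view of per-cgroup usage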
GNU Parallel
man 1 parallel
REF 1
REF 2

- Replacement for xargs and for-loops.
- It can also split a file or a stream into blocks and pass those to commands running in parallel.

Ex:

$ parallel --jobs 200% gzip ::: *.html  # ← Compress all *.html files in parallel
                                            200% → 2 jobs per CPU thread

$ parallel lame {} -o {.}.mp3 ::: *.wav # ← Convert all *.wav to *.mp3 using lame

$ cat bigfile | \                       # ← Chop bigfile into 1MB blocks and grep
  parallel --pipe grep foobar               for the string foobar

ºINPUT SOURCESº
$ parallel echo ::: cmd line input source
$ cat input_from_stdin | parallel echo
$ parallel echo ::: multiple input sources ::: with values
$ parallel -a input_from_file echo
$ parallel echo :::: input_from_file
$ parallel echo :::: input_from_file ::: and command line

ºReplacement stringº
{}                    ← mydir/mysubdir/myfile.myext
{.}                   ← mydir/mysubdir/myfile
{/}, {//}, {/.}       ← myfile.myext, mydir/mysubdir, myfile
{#}                   ← The sequence number of the job
{%}                   ← The job slot number
{2}                   ← Value from the second input source
{2.} {2/} {2//} {2/.} ← Combination of {2} and {.} {/} {//} {/.}
{= perl expression =} ← Change $_ with perl expression

$ parallel --keep-order "sleep {}; echo {}" ::: 5 4 3 2 1 # ← ºKeep input order in outputº

ºControl the executionº
$ parallel --jobs 2 "sleep {}; echo {}" ::: 5 4 3 2 1  # ← Run 2 jobs in parallel

$ parallel --dryrun echo ::: Red Green Blue ::: S M L  # ← Dry-run: show what would be
                                                           executed without running it

ºRemote executionº
$ parallel -S server1 -S server2 "hostname; echo {}" ::: foo bar

ºPipe modeº
cat bigfile | parallel --pipe wc -l

$ parallel -a bigfile --pipepart --block -1 grep foobar # ← Chop bigfile into one block per CPU
                                                            thread and grep for foobar
Network
NetworkManager
@[https://www.redhat.com/sysadmin/becoming-friends-networkmanager]
@[https://linux.die.net/man/8/networkmanager]
@[https://linux.die.net/man/5/networkmanager.conf]
@[https://linux.die.net/man/1/nmcli]
@[https://linux.die.net/man/8/networkmanager_selinux]


- widespread network configuration daemon
- Managed through a CLI (nmcli), a text UI (nmtui), graphical UIs (GNOME, ...),
  configuration files, a web console (Cockpit) or the D-Bus interface.
  An API and a library (libnm) are also provided.
-ºNetworkManager allows users and applications to retrieve º
 ºand modify the network's configuration at the same time, º
 ºensuring a consistent and up-to-date view of the network.º

- NetworkManager philosophy:
  "...attempts to make networking configuration and operation as
    painless and automatic as possible..."
  When there is partial or no configuration, NetworkManager checks
  the available devices and tries its best to provide connectivity
  to the host.

- NetworkManager allows advanced network administrators to
  provide their own configuration.

ºNetworkManager Entitiesº
-ºdeviceº    : represents a network interface ("ip link")
               A NetworkManager device tracks:
             - If it is managed by NetworkManager
             - The available connections for the device
             - The connection active on the device (if any)
-ºconnectionº: represents the full configuration to
               be applied on a device and is just a list of
               properties.
               Properties belonging to the same configuration
               area are grouped into settings:
               Example:
             - ipv4 setting group:
               - addresses
               - gateway
               - routes

BºNETWORK SETUP == activate a connection with a deviceº

$ nmcli device   # ← list the devices detected by NetworkManager
(output will be similar to)
→ DEVICE   TYPE      STATE           CONNECTION
→ enp1s0   ethernet  connected       ether-enp1s0
→ enp7s0   ethernet  disconnected    --
→ lo       loopback  unmanaged  --

$ nmcli device \        # ← turn off management
  set enp1s0 managed no     for enp1s0 device
                            (change is not persisted
                             and ignored on reboot)

$ nmcli   # List detailed connections for devices
enp1s0: connected to enp1s0
      "Red Hat Virtio"
      ethernet (virtio_net), 52:54:00:XX:XX:XX, hw, mtu 1500
      ip4 default
      inet4 192.168.122.225/24
      route4 0.0.0.0/0
      route4 192.168.122.0/24
      inet6 fe80::4923:6a4f:da44:6a1c/64
      route6 fe80::/64
      route6 ff00::/8
...


$ nmcli connection # ← list the available connections
→ NAME         UUID          TYPE       DEVICE
→ ether-enp1s0 23e0d89e-...  ethernet   enp1s0
→ ...


To deconfigure the associated device, just instruct NetworkManager to put the connection down. For instance, to deactivate the ether-enp1s0 connection:

$ nmcli connection \  # ← deactivate connection
    down ether-enp1s0     (deconfigure associated
                          device)

$ nmcli connection \  # ← Reactivate
    up ether-enp1s0

$ nmcli connection \  # ← Show connection details
  show ether-enp1s0
º(man nm-settings for full info about available parameters)º

connection.id:                 Bºether-enp1s0º   ← human readable name
connection.uuid:                 23e0d89e-...
connection.stable-id:            --
connection.type:                 802-3-ethernet ← ethernet, wifi, bond, vpn, ...
connection.interface-name:       enp1s0         ← binds(restrict) to specific device
connection.autoconnect:          yes
...
ipv4.method                      auto           ← one of:
                                                  auto(DHCP)
                                                  manual(static IP in ipv4.addresses),
                                                  disabled, link local, shared
ipv4.addresses                   192.168.1.201/24
ipv4....
dhcp4.option[1]                  broadcast_address = 192.168.1.255
dhcp4....
[...]

$ nmcli connection \               ← Permanently change the connection
    modify Bºether-enp1s0º \
    ipv4.method manual \
    ipv4.addresses 10.10.10.1/24 \
    ipv4.gateway 10.10.10.254 \
    ipv4.dns 10.10.10.254
$ nmcli connection up ether-enp1s0 ← New settings will only be applied
                                     on connection (re)activation

$ nmcli connection \            # ← avoid activation by NetworkManager
    modify ether-enp1s0 \           (you would have to activate manually)
    connection.autoconnect no

$ nmcli con add \            ← Create new connection
  type ethernet \              - DHCP will be used if no IPv4 config
  ifname enp0s1 \                is provided.
  con-name Oºenp0s1_dhcpº \      (ipv4.method defaults to auto)
  autoconnect no               - Run interactively with '--ask' option
$ nmcli con                  ← Verify new connection
→  NAME         UUID         TYPE     DEVICE
→  ...
→Oºenp0s1_dhcpº 64b499cb-... ethernet --
→  ...

$ nmcli con edit   ← Interactive editor-mode with inline help will open


TODO:
- "Many features deserve separate blog posts"
  - dispatcher scripts
  - connectivity checkers
  - split DNS
  - MAC address randomization
  - hotspot configuration
  - automatic configuration.

ºCheck Network statusº @[https://linux.die.net/man/1/nm-online]
~:$ nm-online -q   ←     --timeout=10       (Defaults to 30secs)
~:$ echo $?              --exit             Exit immediately if nm is not running or connecting
→ 0                      --quiet            Don't print anything
                         --wait-for-startup Wait for nm startup instead of a connection

Troubleshooting ºbypass network managerº by manually launching the dhcp client:
$ sudo dhclient eth0    # ← execute 'ip link' to list all network devices

RºERRORº: Network disconnects after a few seconds:
  Try disabling the ModemManager service:
  $ sudo systemctl stop    ModemManager.service
  $ sudo systemctl disable ModemManager.service
basic network audit/control
ºSocket Statistics (ss)º
man 8 ss  (ss: socket statistics)

ss USAGE EXAMPLES

º$ sudo ss -ntlpº
^^^^^^^^^^^^^^^
Display TCP (-t) ports listening (-l) for remote
request and show also the process that
opened the port (-p): -p requires sudo-permissions
-n: Do not reverse-resolve IPs to DNS names
(example output)
→ State      Recv-Q Send-Q  Local Address:Port   Peer Address:Port
→ LISTEN     0      128     *:80                 *:*                users:(("lighttpd",pid=23515,fd=4))
→ LISTEN     0      128     *:22                 *:*                users:(("sshd",pid=571,fd=3))

º$ sudo ss -t -a -p º
^^^^^^^^^^^^^^^^^^
Display all (-a) non-listening (no -l provided)
TCP (-t) sockets and processes using them (-p)
→ STATE       ... ADDRESS:PORT              PEER ADDRESS:PORT
→ ESTABLISHED ... 127.0.0.1:postgres        127.0.0.1:46404        users:(("postgres",pid=64032,fd=11))
→ ESTABLISHED ... 10.0.0.5:idonix-metanet   81.61.178.46:51003     users:(("sshd",pid=61411,fd=3),("sshd",pid=61407,fd=3))
→ ESTABLISHED ... 127.0.0.1:37200           127.0.0.1:50004        users:(("sshd",pid=61411,fd=10))
→ ESTABLISHED ... 127.0.0.1:postgres        127.0.0.1:45086        users:(("postgres",pid=43553,fd=11))
→ TIME_WAIT   ...

º$ ss -o state established '( dport = :ssh or sport = :ssh )'º
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Display all established ssh connections.

º$ ss -x src /tmp/.X11-unix/*º
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Find all local processes connected to X server.


º$ ss -o state fin-wait-2 '( sport =  :http  or  sport  =  :https  )'  dst 193.233.7/24º
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
List all the TCP sockets in state FIN-WAIT-2 from our Apache (http/https)
to network 193.233.7/24 and look at their timers.


ºDISPLAY IP Routing Tableº
$ ip route list
→ ºdefault via 10.0.0.1 dev eth0º
→ 10.0.0.0/24     dev eth0    proto kernel scope link src 10.0.0.5
→ 169.254.0.0/16  dev eth0                 scope link metric 1002
→ 172.17.0.0/16   dev docker0 proto kernel scope link src 172.17.0.1
→ 168.63.129.16   via 10.0.0.1 dev eth0 proto static
→ 169.254.169.254 via 10.0.0.1 dev eth0 proto static
→ ...

ºDISPLAY NETWORK INTERFACES (link layer)º
$ ip link
1: lo:     ˂LOOPBACK,           UP,LOWER_UP˃ ...  state UNKNOWN mode DEFAULT ...
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0:   ˂BROADCAST,MULTICAST,UP,LOWER_UP˃ ...  state UP      mode DEFAULT ...
    link/ether    00:0d:3a:26:bb:2b brd ff:ff:ff:ff:ff:ff
...

ºSNIFF network trafficº
$ sudo tcpdump -i eth0 port 8090  -n -A  ← sniff IP traffic on network interface eth0
                                        with IP packets to/from port 8090
                                        -A: Show in text/ASCII format
                                        -n: Do not convert IPs to host-names
                                            (Avoiding slow dns reverse lookups)

ºShow IP route from source to destination ("hop" path)º
$ sudo traceroute "destination_IP_or_host" ← (attempts to) show the route of a packet.

ºExamine open ports in remote machineº
$ sudo nmap -n remote_machine     ← (try to) query remote machine for open ports
basic network traffic shaping
ºNetwork interface shaping with Wondershaperº
@[https://github.com/magnific0/wondershaper/]
$ sudo ./wondershaper -a eth0 -u 4096 -d 8192 ← limit upload: 4Mbps, download: 8Mbps (values in Kbps)

ºProcess network shaping with Firejailº
@[https://www.pcsuggest.com/bandwidth-traffic-shaping-in-linux-with-firejail/]
$ firejail  --net=enp2s0 firefox
$ firejail --list | grep 'firefox' | awk -F: '{print$1}'
$ firejail  --bandwidth=PID set interface-name down-speed up-speed
Internet Utility commands
$ ºhost (ip_address|domain_name)º  ← Performs a lookup of an internet address
                                     (using the Domain Name System, DNS)

$ ºdigº    www.amazon.com  ←         query to DNS
$ dig -x 10.10.10.10       ← reverse query to DNS
(check man page for more options)

$ ºwgetº www.myDomain.com/myPage  ← HTTP client
    Options:
    -m: archive/"m"irror a single web-site
    -nc: (no clobber) avoid overwriting local files

$   wget --spider \    ← parse bookmarks.html for links
    --force-html \
    -i bookmarks.html
(see man page for more info)

$ ºcurlº  ← Script oriented HTTP client.
          It can access dictionary servers (dict),
          ldap servers, ftp, http, gopher, ...

$ curl -M : To access the full/huge manual
$ curl -u username:password http://www.placetodownload/file
Nethogs bandwidth per ps
Nethogs is a command-line utility for Linux that displays the network
 bandwidth used by each application or process in real time. It is useful
 in situations when a certain process uses up too much of the bandwidth
 and needs to be caught.

$ sudo nethogs
...
  PID USER     PROGRAM                      DEV        SENT      RECEIVED
2367  enlighten/opt/google/chrome/chrome    eth0       3.341      20.948 KB/sec
2196  enlighten/usr/lib/firefox-7.0.1/fire  eth0       0.871       0.422 KB/sec
3723  enlighten/usr/bin/pidgin              eth0       0.028       0.098 KB/sec
2206  enlighten/usr/bin/skype               eth0       0.033       0.025 KB/sec
2380  enlighten/usr/lib/chromium-browser/c  eth0       0.000       0.000 KB/sec
0     root     unknown TCP                             0.000       0.000 KB/sec

  TOTAL                                                4.274      21.493 KB/sec"

TODO:
@[https://www.binarytides.com/linux-commands-monitor-network/]
Remote access
ssh (text console)
- The ssh protocol is the
  standard way to access Linux remotely.
  A running "sshd" (ssh daemon) must be installed and running
  on the remote machine.

ºQuickly connect to a remote machine running sshdº
  (Remote machine must allow password logins)
  $ ssh myUser@myRemoteMachine
 → myUser@myRemoteMachine's password:
  (enter password to log-in to text terminal)

ºAdvanced connection optionsº
Using ~/.ssh/config is the recommended option for all
but the simplest scenarios. The ~/.ssh/config file allows
for complex tuning: TCP tunneling, proxy bypass,
...
Example ~/.ssh/config
01 Host remoteHostAlias1 remoteHostAlias2 ... remoteHostAliasN
02    HostName 10.230.11.10
03    ProxyCommand /usr/bin/corkscrew 10.10.10.10 8080 %h %p
04    Port 12345
05    User myRemoteUser
06    LocalForward   5555 localhost:3333
07    RemoteForward 13389 localhost:3389
08    TCPKeepAlive true

Line 01 defines different aliases that can be used to refer
to the remote machine.
Line 02 defines the real hostname or IP of the remote sshd server.
Line 03 is an example of a command that can be used to
        bypass local firewalls using our company HTTP proxy
Line 04 allows connecting to a non-default ssh-server port
        (It's always recommended NOT to use the default port 22)
Line 05 Indicates our remote user id (needed if it's different
        to our local one).
Line 06 forwards any local IP request to our local port 5555 to
        the remote host and port (localhost:3333) accessible
        in the remote machine     ^^^^^^^^^
                                  localhost as seen by the
                                  remote machine. That is
                                  the remote machine itself

        Ex: If the remote machine has a web server configured
        to listen for connections on port 3333 only locally, then
        the previous option allows to also connect to that
        server once the ssh authentication has worked properly.
        The remote web server will see a local-connection from
        its local sshd server on the remote machine.
        The local ssh client and remote ssh server will forward
        any local request to port 5555 to the port 3333 on the
        remote machine.
Line 07 Any program running on the remote machine doing requests
        to 13389 will be forwarded to our localhost:3389 port
        in our local machine.
        In this example localhost:3389 is the address of the
        local Windows Remote Desktop service. This allows
        connecting to our local Remote Desktop from the remote
        server by connecting to port 13389 on the remote server
        (once the ssh client has authenticated properly to the
        sshd server).

ºPasswordless authenticationº
- Useful to execute remote task automatically.
# STEP 01: generate local private secret (key) and associated public key.
$ ssh-keygen
(WARN: leave the passphrase blank to allow for automated tasks)


# STEP 02: Copy associated public key to remote machine
#          at  /home/myRemoteUser/.ssh/authorized_keys
$ ssh-copy-id myRemoteUser@myRemoteMachine

Now it must be possible to ssh into the remote machine with no password
$ ssh myRemoteUser@myRemoteMachine

Troubleshooting passwordless access:
local  $ chmod go-rw ~/.ssh/*  # Fix permission. ssh is paranoid about it.
remote $ chmod go-rw ~/.ssh/*  # Fix permission. ssh is paranoid about it.


See man 1 ssh and
    man 8 sshd
for the full list of options. SSH, being the core method to securely access
Linux servers, has a lot of simple and advanced options, and different
setups use different (paranoid) security options.

See also:
@[https://www.reddit.com/r/linuxadmin/comments/b9c7lw/using_a_yubikey_as_smartcard_for_ssh_public_key/]
VNC
Remote
Desktop
- The VNC protocol allows launching a graphical session on a
  remote system using a VNC server and accessing it remotely
  using a VNC client.

- Running programs in the remote machine will "draw" to a
  local memory buffer shared by the VNC server.
  When a remote VNC client connects, the VNC server will
  transmit to the client the graphic buffer and finally
  the VNC client will show the buffer in the local display.
- Modern VNC client←→server protocols are highly optimized
  to save bandwidth and allow high-resolution displays
  with around 1 Megabit/s of bandwidth.

- There is no limit to the number of remote VNC servers that
  can run in parallel, other than the available memory.
  This github repository offers an example of how to run many different
  VNC servers in parallel.
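Ex (sketch, assuming the TigerVNC implementation; host/user names are placeholders):
remote $ vncserver :1 -geometry 1440x900 -depth 24   # ← start display :1 (TCP port 5901)
remote $ vncserver -kill :1                          # ← stop the server for display :1
local  $ vncviewer myRemoteMachine:1                 # ← connect to remote display :1
local  $ vncviewer -via myUser@myRemoteMachine localhost:1
                                                     # ← same, but tunneled through ssh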
Knockd ("Invisible Linux")
REF(Michael Aboagye @ MakeTechEasier.com)
UUID: 96493a84-bac4-49a0-b553-a8fee7310c1c

- port-knock server:
  - It listens to all traffic on an network interface
    waiting for special sequences of port-hits.

- clients (telnet, socat, ...) initiate port-hits by
  sending a TCP or UDP packet to a port on the server.

ºPRE-SETUPº
- Install and Configure Iptables
  $ sudo apt-get install iptables iptables-persistent
                                  ^^^^^^^^^^^^^^^^^^^
                                  takes over automatic
                                  loading of saved tables


º Knockd Installº
$ sudo apt-get install knockd  # apt like
$ sudo dnf     install knockd  # rpm like

ºhide ssh service until "Knocked"º
$ iptables -A INPUT \
        -m conntrack \                        STEP 1
        --ctstate ESTABLISHED,RELATED \     ← Allow established/related connections
        -j  ACCEPT                          ← Allow

$ iptables  -A  INPUT \                       STEP 2
        -p tcp  --dport  22 \               ← block incoming con. to 22 (SSH)
        -j  REJECT

                                              STEP 3
$ netfilter-persistent save                 ← save the firewall rules
$ netfilter-persistent reload


Configure  Knockd
$ sudo "vim" /etc/knockd.conf
  | [options]
  |   UseSyslog
  |
  | [openSSH]
  |   sequence = 7000,8000,9000
  |   seq_timeout = 5
  |   command = /sbin/iptables -A INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
  |   tcpflags = syn
  |
  | [closeSSH]
  |   sequence = 8000,9000,7000
  |   seq_timeout = 5
  |   command = /sbin/iptables -D INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
  |   tcpflags = syn

- command will be executed once the client-sequence  is recognised.
- tcpflags must be set on the client "knocks"

NOTE: The iptables "-A" flag (append) causes the rule to be appended
to the end of the INPUT chain, after the REJECT rule, so incoming SSH
connections keep being rejected. Replace it by:
  command = /sbin/iptables -I INPUT 1 -s %IP% -p tcp --dport 22 -j ACCEPT

  "-I" flag (insert) ensures that the new rule is added to the top of the
  input chain to accept ssh connections.

ºEnable Knockd Serviceº
- Add/edit the START_KNOCKD option in /etc/default/knockd to look like:
  START_KNOCKD=1
$ sudo systemctl enable knockd
$ sudo systemctl start knockd

ºTestingº

$ knock -v my-server-ip  7000 8000 9000
$ ssh my-server-ip
...
$ knock -v my-server-ip 9000 8000 7000
System Info
Basic info
$ ºuptimeº  ← shows how long the computer has been "up" since the last reboot,
            the number of users and the processor load
$ ºdateº    ← current date/time
$ ºcalº     ← display calendar
$ ºunameº   ← print information on the system such as OS type, kernel version...
    -a --- print all the available information
    -m --- print only information related to the machine itself
    -n --- print only the machine hostname
    -r --- print the release number of the current kernel
    -s --- print the operating system name
    -p --- print the processor type

$ ºcat /etc/*release* | sort | uniqº ← Shows OS identification
                                       (Distribution, major, minor,patch version, flavour, ...)
$ ºfreeº     ← memory report in KiB (use -h for human-readable units, -g for GiB)
              total        used        free      shared  buff/cache   available
Mem:        6102476      812244     4090752       13112     1199480     4984140
Swap:       2097148           0     2097148
$ ºgetconf -aº  ← Get all system config. parameters
→ ...
→ PAGESIZE                           4096
→ ..
→ ULONG_MAX                          18446744073709551615
→ USHRT_MAX                          65535
→ ...
→ _POSIX_...
→ ...
→ LFS_CFLAGS
→ LFS_LDFLAGS
→ LFS_LIBS
→ LFS_LINTFLAGS
→ LFS64_CFLAGS                       -D_LARGEFILE64_SOURCE
→ LFS64_LDFLAGS
→ LFS64_LIBS
→ LFS64_LINTFLAGS                    -D_LARGEFILE64_SOURCE
→ ...
→ GNU_LIBC_VERSION                   glibc 2.28
→ GNU_LIBPTHREAD_VERSION             NPTL 2.28
→ POSIX2_SYMLINKS                    1
→ LEVEL1_ICACHE_SIZE                 32768
→ LEVEL1_ICACHE_ASSOC                8
→ LEVEL2_CACHE_SIZE                  262144
→ LEVEL2_CACHE_ASSOC                 8
→ LEVEL2_CACHE_LINESIZE              64
→ LEVEL3_CACHE_SIZE                  3145728
→ LEVEL3_CACHE_ASSOC                 12
→ LEVEL3_CACHE_LINESIZE              64
→ LEVEL4_CACHE_SIZE                  0
→ LEVEL4_CACHE_ASSOC                 0
→ LEVEL4_CACHE_LINESIZE              0
→ ...



ºdmidecode typesº
 0   BIOS                 11   OEM Strings                       22   Portable Battery          33   64-bit Memory Error
 1   System               12   System Configuration Options      23   System Reset              34   Management Device
 2   Baseboard            13   BIOS Language                     24   Hardware Security         35   Management Device Component
 3   Chassis              14   Group Associations                25   System Power Controls     36   Management Device Threshold Data
 4   Processor            15   System Event Log                  26   Voltage Probe             37   Memory Channel
 5   Memory Controller    16   Physical Memory Array             27   Cooling Device            38   IPMI Device
 6   Memory Module        17   Memory Device                     28   Temperature Probe         39   Power Supply
 7   Cache                18   32-bit Memory Error               29   Electrical Current Probe  40   Additional Information
 8   Port Connector       19   Memory Array Mapped Address       30   Out-of-band Remote Access 41   Onboard Devices Extended Information
 9   System Slots         20   Memory Device Mapped Address      31   Boot Integrity Services   42   Management Controller Host Interface
10   On Board Devices     21   Built-in Pointing Device          32   System Boot

Show physical mem.banks
$ sudo dmidecode -t 17
| OUTPUT PHYSICAL MACHINE                     | OUTPUT VIRTUAL MACHINE (Manufacturer: QEMU, ...)
| # dmidecode 3.2                             | # dmidecode 3.0
| Getting SMBIOS data from sysfs.             | Getting SMBIOS data from sysfs.
| SMBIOS 2.7 present.                         | SMBIOS 2.8 present.
|                                             |
| Handle 0x0008, DMI type 17, 34 bytes        | Handle 0x1100, DMI type 17, 40 bytes
| Memory Device                               | Memory Device
|     Array Handle: 0x0007                    |         Array Handle: 0x1000
|     Error Information Handle: Not Provided  |         Error Information Handle: Not Provided
|     Total Width: 64 bits                    |         Total Width: Unknown
|     Data Width: 64 bits                     |         Data Width: Unknown
|     Size: 8192 MB                           |         Size: 4096 MB
|     Form Factor: SODIMM                     |         Form Factor: DIMM
|     Set: None                               |         Set: None
|     Locator: ChannelA-DIMM0                 |         Locator: DIMM 0
|     Bank Locator: BANK 0                    |         Bank Locator: Not Specified
|     Type: DDR3                              |         Type: RAM
|     Type Detail: Synchronous                |         Type Detail: Other
|     Speed: 1333 MT/s                        |         Speed: Unknown
|     Manufacturer: Samsung                   |         Manufacturer: QEMU
|     Serial Number: 939BED25                 |         Serial Number: Not Specified
|     Asset Tag: None                         |         Asset Tag: Not Specified
|     Part Number: M471B1G73DB0-YK0           |         Part Number: Not Specified
|     Rank: Unknown                           |         Rank: Unknown
|     Configured Memory Speed: 1333 MT/s      |         Configured Clock Speed: Unknown
|                                             |         Minimum Voltage: Unknown
| Handle 0x0009, DMI type 17, 34 bytes        |         Maximum Voltage: Unknown
| Memory Device                               |         Configured Voltage: Unknown
|     Array Handle: 0x0007                    |
|     Error Information Handle: Not Provided  |
|     Total Width: 64 bits                    |
|     Data Width: 64 bits                     |
|    ºSize: 8192 MB            º              |
|    ºForm Factor: SODIMM      º              |
|    ºSet: None                º              |
|    ºLocator: ChannelB-DIMM0  º              |
|    ºBank Locator: BANK 2     º              |
|    ºType: DDR3               º              |
|    ºType Detail: Synchronous º              |
|    ºSpeed: 1333 MT/s         º              |
|    ºManufacturer: 04CB       º              |
|     Serial Number: A8750300                 |
|     Asset Tag: None                         |
|     Part Number:                            |
|     Rank: Unknown                           |
|     Configured Memory Speed: 1333 MT/s      |
vmstat(global stats)
man 8 vmstat : displays real-time stats of procs., mem., paging, block IO, traps, disks and cpu activity.
Usage:
  $ vmstat [options] [delay [count]]
The first report produced gives averages since the last reboot.
Additional reports give information on a sampling period of length delay.
The process and memory reports are instantaneous in either case.

 -a --active: Display active and inactive memory
              FROM:REF
              * active memory are pages which have been accessed "recently"
              * inactive memory are pages which have not been accessed "recently"
              """A high ratio of active to inactive memory can indicate
                 memory pressure, but that condition is usually accompanied by
                 paging/swapping which is easier to understand"""
 -f --forks : display number of forks (fork, vfork, clone) since boot
 -m --slabs : Display slabinfo
              (memory assigned to kernel objects)
 -s --stats : Displays a table of various event counters and memory
              statistics (without repeating)
 -d --disk  : Report disk statistics
 -D --disk-sum: Report some summary statistics about disk activity.
 -p --partition device: Detailed statistics about partition
 -S --unit  : 1000 (k), 1024 (K), 1000000 (m), 1048576 (M) bytes.
              swap (si/so) and block (bi/bo) not affected
 -t --timestamp: Append timestamp to each line
 º-w --wide  : Wide output modeº (Recommended)
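 Ex:
 $ vmstat -w 2 5      # ← wide output, one sample every 2 seconds, 5 samples
                          (the first line reports averages since boot)
 $ vmstat -w -S M 1   # ← same, memory columns in MiB, one sample per second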

OUTPUT FIELD DESCRIPTION
ºVM MODEº                                         | ºDISK MODE(--disk)º
Procs                                           | Reads
  r: # of runnable procs(running|waiting for)   |   total  :   Total reads completed successfully
  b: # of processes Rºin uninterruptible sleepº |   merged : grouped reads(resulting in 1 I/O)
                                                |   sectors: Sectors read successfully
Memory *1                                       |        ms: milliseconds spent reading
(inacti,cache,buff can be freed if needed)     |
  swpd  : virtual memory used                   |
  free  : idle (ready to use) memory            | Writes
  buff  : memory used as disk buffers           |     total:   Total writes completed successfully
  cache : memory used as cache                  |    merged: grouped writes (resulting in 1 I/O)
  inact : inactive memory (-a option)           |   sectors: Sectors written successfully
          (still cached for possible reuse)     |        ms: milliseconds spent writing
  active: memory Used by processes              |
                                                | IO
 Swap                                           |   cur: I/O in progress
   si: Amount of memory swapped in from disk(/s)|     s: seconds spent for I/O
   so: Amount of memory swapped to disk(/s)     +--------------------------------------------------------

IO                                              | ºDISK PARTITION MODE (--partition)º
  bi: Blocks received from block device         |           reads: Total # of reads issued to part.
  bo: Blocks     sent   to block device         |    read sectors: Total read sectors for partition
                                                |          writes: Total # of writes issued to part.
System                                          |requested writes: Total # of write requests made for part
  in: interrupts per second, including the clock+---------------------------------------------------------
  cs: Rºcontext switches per secondº
                                                | ºSLAB MODE (--slabs)º
CPU (percentages of total CPU time)             |   cache: Cache name
  us: Time spent running non-kernel code(user  )|     num: # of currently active objects
  sy: Time spent running     kernel code(system)|   total: Total # of available objects
  id: Time spent idle                           |    size: Size of each object
  wa: RºTime spent waiting for IOº              |   pages: # of pages with at least one active object
  st: Time stolen from a virtual machine        +---------------------------------------------------------

ºAll linux blocks are currently 1024 bytes.º



* 1: REF
    - buffers are associated with a specific block device,
      caching filesystem metadata (dir contents, file permissions),
      as well as tracking in-flight pages (what's being written
      to or read from a particular block device).
      The Kernel tries to cache just enough buffers for predicted
      "next-reads" to block devices.

    - cache contains real application file data (file content).
      The Kernel tries to cache as much as possible until there is
      no more free memory for apps.
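
    Ex: free -w reports buffers and cache in separate columns, matching the
    distinction above:
    $ free -w -h      # ← -w: wide mode (split buffers/cache), -h: human units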
Accurate mem.use
REF

PIP INSTALL:
$ sudo pip install ps_mem

USAGE:
ps_mem [-h|--help] [-p PID,...] [-s|--split-args] [-t|--total] [-w N]
       [-d|--discriminate-by-pid] [-S|--swap]

Ex 1:
  $ sudo ps_mem
  →  Private  +   Shared  =  RAM used       Program
  →
  →  34.6 MiB +   1.0 MiB =  35.7 MiB       gnome-terminal
  → 139.8 MiB +   2.3 MiB = 142.1 MiB       firefox
  → 291.8 MiB +   2.5 MiB = 294.3 MiB       gnome-shell
  → 272.2 MiB +  43.9 MiB = 316.1 MiB       chrome (12)
  → 913.9 MiB +   3.2 MiB = 917.1 MiB       thunderbird
  → ---------------------------------
  →                           1.9 GiB
  → =================================
  →

Ex 2: Show only ps_mem for current $USER:
  ~ sudo ps_mem -p $(pgrep -d, -u $USER)
  → ...

Ex 3: Summarize total RAM usage per user:

  for i in $(ps -e -o user= | sort | uniq); do
    printf '%-20s%10s\n' $i $(sudo ps_mem --total -p $(pgrep -d, -u $i))
  done
SysV shared memory
- SysV shared memory segments are accounted as a cache, though they
  do not represent any data on the disks.

To check the size of shared memory segments:
$ ipcs -m    # ← check the "bytes" column
BACKUPS
tar (tape-archive)
man 1 tar
- standard tool for archiving: saves one or many files and directories to a single
  tape or disk archive.
- Individual files can be restored from the .tar file when needed.

Examples:
ºCreate archive file:º
$ tar czf _home_myUser_myProject_01.tar.gz /home/myUser/myProject
      ↑↑↑ ^^^^^^^^^^^^^                    ^^^^^^^^^^^^^^^^^^^^^^
      │││ prefixing with absolute          directory to archive
      │││ path is just an optional
      │││ convention.
      │││
      ││└── name of output file
      │└─── use gzip for compression (.gz extension)*1
      └──── compress
*1: bzip2 ('j' option instead of 'z' and '.bz2' extension instead of gz)
    is also quite common, with better compression, but also more CPU intensive.

ºRestore backup from archive file:º
$ cd /home/myUser
$ tar xzf _home_myUser_myProject_01.tar.gz
      ↑↑↑
      ││└── name of  input file
      │└─── use gzip for de-compression
      └──── de-compress

ºList contents of archive:º
$ tar tzf _home_myUser_myProject_01.tar.gz
      ↑↑↑
      ││└── name of  input file
      │└─── use gzip for de-compression
      └──── ºlistº

Remote Incremental
- EasyUp: KISS incremental remote backup around rsync+ssh
@[https://github.com/earizon/easyup]
- Rsnapshot: filesystem snapshot utility on top of rsync.
@[http://rsnapshot.org/]
  rsnapshot makes it easy to take periodic snapshots of local machines, and of remote machines over ssh.
  The code makes extensive use of hard links whenever possible, to greatly reduce the disk space required,
  and of rsync to save bandwidth (back up only changes). (See the plain-rsync sketch at the end of this list.)
- Live backups with inotify + rsync + bash: Backup on "real-time changes"
@[https://linuxhint.com/inotofy-rsync-bash-live-backups/]
- Bacula:
@[http://www.bacula.org/]
  """Bacula is a set of Open Source, computer programs that permit to manage backup,
     recovery, and verification of computer data across a network of computers of different
     kinds,  offering many advanced storage management features that make it
     easy to find and recover lost or damaged files."""
     -@[http://www.bacula.org/9.0.x-manuals/en/main/index.html]
     ºDirector Daemonºsupervises all the backup, restore, verify and archive operations.
      Sysadmin uses Director to schedule backups and to recover files..
     ºConsole serviceºallows the administrator or user to communicate with the Director
      (three versions: text-based, QT-based, wxWidgets)
     ºFile DaemonºIt's installed on the machine to be backed up and is responsible for
      providing the file attributes and data when requested by the Director
      as well as for the file system dependent part of restoring the file attributes and data
      during a recovery operation.
     ºStorage daemonsºare software programs in charge of storage and recovery of the
      file attributes and data to the physical backup media or volumes. In other words, it is
      responsible for reading and writing your tapes (or other storage media, e.g. files)
     ºCatalog Servicesºare responsible for maintaining the file indexes and
      volume databases for all files backed up allowing sysadmin or user to
      quickly locate and restore any desired file. The Catalog services sets
      Bacula apart from simple backup programs like tar and bru, because the catalog
      maintains a record of all Volumes used, all Jobs run, and all Files saved, permitting
      efficient restoration and Volume management. Bacula currently supports three different
      databases, MySQL, and PostgreSQL one of which must be chosen when building Bacula.
     ºMonitor ServiceºAllows the administrator or user to watch current status of Directors,
      File Daemons and Bacula Storage Daemons. Currently, only a GTK+ version is available.

- Simple remote backups with ssh
  $ tar cjf - myDirToBackup \       # local
    | ssh myUser@myRemoteMachine \  # ssh pipe
    "cd myBackupPath ⅋⅋ tar -xjf -" # remote

- "Real time" backup with rsync and bash
@[https://github.com/Leo-G/backup-bash]
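
A minimal hedged sketch of the hard-link snapshot technique rsnapshot builds on (paths and dates are illustrative):
  $ rsync -a /home/myUser/myProject/ /backups/2019-01-01/   ← day 1: full copy
  $ rsync -a --link-dest=/backups/2019-01-01 \
          /home/myUser/myProject/ /backups/2019-01-02/      ← day 2: unchanged files become hard links
                                                              to day 1, so only changes use new disk space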
multi-core
compression
tools
@[https://www.linuxlinks.com/best-linux-multi-core-compression-tools/]
Zstandard
@[https://www.2daygeek.com/zstandard-a-super-faster-data-compression-tool-for-linux/]
super-fast compression tool
Audit
basic user audit
$ who     ← Displays current users logged into the system and the  logged-in time
$ w       ← Displays who is logged into the system and ºwhat they are doingº (procs. they are running).
$ users   ← Displays only user names who are currently logged in
$ last    ← Displays records of user login times, ºremote IP or TTYº, reboot times, ...
$ lastlog ← Displays list of users and what day/time they logged into the system.
$ whoami  ← Tells the user who they are currently logged in as
$ ac      ← Tells how much time users have been logged in.
            (sudo apt install acct, sudo dnf install psacct, ...)
            It pulls its data from the current wtmp file.
            Ex:
            $ ºacº
            → total     1261.72
            $ ºac -pº  ← total hours by user
            → shark        5.24
            → nemo         5.52
            → shs       1251.00
            → total     1261.76
            $ ºac -d | tail -10º  ← daily counts of how many hours users were logged in
            → Jan 11  total        0.05
            → Jan 12  total        1.36
            → Jan 13  total       16.39
            → Jan 15  total       55.33
            → Jan 16  total       38.02
            → Jan 17  total       28.51
            → Jan 19  total       48.66
            → Jan 20  total        1.37
            → Jan 22  total       23.48
            → Today   total        9.83

@[https://asciinema.org/] TODO
 """ Record and share your terminal sessions, the right way.
     Forget screen recording apps and blurry video. Enjoy a
     lightweight, purely text-based approach to terminal recording.  """

Audit framework
Video REF
ºARCHITECTURE:º                                          ºLOG "TOPICS":º
                                                          ---------------------
 audit.rules      auditd.conf                             user
     +              +                                     group
     |              |                                     Audit ID
     |              |                                     Remote Hostname
     |              |                                     Remote Host Address
     |              |  +-→ audispd                        System call
     v              v  |                                  System call arguments
  auditctl +---→Bºauditdº           +-→ Gºaureportº       File
     |              ^  |            |                     File Operations
     |              ║  +-→ /var/log/audit/audit.log       Session
     |              ║               |                     Success|Failure
     |              ║               +-→ Gºausearchº
     |              ║
     +-------+      ║
             |      ║
Running      |      ║      autrace
process      |      ║        |
    +   +----|----------+    |
    |   |    +-→Oºauditº|    |
    +--→+           ^   +←---+
        |           ║   |                  Lecture:
        | KERNEL════╝   |                  ═══ : control
        +---------------+                  --- : data-flow

  - Bºauditd daemonº centralize log writing to disk/network

  - Oºaudit module@KERNELº handles the audit rules
    (auditctl is in charge of passing audit.rules to audit@KERNEL).
    It also intercepts system calls.

  GºCLI UTILITIESº:
  |- Gºaureportº creates human-readable reports. Useful options:
  |  --summary
  |  --failed
  |  --start, --end (aureport understands 'today', 'yesterday', 'now',
  |                  'recent', 'this-week', 'this-month', 'this-year')
  |  --auth, --avc, --login, --user, --executable, --syscall
  |
  |- Gºausearchº "drills deeper" into the details of a
  |   particular event


  - autrace is the "ptrace" or "strace" for audit events
    (audit the audit)

  - Audispd (dispatcher) provides a plugin system to dispatch
    events to other places (remote machines, Prometheus, ...)
  In RHEL 8+ it has been integrated into auditd. REF
"Full" Journey
-ºINITIAL SETUP:º
 1) Audit Daemon Configuration -→  /etc/audit/auditd.conf
    Defines how the audit system works once the daemon is running.
    Default settings are usually good enough.
 2) Audit Rules:
  RºWARN: No default rules are setº
  RºWARN: First match wins!º
    - Used to define ºwhat we are interested in auditingº
    - three basic types of audit rules:
      - Basic audit system parameters
      - File and directory watches
      - System call audits
    Ex. Audit rules
    | # basic audit system parameters
    | -D   ← Delete all previous rules (recommended)
    | -b   ← How many buffers do we want to have
    | -f 1 ← 0: ignore failures
    |        1: syslog
    |        2: panic
    | -e 1 ← 0: disable logging
    |        1: enable
    |        2: immutable (not even root can change it without a reboot)
    | # some file and dirs. watches
    | -w /home/myUser/MySecrets  -p rxwa
    | -w /sbin/auditctl          -p x
    | # Example system call rule
    | -a entry,always -S umask
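
    A minimal hedged sketch of adding a watch rule at runtime and inspecting the events it
    produces (the watched path and key name are illustrative):
    $ sudo auditctl -w /etc/hosts -p wa -k hosts-watch  ← watch writes/attribute changes, tag events with key 'hosts-watch'
    $ sudo auditctl -l                                  ← list the currently loaded rules
    $ sudo touch /etc/hosts                             ← trigger an event
    $ sudo ausearch -k hosts-watch -i                   ← find the events by key (-i: interpret numeric fields)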

 3) Audispd Daemon Configuration (now part of auditd)

ºDaily useº
Normally you find an event class of interest with aureport and then
drill down into the nitty-gritty with ausearch

$ºsudo aureport --summaryº
Summary Report
======================
Range of time in logs:    12/31/1969 19:00:00.000 - 04/20/2019 07:42:51.715
Selected time for report: 12/31/1969 19:00:00     - 04/20/2019 07:42:51.715
Number of changes in configuration             : 971
Number of changes to accounts, groups, or roles: 36
Number of logins                               : 2992
Number of failed logins                        : 497
Number of authentications                      : 3161
Number of failed authentications               : 438
Number of users                                : 8
Number of terminals                            : 30
Number of host names                           : 16
Number of executables                          : 24
Number of commands                             : 14
Number of files                                : 2
Number of AVC's                                : 3
Number of MAC events                           : 3052
Number of failed syscalls                      : 0
Number of anomaly events                       : 92
Number of responses to anomaly events          : 0
Number of crypto events                        : 52922
Number of integrity events                     : 0
Number of virt events                          : 0
Number of keys                                 : 0


$ºsudo aureport --login --failedº
04/15/2019 16:54:29 (unknown) 211.252.85.100 ssh /usr/sbin/sshd no 21896
04/15/2019 14:35:25 root      211.252.85.100 ssh /usr/sbin/sshd no 20413
04/07/2019 20:10:13 (unknown) 51.68.35.150   ssh /usr/sbin/sshd no 11272
04/07/2019 19:59:25 apache    51.68.35.150   ssh /usr/sbin/sshd no 10312
...

$ sudo aureport --syscall --failed
...
   date        time    syscall    pid     comm      auid   event
1. 04/09/2019  .....   87         4006    semodule  11112 º53039º

$ sudo ausearch -i -a º53039º
.....

ºCREATE A PLOT OF EVENTS WITH:º
$ sudo aureport -e -i --summary | ºmkbar eventsº

ºShow relationship among users,system calls,... with mkgraph:º
$ sudo aureport -s -i | awk '/^[0-9]/ { printf "%s %s\n", $6, $4 }' | \
  sort | uniq |ºmkgraphºsyscall-vs-program


ºmanº:
man 8 auditd       man 8 ausearch
man 5 auditd.conf  man 8 aureport
man 8 auditctl     man 5 audispd.conf
man 8 autrace      man 8 audispd

ºOther documentationº:
/usr/share/doc/audit:
 Contains README with basic design information and
 sample .rules files for different scenarios:
 -   capp.rules: Controlled Access Protection Profile (CAPP)
 -   lspp.rules: Labeled Security  Protection Profile (LSPP)
 - nispom.rules: National Industrial Security Program Operating Manual Chapter 8 (NISPOM)
 -   stig.rules: Secure Technical Implementation Guide (STIG)

Tiger
Kiss-IDS
@[https://www.nongnu.org/tiger/]
(Unix/Linux)
- open source shell-scripts collection for security audit and host intrusion detection.
- Very extensible.
- It scans system configuration files, file systems, and user configuration files
  for possible security problems and reports them.

ºInstallº
- Debian/Ubuntu/Mint/...
  $ sudo apt install tiger
  (Output will be similar to)
  →┌─────────────────────────Tripwire Configuration ├────────────────────────────┐
  →│ Tripwire uses a pair of keys to sign various files, thus ensuring their ... │
  →│ ...                                                                         │
  →│                                                                             │
  →│        ˂Yes˃                                                          ˂No˃  │
  →└─────────────────────────────────────────────────────────────────────────────┘
  (Pressing Yes)
  →┌────────────────────────┤ Tripwire Configuration ├────────────────────────┐
  →│                                                                          │
  →│ Tripwire keeps its configuration in a encrypted database that is         │
  →│ generated, by default, from /etc/tripwire/twcfg.txt                      │
  →│                                                                          │
  →│ Any changes to /etc/tripwire/twcfg.txt, either as a result of a change   │
  →│ in this package or due to administrator activity, require the            │
  →│ regeneration of the encrypted database before they will take effect.     │
  →│                                                                          │
  →│ Selecting this action will result in your being prompted for the site    │
  →│ key passphrase during the post-installation process of this package.     │
  →│                                                                          │
  →│ Rebuild Tripwire configuration file?                                     │
  →│                                                                          │
  →│                                                                 │
  →│                                                                          │
  →└──────────────────────────────────────────────────────────────────────────┘
  (Press yes)
  →┌────────────────────────┤ Tripwire Configuration ├─────────────────────────┐
  →│                                                                           │
  →│ Tripwire keeps its policies on what attributes of which files should be   │
  →│ monitored in a encrypted database that is generated, by default, from     │
  →│ /etc/tripwire/twpol.txt                                                   │
  →│                                                                           │
  →│ Any changes to /etc/tripwire/twpol.txt, either as a result of a change    │
  →│ in this package or due to administrator activity, require the             │
  →│ regeneration of the encrypted database before they will take effect.      │
  →│                                                                           │
  →│ Selecting this action will result in your being prompted for the site     │
  →│ key passphrase during the post-installation process of this package.      │
  →│                                                                           │
  →│ Rebuild Tripwire policy file?                                             │
  →│                                                                           │
  →│                                                                  │
  →└───────────────────────────────────────────────────────────────────────────┘
  (Press yes)
  ...
  (enter required passphrases)
  ...
 ºThe Tripwire binaries are located in /usr/sbin and the database is located  º
 ºin /var/lib/tripwire. It is strongly advised that these locations be stored º
 ºon write-protected media (e.g. mounted RO floppy). See                      º
 º/usr/share/doc/tripwire/README.Debian for details.                          º


- Other Distros
$ wget  -c  http://download.savannah.gnu.org/releases/tiger/tiger-3.2rc3.tar.gz
                                                            ^^^^^^^^^^^^^^^^^^^
                                                            latest version (2019-01)
  $ tar -xzf tiger-3.2rc3.tar.gz
  $ cd tiger-3.2/
  $ sudo ./tiger

ºtigerrc (Configuration)º
ºRunningº
  $ sudo tiger
  → Tiger UN*X security checking system
  →    Developed by Texas A&M University, 1994
  →    Updated by the Advanced Research Corporation, 1999-2002
  →    Further updated by Javier Fernandez-Sanguino, 2001-2015
  →    Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
  →    Covered by the GNU General Public License (GPL)
  →
  → Configuring...
  →
  → Will try to check using config for '2018' running Linux 4.17.17-x86_64-linode116...
  → --CONFIG-- [con005c] Using configuration files for Linux 4.17.17-x86_64-linode116. Using
  →            configuration files for generic Linux 4.
  → Tiger security scripts ººº 3.2.3, 2008.09.10.09.30 ººº
  → 14:57→ Beginning security report for localhost.
  → 14:57→ Starting file systems scans in background...
  → 14:57→ Checking password files...
  → 14:57→ Checking group files...
  → 14:57→ Checking user accounts...
  → 14:59→ Checking .rhosts files...
  → 14:59→ Checking .netrc files...
  → 14:59→ Checking ttytab, securetty, and login configuration files...
  → 14:59→ Checking PATH settings...
  → 14:59→ Checking anonymous ftp setup...
  → 14:59→ Checking mail aliases...
  → 14:59→ Checking cron entries...
  → 14:59→ Checking 'services' configuration...
  → 14:59→ Checking NFS export entries...
  → 14:59→ Checking permissions and ownership of system files...
  → --CONFIG-- [con010c] Filesystem 'nsfs' used by 'nsfs' is not recognised as a valid filesystem
  → 14:59→ Checking for indications of break-in...
  → --CONFIG-- [con010c] Filesystem 'nsfs' used by 'nsfs' is not recognised as a valid filesystem
  → 14:59→ Performing rootkit checks...
  → 14:59→ Performing system specific checks...
  → 15:06→ Performing root directory checks...
  → 15:06→ Checking for secure backup devices...
  → 15:06→ Checking for the presence of log files...
  → 15:06→ Checking for the setting of user's umask...
  → 15:06→ Checking for listening processes...
  → 15:06→ Checking SSHD's configuration...
  → 15:06→ Checking the printers control file...
  → 15:06→ Checking ftpusers configuration...
  → 15:06→ Checking NTP configuration...
  → 15:06→ Waiting for filesystems scans to complete...
  → 15:06→ Filesystems scans completed...
  → 15:06→ Performing check of embedded pathnames...
  → 15:07→ Security report completed for localhost.
  → Security report is in `/var/log/tiger/security.report.localhost.190115-14:57'.


- When run from the source tree, the security report will be generated in the ./log directory:
→ ...
→ Security report is in `log//security.report.tecmint.181229-11:12'.
$ sudo cat log/security.report.tecmint.181229-11\:12

To display more information on a specific security message:
 - run the tigexp (TIGer EXPlain) command and provide the msgid as an argument,
   where "msgid" is the text inside the [] associated with each message.

For example, to get more information about the following messages, where [acc015w] and [path009w] are the msgids:

--WARN-- [acc015w] Login ID nobody has a duplicate home directory (/nonexistent) with another user.
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
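
For instance, for the two warnings above:
$ tigexp acc015w
$ tigexp path009w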
Telegram Notifications
(Original post in Spanish)
- The Telegram API makes it easy to create bots that notify us of any event on our server (someone logged in through ssh, ...).

ºPRE-SETUPº
- PRE-SETUP STEP 1: Go to:
  - Alt 1 (mobile)
  - Alt 2: web browser

 (BotFather is the "father of all Bots", created by Telegram to make bot creation easier)

- PRE-SETUP STEP 2:
  -  type  "/newbot"
  - We will be asked for the name of our new bot
    (the name must end with "bot")
  - Once the bot name is accepted we will receive the Oºaccess-token to the HTTP APIº
    NOTE: each Telegram user has an ID that can be retrieved by starting the "@userinfobot" bot.

- PRE-SETUP STEP 3: (Optional)
  - If we want to add the Bot to a group or channel to send messages or any other function, we must
    retrieve the Group or Channel ID. To do so send a message to @ChannelIdBot from the channel/group.

ºCREATE-THE-BOTº
- CREATE-THE-BOT STEP 1: Write a test script
  $ editor Oºmy_bot_telegram.shº  # ← edit a new script  (using vim/nano/"editor"...)
  _____________________________________________________
  #!/bin/bash

  TOKEN="XXXXXXXXX:XXXXXXX_XXXXXXXXXXXXXXXXXXXXXXXXXXX" # Replace with token provided
  ID="escribe tu id aquí"                               # Replace with ID Provided
  MENSAJE="This is a ºtestº message"
  URL="https://api.telegram.org/bot$TOKEN/sendMessage"

  curl -s -X POST $URL -d chat_id=$ID -d text="$MENSAJE"
  _____________________________________________________

- CREATE-THE-BOT STEP 2: Set exec permissions
  $ sudo chmod +x Oºmy_bot_telegram.shº

- CREATE-THE-BOT STEP 3: Test it!
  $ Oº./my_bot_telegram.shº

ºExample use 1: Show ssh connection IPº
Add some lines similar to the next one to ~/.bashrc:

TELEGRAM_TOKEN="?????????:???????????????????????????????????" # Replace with token provided
TELEGRAM_ID="?????????"                                        # Replace with ID Provided
URL="https://api.telegram.org/bot${TELEGRAM_TOKEN}/sendMessage"
# We want to exec curl only if it's a real ssh login
# ps -ef | egrep "..." will execute next line only if a match exists.
# The match only exists if our parent process is sshd (real ssh access)
# Using the regex "$PPID.*[s]shd" instead of just "$PPID.*sshd" filters out the egrep command itself
ps -ef | egrep "$PPID.*[s]shd" && \
  curl -s -X POST $URL -d chat_id=${TELEGRAM_ID} -d text="New ssh connection to $HOSTNAME from IP $SSH_CLIENT" 1>/dev/null 2>&1 &


ºExample use 2: Notify server restartsº
$ crontab -e
(add the next entry as a single line: join the lines below, removing the newlines and the trailing '\')
@reboot ( sleep 100 ;  curl -s -X POST \
  https://api.telegram.org/bot"$TELEGRAM_TOKEN"/sendMessage \
  -d chat_id="${TELEGRAM_ID}" \
  -d text="new ssh with IP ${SSH_CLIENT}")
logwatch
Perl script utility to summarize logs. Usage:
(Doesn't seem to work on Fedora; works on Debian)

$ sudo logwatch --detail Low --range today
(output will be similar to)
→  ################### Logwatch 7.4.3 (12/07/16) ####################
→         Processing Initiated: Wed Jul 31 17:41:33 2019
→         Date Range Processed: today
→                               ( 2019-Jul-31 )
→                               Period is day.
→         Detail Level of Output: 0
→         Type of Output/Format: stdout / text
→         Logfiles for Host: 24x7
→  ##################################################################
→
→  --------------------- dpkg status changes Begin ------------------------
→
→  Installed:
→     libdate-manip-perl:all 6.57-1
→     libsys-cpu-perl:amd64 0.61-2+b1
→     libsys-meminfo-perl:amd64 0.99-1
→     logwatch:all 7.4.3+git20161207-2
→
→  ---------------------- dpkg status changes End -------------------------
→
→
→  --------------------- pam_unix Begin ------------------------
→
→  su:
→     Sessions Opened:
→        root → www-data: 2 Time(s)
→
→  sudo:
→     Sessions Opened:
→        user1 → root: 2 Time(s)
→
→  systemd-user:
→     Unknown Entries:
→        session closed for user www-data: 2 Time(s)
→        session opened for user www-data by (uid=0): 2 Time(s)
→
→
→  ---------------------- pam_unix End -------------------------
→
→
→  --------------------- SSHD Begin ------------------------
→
→
→  Deprecated options in SSH config:
→     KeyRegenerationInterval - line 19
→     RSAAuthentication - line 31
→     RhostsRSAAuthentication - line 38
→     ServerKeyBits - line 20
→
→  Users logging in through sshd:
→     user1:
→        81.61.178.46 (81.61.178.46.dyn.user.ono.com): 1 time
→
→  ººUnmatched Entriesºº
→  reprocess config line 31: Deprecated option RSAAuthentication : 1 time(s)
→  reprocess config line 38: Deprecated option RhostsRSAAuthentication : 1 time(s)
→
→  ---------------------- SSHD End -------------------------
→
→
→  --------------------- Sudo (secure-log) Begin ------------------------
→
→
→  user1 => root
→  ---------------
→  /bin/bash                      -   1 Time(s).
→  /usr/sbin/logwatch             -   1 Time(s).
→
→  ---------------------- Sudo (secure-log) End -------------------------
→
→
→  --------------------- Disk Space Begin ------------------------
→
→  Filesystem      Size  Used Avail Use% Mounted on
→  /dev/root        47G   25G   20G  55% /
→
→
→  ---------------------- Disk Space End -------------------------
→
→
→  ###################### Logwatch End #########################
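
Other hedged invocations (service names and the e-mail address are illustrative and depend on the logs present on the host):
$ sudo logwatch --detail High --service sshd --range yesterday          ← only the sshd section, previous day
$ sudo logwatch --detail Low --range 'between -7 days and -1 days'      ← weekly summary
$ sudo logwatch --output mail --mailto admin@example.com --range today  ← e-mail the report instead of printing it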
Logreduce (AI) filter
@[https://opensource.com/article/18/9/quiet-log-noise-python-and-machine-learning]
@[https://pypi.org/project/logreduce/] logreduce@pypi
  - Quiet log noise with Python and machine learning
Text Mng
Text view tools
Displaying text:
$ head -n 20 /path/to/textFile # shows first 20 lines (-n 20). 10 lines by default if -n not provided.
$ tail -n 20 /path/to/textFile # shows last  20 lines (-n 20). 10 lines by default if -n not provided.
$ tail -f    /path/to/textFile # shows stream ("f"ollow) of lines as they are appended to the file
$ less       /path/to/textFile # Views text. Add scroll control backwards and forwards.
                               # embedded systems use "more", that just scroll forwards.
$ cat        /path/to/textFile # dump text content to standard output (STDOUT)
$ cat file1 file2 file3 ....   # concatenates files'content and dumps into STDOUT
$ tac file1 file2 file3 ....   # concatenates files'content and dumps into STDOUT in reverse order
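
Combining head and tail gives a quick way to print an arbitrary line range (here lines 20 to 30; the file path is illustrative):
$ head -n 30 /path/to/textFile | tail -n 11   # lines 20..30  (30 - 20 + 1 = 11 lines)
$ sed -n '20,30p' /path/to/textFile           # same result with sed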


man 1 column
$ column -c 140 /usr/share/dict/words   # ← -c: output width of 140 chars; input has one token per line
→ A             archdeacon      effort's      mads             salient's
→ A's           archdeacon's    effortless    madwoman         salients
→ ...
→ archbishop    effigy          madras        salesperson's    étude's
→ archbishop's  effigy's        madras's      salespersons     études
→ archbishopric effluent        madrases      saleswoman

$ COLUMNS=100 column -t -s, input.csv # ← -t(able):
1     2     3                             -s(eparator): Indicates column sep
a     b     c                                          (white space by default)
x     y     z
Text info tools
$ wc /path/to/textFile  ← Display total count of words, lines and bytes
                          Options:
                            -w count only words
                            -l count only lines
                            -c count only bytes


$ diff file1 file2     ← Compares two text files and output a difference report indicating:
                         '>' line_in_file2_not_in_file1
                         '<' line_in_file1_not_in_file2
$ sdiff                ← Similar to diff but with the report in column mode (more human readable)
$ diff3                ← diff for three files
Text modification tools
By default we can use next tools like:
   $ tool inputFile   # Apply to given file
or
   $ command1 ... | tool1 | tool2 | tool3 # Use stdout from previous command (|) as input

sorting content:
$ sort file1  ← Sorting alphabetically lines in file  (-r to reverse, -g for numerical sort)
$ sort file1 -t ':' -k 4 -k 1    ← Use ':' as separator, sort first by column 4, then column 1.

$ join file1 file2    ← Join two lines together assuming they share at least one common value
                        on the relevant line, skipping lines without a common value.

Cut by column:
$ cut -d "," -f 1,3,7 file1.csv ← Use "," as column delimiter, show then columns 1,3,7
$ cut -c 1-50         file.txt  ← show characters 1 to 50 in each line
$ cut -c 1-5,8,20-    file.txt  ← show characters 1 to 5, 8, and from 20 to the end

$ uniq file1     ←  Eliminates duplicate entries from a file
                    Commonly used with sort like:
                    $ cat file.txt | sort | uniq
                    Options: -c: display number of occurrences of each duplicate
                             -u: list only unique entries
                             -d: list only duplicate entries

$ tr "u" "d" file1  ← translate all instances of characters in a text file
$  cat some_file | tr '[A-Z]' '[a-z]' ˃ new_file  ← Convert all capital letters to lowercase

$ nl file1.txt  ← Display file1.txt to STDOUT prefixing with line numbers

$ sed "s/Gºup*/Oºdown*/Bºgº" file1.txt ← Replaces "Gºupº" by "Oºdownº". "Bºgº" flags indicates to replace all ocurrences. Otherwise only first is replaced.
sed stays for "Stream editor", and has a lot of powerful flags like searching for regular expresions ...
$ awk 
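
A hedged example combining several of the tools above: most frequent values of the first column of a file (the file name is illustrative):
$ awk '{ print $1 }' access.log | sort | uniq -c | sort -rn | head -n 10
  # ← column 1 → group identical values → count them → sort by count (desc) → show top 10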
Vim Cheat Sheet
Global                                            Insert(Append) mode
| :help keyword - open help for keyword           |
| :saveas file - save file as                     | i - insert before the cursor
| :close - close current pane                     | I - insert at the beginning of the line
| K - open man page for word under the cursor     | a - insert (append) after the cursor
                                                  | A - insert (append) at the end of the line
ºCursor movement (Prefix command with "number" toº| o - append (open) a new line below current line
                 ºrepeat it "number" times)º      | O - append (open) a new line above current line
| h - move cursor left                            | ea - insert (append) at the end of the word
| j - move cursor down                            | Esc - exit insert mode
| k - move cursor up
| l - move cursor right
| H - move to top of screen                   ºEditingº
| M - move to middle of screen                | r - replace a single character
| L - move to bottom of screen                | J - join line below current one with one sp. in between
| w - jump forwards to the start of a word    | gJ - join line below current one without sp. in between
| W - jump forwards to the start of a word    | gwip - reflow paragraph
|     (words can contain punctuation)         | cc - change (replace) entire line
| e - jump forwards to the end of a word      | C - change (replace) to the end of the line
| E - jump forwards to the end of a word      | c$ - change (replace) to the end of the line
|     (words can contain punctuation)         | ciw - change (replace) entire word
| b - jump backwards to the start of a word   | cw - change (replace) to the end of the word
| B - jump backwards to the start of a word   | s - delete character and substitute text
|     (words can contain punctuation)         | S - delete line and substitute text (same as cc)
| % - move to matching character              | xp - transpose two letters (delete and paste)
|     '('←→')', '{'←→'}', '['←→']'            | u - undo
|     :h matchpairs in vim for more info      | Ctrl + r - redo
| 0 - jump to the start of the line           | . - repeat last command
| ^ - jump to the first non-blank line char.
| $ - jump to the end of the line
| g_ - jump to last non-blank line-char       ºvisual mode (makes life much simpler!!)º
| gg - go to first line of document           | v - start visual mode, mark lines, then do cmd.
| G -  go to the last line of document        | V - start linewise visual mode
| 5G - go to line 5                           | o - move to other end of marked area
| } - jump to next paragraph/function/block   | Ctrl + v - start visual block mode
| { - jump to prev paragraph/function/block   | O - move to other corner of block
| zz - center cursor on screen                | Esc - exit visual mode
| Ctrl + e - move screen down one line
|     without moving cursor                   ºSearch/replaceº
| Ctrl + y - move screen up one line          | /pattern - search for pattern
|     without moving cursor                   | ?pattern - search backward for pattern
| Ctrl + b - move back one full screen        | \vpattern - 'very magic' pattern: non-alphanumeric
| Ctrl + f - move forward one full screen     |             characters are interpreted as special
| Ctrl + d - move forward 1/2 a screen        |             regex symbols (no escaping needed)
| Ctrl + u - move back 1/2 a screen           | n - repeat search in same direction
                                              | N - repeat search in opposite direction
                                              | :%s/old/new/g - replace all old with new throughout file
                                              | :%s/old/new/gc - replace all old with new throughout file
                                              |                  with confirmations
                                              | :noh - remove highlighting of search matches


ºVisual commands                Registers                            Marks                                   Macrosº
| ˃ - shift text right         | :reg - show registers content      | :marks - list of marks                | qa - record macro a
| ˂ - shift text left          | "xy - yank into register x         | ma - set current position for mark A  | q - stop recording macro
| y - yank (copy) marked text  | "xp - paste contents of register x | `a - jump to position of mark A       | @a - run macro a
| d - delete marked text       |                                    | y`a - yank text to position of mark A | @@ - rerun last run macro
| ~ - switch case              | Tip: Registers persists restart
                               |      (stored in ~/.viminfo)
                               | Tip: Register 0 contains always
                               |      the value of the last yank

ºCut and paste                                            Exitingº
| yy - yank (copy) a line                                | :w - write (save) the file, but don't exit
| 2yy - yank (copy) 2 lines                              | :wq or :x or ZZ - write (save) and quit
| yw - yank (copy) the characters of the word from       | :q - quit safely (fails on unsaved changes)
|                  the cursor position to the start of   | :q!- force quit (ignore unsaved changes)
|                  the next word                         | :wqa - write (save) and quit on all tabs
| y$ - yank (copy) to end of line                        |
| p - put (paste) the clipboard after cursor
| P - put (paste) before cursor
| dd - delete (cut) a line
| 2dd - delete (cut) 2 lines
| dw - delete (cut) the characters of the word from the
|      cursor position to the start of the next word
| D - delete (cut) to the end of the line
| d$ - delete (cut) to the end of the line
| x - delete (cut) character


 ºWorking with multiple files                                             Tabsº
| :e file - edit a file in a new buffer                                 | :tabnew or :tabnew file - open a file in a new tab
| :bnext or :bn - go to the next buffer                                 | Ctrl + wT - move the current split window into its own tab
| :bprev or :bp - go to the previous buffer                             | gt or :tabnext or :tabn - move to the next tab
| :bd - delete a buffer (close a file)                                  | gT or :tabprev or :tabp - move to the previous tab
| :ls - list all open buffers                                           | #gt - move to tab number #
| :sp file - open a file in a new buffer and split window               | :tabmove # - move current tab to the #th position (indexed from 0)
| :vsp file - open a file in a new buffer and vertically split window   | :tabclose or :tabc - close the current tab and all its windows
| Ctrl + ws - split window                                              | :tabonly or :tabo - close all tabs except for the current one
| Ctrl + ww - switch windows                                            | :tabdo command - run the command on all tabs (e.g. :tabdo q - closes all opened tabs)
| Ctrl + wq - quit a window
| Ctrl + wv - split window vertically
| Ctrl + wh - move cursor to the left window (vertical split)
| Ctrl + wl - move cursor to the right window (vertical split)
| Ctrl + wj - move cursor to the window below (horizontal split)
| Ctrl + wk - move cursor to the window above (horizontal split)

History of vim
@[https://begriffs.com/posts/2019-07-19-history-use-vim.html]
Services
SystemD
@[http://freedesktop.org/wiki/Software/systemd/]
man page,
See also
"service unit"                  "targets"
- createNew                       unit_collection
- run                             "wants"
- lifespan:daemon|run-once

Check unit_collections:          Check status of a service:
# systemctl --type=service       # systemctl status firewalld.service

Change to a given target (the systemd equivalent of a SysV runlevel):
$ sudo ºsystemctl isolateº multi-user.target
                           ^^^^^^^^^^^^^^^^^ target to switch to (e.g. graphical.target, rescue.target)


# (sudo) systemctl daemon-reload
# (sudo) systemctl \
   enable|start|stop|restart|disable \
     firewalld.service

# sudo vim /etc/systemd/system/MyCustomScript.service
  | [Unit]
  | Description = making network connection up
  | After = network.target
  | [Service]
  | ExecStart = /root/scripts/conup.sh
  | [Install]
  | WantedBy = multi-user.target
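
After creating the unit file, a minimal sketch of enabling and checking it (unit name taken from the example above):
$ sudo systemctl daemon-reload                        ← make systemd re-read unit files
$ sudo systemctl enable --now MyCustomScript.service  ← enable at boot and start it now
$ systemctl status MyCustomScript.service             ← check the result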


ºSystemd      |Systemd      |Systemd           | Systemdº
ºUtilities    |Daemons      |Targets           | Core   º

$ systemctl  |systemd      | bootmode         | manager
$ journalctl |journald     | basic            | systemd
$ notify     |networkd     | shutdown
$ analyze    |logind       | reboot
$ cgls       |user session |
$ cgtop                    | multiuser
$ loginctl                 | dbus dlog, logind
$ nspawn                   |
                           | graphical
                           | user-session
                           | display service

FILE NAME EXTENSIONS FOR UNIT TYPES:
Oº.target      º: define groups of units. They achieve little themselves and serve to call
Oº             º  other units that are responsible for services, filesystems ...
Oº             º  (equivalent to the classical SysV runlevels)
Oº.service     º: handle services that SysV-init-based distributions will typically
Oº             º  start or end using init scripts.
Oº.(auto)mount º: mounting and unmounting filesystems
Oº.path        º: allow systemd to monitor the files and directories specified;
Oº             º  when an access happens on the path, systemd will start the appropriate unit
Oº.socket      º: create one or more sockets for socket activation;
Oº             º  the associated service unit will start the service when a connection request
Oº             º  is received.

CONFIG. FILE LAYOUT:
(NOTE: /etc takes precedence over /usr)
OºMaintainer   º: /usr/lib/systemd/system              ( + $ systemctl daemon-reload)
OºAdministratorº: /etc/systemd/system/[name.type.d/]   ( + $ systemctl daemon-reload)
OºRuntime      º: /run/systemd/system
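
A hedged example of the Administrator layer overriding a vendor (Maintainer) unit with a drop-in,
without editing the file under /usr (the unit name is illustrative):
$ sudo systemctl edit nginx.service    ← creates/edits a drop-in file under
                                         /etc/systemd/system/nginx.service.d/override.conf
  | [Service]
  | Restart=always
$ systemctl cat nginx.service          ← shows the vendor unit plus all drop-ins overriding it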
  Journalctl(find logs)
- Display/filter/search system logs
# journalctl                      # ← all logs
# journalctl -b                   # ← Boot Messages
# journalctl -b -1                # ← Last Boot Messages
# journalctl --list-boots         # ← list system boots
# journalctl --since "3 hour ago" # ← Time range
                     "2 days ago" #
    --until "2015-06-26 23:20:00" #
# journalctl -u nginx.service     # ← by unit (can be specified multiple times)
# journalctl -f                   # ← Follow ("tail")
# journalctl -n 50                # ← most recent (50) entries
# journalctl -r                   # ← reverse chronological order
# journalctl -b -1  -p "crit"     # ← Combined filters:
                                  # ←   -b -1     : previous boot
                                  # ←   -p "crit" : priority "crit" and above
                                  # ←               (ranges like -p "emerg".."crit" also work)
# journalctl _UID=108             # ← By _UID
---------------------------------------------------------------------
Output Formats ( -o parameter )

   json: json one long-line
   json-pretty:
   verbose:
   cat:  very short form, without any date/time or source server names
   short: (default), syslog style
   short-monotonic: similar to short, but the time stamp second value is shown with precision


BºNOTE:º the journal is "synchronous": each time something is written it checks whether
        there is space left or old entries need to be deleted (vs. periodic cleanup, e.g. once a day).
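
Related examples for checking and trimming the journal's disk usage (sizes/times are illustrative):
# journalctl --disk-usage             # ← show how much space the journal currently uses
# journalctl --vacuum-size=500M       # ← delete the oldest archived journal files until < 500 MB are used
# journalctl --vacuum-time=2weeks     # ← delete archived journal entries older than 2 weeks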
Rsyslog
@[github.com/rsyslog/rsyslog.git]
@[https://www.rsyslog.com]

The Rocket-fast Syslog Server
- Year: 2004
- (primary) author: Rainer Gerhards
- Implements and extends syslog protocol (RFC-5424)
- Adopted by RedHat, Debian*, SuSE, Solaris, FreeBSD, ...
 RºReplaced by journaldº in Fedora 20+

Important extensions include:
- ISO 8601 timestamp with millisecond and timezone
- addition of the name of relays in the host fields
  to make it possible to track the path a given message has traversed
- reliable transport using TCP
- GSS-API and TLS support
- logging directly into various database engines.
- support for RFC 5424, RFC 5425, RFC 5426
- support for RELP (Reliable_Event_Logging_Protocol)
- support for buffered operation modes:
  messages are buffered locally if the receiver is not ready
- complete input/output support for systemd journal
- "Infinite" logs. Can store years of logs from
                   hundreds of machines.
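
A minimal sketch of the reliable TCP forwarding to a central log host (host name, port and file name are illustrative):
  | # /etc/rsyslog.d/90-forward.conf
  | *.*  @@loghost.example.com:514        # '@@' = forward over TCP ('@' = UDP)
$ sudo systemctl restart rsyslog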

Journald
@[https://www.loggly.com/blog/why-journald/]
@[https://docs.google.com/document/pub?id=1IC9yOXj7j6cdLLxWEBAGRL6wl97tFxgjLUEHIX3MSTs]
- system service for collecting and storing log data,
  introduced with systemd.
- easier for admins to find relevant info.
- replaces simple plain text log files with a special file format
  optimized for log messages with index-like queries,
  adding Structure to Log Files
- It does ºnotº include a well-defined remote logging implementation,
  relying on existing syslog-protocol implementations to relay
  to a central log host,(and Rºlosing most of the benefitsº)
- retains full syslog compatibility by providing the same API in C,
  supporting the same protocol, and also forwarding plain-text versions
  of messages to an existing syslog implementation.
  Obviously the format, as well as the journald API allow for structured data.
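
A hedged example of writing to the journal from a shell and reading the entry back by its syslog identifier ('myapp' is an illustrative tag):
$ echo "something happened" | systemd-cat -t myapp -p warning  ← send a line to the journal with identifier + priority
$ journalctl -t myapp -o json-pretty                           ← read it back; structured fields are shown as JSON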

Syslog-protocol Problems:
- syslog implementations (usually) write log messages to plain text files
  with lack of structure.
- syslog protocol does ºNOTº provide a means of separating messages
  by application-defined targets (for example log messages per virt.host)
  This means that, for example, web servers generally write their own access
  logs so that the main system log is not flooded with web server status messages.
- log files write messages terminated by a newline:
  (very) hard for programs to emit multi-line information such as backtraces
  when an error occurs, and log parsing software must often do a lot of work
  to combine log messages spread over multiple lines.

journalctl:
- The journald structured file format does not work well with standard
  UNIX tools optimized for plain text. The journalctl tool will be used.
- very fast access to entries filtered by:
  date, emitting program, program PID, UID, service, ...
- Can also access backups in single files or directories of other systems.

Modern
logging

- Modern architectures use many systems where it becomes impractical to
  read logs on individual machines.
- Centralized logs are usually stored in a (time-series) database, which
  addresses many of the same issues that journald does, without its drawbacks.
- Journald allows applications to send key-value fields that the
  centralized systems could use directly instead of relying on these heuristics.
RºSadly, journald does not come with a usable remote logging solutionº.
  - systemd-journal-remote is more of a proof-of-concept than an actually
    useful tool, lacking good authentication among other things.
application install
RPM
- Understanding Meta Package Handlers
- Setting up Yum Repositories
- Using Repositories
- Managing Packages with yum
- Using Yum Groups
- Understanding yum and RPM Queries
- Using RPM Queries

DNF ("yum++") @[https://dnf.readthedocs.io/en/latest/] rpm soft. management @ GitHub ºMOST COMMONLY USED:º $ dnf search "ºmyPackageº" # ← Show matching pattern in package name|description $ dnf list "ºmyPackageº" # ← Show matching pattern in package name Filter like: $ sudo dnf install myNewPackage # ← Install package and dependencies (-y flag to avoid confirmation prompt) $ sudo dnf history # ← Check dnf install history $ sudo dnf history undo 13 # ← Undo/rollback install $ sudo dnf upgrade # ← Upgrade all upgradable packages (patch security bugs) ºPackage Report:º $ dnf list ºinstalledº # ← report all installed packages $ dnf list ºavailableº # ← report all available packages in any accessible repository $ dnf list ºobsoletesº # ← report obsoleted by packages in any accessible repository $ dnf list ºrecent º # ← report packages recently added into accessible repositories $ dnf list ºupgrades º # ← report available packages upgrading ºAvoid dnf/yum update certain packages:º (This can be needed in critical systems where no downtime is allowed for some service) - Add next line to /etc/dnf/dnf.conf (new Fedora/RedHat distros) or /etc/yum.conf ( old Fedora/RedHat distros) |exclude=kernel* another_package_name_or_name_pattern ºList all available versions of a package º YUM: sorting by version number: $ yum list docker-ce --showduplicates | sort -r ºReport (remote) package repositoriesº $ dnf repolist (example output) → ... → Using metadata from Mon Sep 10 16:21:18 2018 → repo id repo name → base CentOS-7 - Base → centos-openshift-origin CentOS OpenShift Origin → centos-sclo-rh CentOS-7 - SCLo rh → centos-sclo-sclo CentOS-7 - SCLo sclo → code Visual Studio Code → docker-ce-stable Docker CE Stable - x86_64 → *epel Extra Packages for Enterprise Linux 7 - x86_64 12,672 → extras CentOS-7 - Extras → go-repo go-repo - CentOS → nodesource Node.js Packages for Enterprise Linux 7 - x86_64 144 → openlogic CentOS-7 - openlogic packages for x86_64 113 → pgdg94 PostgreSQL 9.4 7 - x86_64 → updates CentOS-7 - Updates ºFull command list:º clean Performs cleanup of temporary files kept for repositories. This includes any such data left behind from disabled or removed repositories as well as for different distribution release versions. distro-sync As necessary upgrades, downgrades or keeps selected installed packages to match the latest version available from any enabled repository. If no package is given, all installed packages are considered. downgrade Downgrades the specified packages to the highest installable package of all known lower versions if possible. When version is given and is lower than version of installed package then it downgrades to target version. group Groups are virtual collections of packages. DNF keeps track of groups that the user selected (“marked”) installed and can manipulate the comprising packages with simple commands. (See only manual for more info) help history The history command allows the user to view what has happened in past transactions and act according to this information (assuming the history_record configuration option is set). info list description and summary information about installed and available packages. install DNF makes sure that the given packages and their dependencies are installed on the system. Each can be either a , or a @. See Install Examples. If a given package or provide cannot be (and is not already) installed, the exit code will be non-zero. 
             When an argument specifying an exact version of the package is given, DNF will
             install the desired version no matter which version of the package is already
             installed. The former version of the package will be removed in the case of a
             non-installonly package.
             There are also a few specific install commands install-n, install-na and
             install-nevra that allow the specification of an exact argument in NEVRA format.
             Install examples:
             $ dnf install tito
               Install package tito (tito is a package name).
             $ dnf install ~/Downloads/tito-0.6.2-1.fc22.noarch.rpm
               Install local rpm file tito-0.6.2-1.fc22.noarch.rpm from the ~/Downloads/ directory.
             $ dnf install tito-0.5.6-1.fc22
               Install a package with a specific version. If the package is already installed it
               will automatically try to downgrade or upgrade to the specific version.
             $ dnf --best install tito
               Install the latest available version of the package. If the package is already
               installed it will automatically try to upgrade to the latest version. If the
               latest version of the package cannot be installed, the installation fails.
             $ dnf install vim
               DNF will automatically recognize that vim is not a package name but a "provide",
               and install a package that provides vim with all required dependencies.
               Note: package name matches have precedence over package provides matches.
             $ dnf install https://.../packages/tito/0.6.0/1.fc22/noarch/tito-0.6.0-1.fc22.noarch.rpm
               Install a package directly from a URL.
             $ dnf install '@Web Server'
               Install the environmental group 'Web Server'.
             $ dnf install /usr/bin/rpmsign
               Install a package that provides the /usr/bin/rpmsign file.
             $ dnf -y install tito --setopt=install_weak_deps=False
               Install package tito without weak deps. Weak deps are not required for the core
               functionality of the package, but they enhance the original package (like extended
               documentation, plugins, additional functions, ...).
list         Dumps lists of packages depending on the packages' relation to the system. A package
             is "installed" if it is present in the RPMDB, and it is "available" if it is not
             installed but present in a repository that DNF knows about. The list command can
             also limit the displayed packages according to other criteria, e.g. to only those
             that update an installed package. The exclude option in the configuration file
             (.conf) might influence the result, but if the command line option --disableexcludes
             is used, it ensures that all installed packages will be listed.
makecache    $ dnf [options] makecache
               Downloads and caches metadata in binary format for all known repos. Tries to avoid
               downloading whenever possible (e.g. when the local metadata hasn't expired yet or
               when the metadata timestamp hasn't changed).
             $ dnf [options] makecache --timer
               Like plain makecache but instructs DNF to be more resource-aware: it will not do
               anything if running on battery power and will terminate immediately if it's too
               soon after the last successful makecache run (see dnf.conf(5), metadata_timer_sync).
mark         $ dnf mark install ...
               Marks the specified packages as installed by the user. This can be useful if a
               package was installed as a dependency and is desired to stay on the system when
               the Auto Remove Command or Remove Command (along with the
               clean_requirements_on_remove configuration option set to True) is executed.
             $ dnf mark remove ...
               Unmarks the specified packages as installed by the user...
             $ dnf mark group ...
               Marks the specified packages as installed by group...
module
provides     Finds the packages providing the given "provide". This is useful when one knows a
             filename and wants to find what package (installed or not) provides this file.
reinstall
remove
repoinfo
repolist
repoquery    $ dnf [options] repoquery [] [] []
               Searches the available DNF repositories for selected packages and displays the
               requested information about them. It is an equivalent of rpm -q for remote
               repositories.
             $ dnf [options] repoquery --querytags
               Provides the list of tags recognized by the repoquery option --queryformat.
             There are also a few specific repoquery commands repoquery-n, repoquery-na and
             repoquery-nevra that allow the specification of an exact argument in NEVRA format
             (does not affect arguments of options like --whatprovides, ...).
repository-packages
search       Search package metadata for the keywords. Keywords are matched as case-insensitive
             substrings, globbing is supported. By default it lists packages that match all
             requested keys (AND operation). Keys are searched in package names and summaries.
             If the option "--all" is used, it lists packages that match at least one of the keys
             (OR operation), and in addition keys are searched in package descriptions and URLs.
             The result is sorted from the most relevant results to the least.
shell
swap         Remove spec and install spec in one transaction. Each "spec" can be either a package
             spec, which specifies a package directly, or a @"group-spec", which specifies an
             (environment) group which contains it. Automatic conflict solving is provided in DNF
             by the --allowerasing option, which provides the functionality of the swap command
             automatically.
updateinfo   Display information about update advisories. Depending on the output type, DNF
             displays just counts of advisory types (omitted or --summary), a list of advisories
             (--list) or detailed information (--info). When --info is used with the -v option,
             the information is even more detailed.
upgrade
upgrade-minimal
             $ dnf [options] upgrade-minimal
               Updates each package to the latest version that provides a bugfix, enhancement or
               fix for a security issue (security).
             $ dnf [options] upgrade-minimal ...
               Updates each specified package to the latest available version that provides a
               bugfix, enhancement or fix for a security issue (security). Updates dependencies
               as necessary.
upgrade-to

- ADDITIONAL INFORMATION (see the official manual): Options, Specifying Packages,
  Specifying Exact Versions of Packages, Specifying Provides, Specifying Groups,
  Specifying Transactions, Metadata Synchronization, Configuration Files Replacement Policy,
  Files, See Also.

Software Collections
Ex:
$ python --version
Python 2.7.5
$ scl enable rh-python35 bash
$ python --version
Python 3.5.1
# https://wiki.centos.org/SpecialInterestGroup/SCLo
# http://fedoraproject.org/wiki/EPEL
""" Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that
    creates, maintains, and manages a high quality set of additional packages for Enterprise
    Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS ...
    usually based on their Fedora counterparts and will never conflict with or replace packages
    in the base Enterprise Linux distributions.... """

Oº# Install from a given repository:º
$ sudo yum --disablerepo=\* --enablerepo=my-cool-repo install myPackage

Ex: Install Development Tools ...
º$ dnf groupinfo "Development Tools"º
→ Group: Development Tools
→ Description: A basic development environment.
→ Mandatory Packages:
→   autoconf
→   automake
→   binutils
→   ...
→ Default Packages:
→   byacc
→   cscope
→   ...
→ Optional Packages:
→   ElectricFence
→   ant
→   babel
→   ...
º$ sudo dnf groupinstall "Development Tools"º
FlatPak
ºSandboxesº
- each application is built and run in an isolated 'sandbox' environment
  containing an application and its runtime.
- By default, the application can only access the contents of its sandbox.
- Access to user files, network, graphics sockets, subsystems on the bus and devices
  have to be explicitly granted (see the "flatpak override" sketch at the end of this section).
- Access to other things, such as other processes, is deliberately NOT possible.

-  resources inside the sandbox that need to be exposed outside are known as "exports".
   (Ex: .desktop file and icon,...)
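
A hedged example of granting/revoking such extra permissions from the command line for an installed application (the application ID is illustrative):
$ flatpak override --user --filesystem=home   org.gimp.GIMP  ← grant access to the user's home directory
$ flatpak override --user --nofilesystem=home org.gimp.GIMP  ← revoke that access again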

ºPortalsº
- Mechanism through which applications can interact with the host environment
  from within a sandbox. They give the ability to interact with data, files
  and services without the need to add sandbox permissions.

- Examples of capabilities that can be accessed through portals include opening
  files through a file chooser dialog, or printing.
- Interface toolkits can implement transparent support for portals, so access
  to resources outside of the sandbox will work securely and out of the box.

ºRepositoriesº
- applications and runtimes are typically stored and published using repositories,
 which behave very similarly to Git repositories, containing a single object or
 multiple ones. Each object is versioned, allowing the up|down-grade.
- Remote repository's content can be inspected and searched, and it can
  be used as the source of applications and runtimes.
- When an update is performed, new versions of installed applications and
  runtimes are downloaded from the relevant remotes. (only the difference actually)

flatpak cli
NOTE: graphical software management tools exist, so using the command line is optional

$ flatpak install

$ flatpak uninstall

ºIdentifiersº
- unique three-part identifiers like
  com.company.App
              ^^^
      final segment is the object's name


ºIdentifier tripleº

   com.company.App/i386/stable         com.company.App//stable       com.company.App/i386//
   ^^^^^^^^^^^^^^^ ^^^^ ^^^^^^                        ^^                                 ^^
    Application-ID Arch branch                   Use default                        Use default
    or name                                      architecture                       branch


ºSystem versus userº
- Flatpak commands/repositories can be used/applied:
  - system-wide (default)
  - per-user    (--user option)

ºBasic commandsº
$ flatpak remotes # ← list remotes repositores configured on the system
(output list indicates whether remote has been added per-user or system-wide)


$ OºURL_FLATPAKREPOº="https://dl.flathub.org/repo/flathubº.flatpakrepoº"
                                                          ^^^^^^^^^^^
                                                       a .flatpakrepo file
                                                       contains:
                                                       - remote details
                                                       - remote GPG key
$ flatpak remote-add \          ← Add a remote repository
     --if-not-exists \
     flathub \                  ← arbitrary local-name assigned to remote
     Oº${URL_FLATPAKREPO}º

$ flatpak remote-delete flathub ← Remove remote-repo from list of known remotes

$ flatpak search gimp           ← return any applications matching search terms.
(each row-result includes application ID + remote where it was found)


$ FLATPAKREF="https://flathub.org/repo/appstream/org.gimp.GIMP.flatpakref"
$ flatpak install ${FLATPAKREF}         ← install application from FLATPAKREF
$ flatpak install flathub org.gimp.GIMP ← install application from remote and ID
                  ^^^^^^^ ^^^^^^^^^^^^^
                  remote  application ID
  (the application runtime will also be installed if not yet present)

º$ flatpak run org.gimp.GIMP     ← Run the applicationº

$ flatpak update                ← update all your installed applications
                                  and runtimes to the latest version

$ flatpak list                  ← List installed applications and runtimes

$ flatpak list --app            ← List installed applications only

$ flatpak uninstall ${APP_ID}   ← Uninstall Application

ºFull Journeyº
# Add remote "repository"
$ flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo
# Install monodevelop from remote repository
$ flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref
# Run monodevelop
$ flatpak run com.xamarin.MonoDevelop
APT(Debian/Ubuntu)
  Package source list: /etc/apt/sources.list file


$ sudo apt-get  install ubuntu-desktop  "package2" ... (-s to simulate)
$ sudo aptitude install ubuntu-desktop  ← Ncurses variant


$ sudo auto-apt run "./configure" ← runs "command_string", installing uninstalled packages when possible.
  ^ keeps databases up-to-date by calling:
    $ auto-apt update
    $ auto-apt updatedb
    $ auto-apt update-local

$ apt-get update ← Run periodically and after modifications to /etc/apt/*

$ apt-get upgrade ← upgrade all installed packages.

$ apt-get dist-upgrade ← same as upgrade, except add the "smart upgrade" checkbox.
                         It tells APT to use "smart" conflict resolution system,
                         and ºit will attempt to upgrade the most important packages º
                        ºat the expense of less important ones if necessary.º
                         NOTE: it does NOT upgrade from a previous version of Ubuntu!

ºDiagnostics:º
$ apt-get check    ← update package lists and checks for broken dependencies

ºCleaning/Removalº
$ sudo apt-get autoclean  ← removes .deb files for packages that are no longer
                            installed on your system. (Can save space in /var/cache/apt/archives)
$ apt-get clean           ← same as autoclean, except it removes all packages from the package cache.
                            Not recomended with slow-connections.
                            ($ du -sh /var/cache/apt/archives)

$ sudo apt-get remove "package_name" ← keeps  configuration files
$ sudo apt-get purge  "package_name" ← Remove configuration files

$ apt-get autoremove     ← removes packages no longer needed

ºTroubleshootingº
$ sudo apt-get -f install  ← Fix Broken Packages (system complains about "unmet dependencies")

$ dpkg-reconfigure "package_name"


$ echo "'package_name' hold" | \  ← Put package_name on hold
  sudo dpkg --set-selections        may have the unintended side effect of preventing upgrades
                                    to packages that depend on updated versions of the pinned package.
                                    apt-get dist-upgrade will override this, but will warn you first.

ºSearch commandsº
$ apt-cache search "search_term" ← lists packages matching in name or description

$ dpkg -l º"search_term"º  ← find packages whose names contain "search_term".
                             It also shows whether a package is installed on your system

$ dlocate "package_name"         ← "reverse lookup". It Determines which installed package owns "package_name".
                                    It shows files from installed packages that match "package_name",
                                    with the name of the package they came from.
                                    Its equivalent to the slower (but installed by default):
                                    $ dpkg -S "filename_search_pattern"
$ sudo apt-file update && \       ← Similar to dlocate but searches over all available packages:
  sudo apt-file search \            """what package provides this file?"""
       "filepath_pattern"
ºPackage Infoº
$ apt-cache show "package_name"  ← shows the description of "package_name", version, size, dependencies and conflicts.
$ dpkg -L "package_name"         ← list files in package
$ dpkg -c foo.deb                ← lists files in the manually downloaded package "./foo.deb".

$ apt-cache pkgnames             ← provides a list of every package in the system



ºSetting http-proxyº
- alt 1: Temporary proxy session:
  # export http_proxy=http://username:password@yourproxyaddress:proxyport

- alt 2: APT configuration file method
  Add next line to /etc/apt/apt.conf
  | Acquire::http::Proxy "http://yourproxyaddress:proxyport";


ºDeveloper commands:º
$ apt-get build-dep $PACKAGE_NAME
apk (Alpine/"Docker")
Because Alpine Linux is designed to run from RAM, package management involves two phases:

- Installing / Upgrading / Deleting packages on a running system.
- Restoring a system to a previously configured state
  (e.g. after reboot), including all previously installed packages
  and locally modified configuration files. (RAM-Based Installs Only)

- apk is the tool used to install, upgrade, or delete software on a running system.
- lbu is the tool used to capture the data necessary to
  restore a system to a previously configured state.
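
  A minimal lbu sketch (assumed workflow on a standard RAM-based install with
  writable boot media; the /etc/myapp path below is just an example):
  # lbu status                ← list local config changes not yet saved
  # lbu include /etc/myapp    ← add an extra path to the backup set
  # lbu commit -d             ← write the .apkovl overlay to the boot media
                                (-d deletes older copies)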


apk:
  add      Add new packages to the running system
  del      Delete packages from the running system
  fix      Attempt to repair or upgrade an installed package
  update   Update the index of available packages
  info     Prints information about installed or available packages
  search   Search for packages or descriptions with wildcard patterns
  upgrade  Upgrade the currently installed packages
  cache    Maintenance operations for locally cached package repository
  version  Compare version differences between installed and available packages
  index    create a repository index from a list of packages
  fetch    download (but not install) packages
  audit    List changes to the file system from pristine package install state
  verify   Verify a package signature
  dot      Create a graphviz graph description for a given package
  policy   Display the repository that updates a given
           package, plus repositories that also offer the package
  stats    Display statistics, including number of
           packages installed and available, number of
           directories and files, etc.
  manifest Display checksums for files contained in a given package


- Software packages for Alpine Linux are digitally signed
  tar.gz archives with ".apk" extension (often called "a-packs")
  stored in one or more repositories (directory with a collection of *.apk files
  and a APKINDEX.tar.gz index )

- The list of repositories to check is stored in /etc/apk/repositories
  (one repo per line)
  Ex:
  /media/sda1/apks
  http://nl.alpinelinux.org/alpine/v3.7/community
  Bº@edgeº http://nl.alpinelinux.org/alpine/edge/main
  Bº@edgecommunityº http://nl.alpinelinux.org/alpine/edge/community
  Bº@testingº http://nl.alpinelinux.org/alpine/edge/testing
  ^ "tagged" repo
    will be used like
    # apk add stableapp newapp@Bºedgeº bleedingapp@Bºtestingº
    by default only untagged repositories are used

Update the Package list
  # apk update   (fetch and cache locally the latest APKINDEX.tar.gz from the remote repos)

Add Packages (transitive dependencies are resolved automatically):
  # apk add openssh openntp vim

Add a package from the dev. repository (dangerous):
  # apk add cherokee --update-cache \
    --repository http://dl-3.alpinelinux.org/alpine/edge/testing/ \
    --allow-untrusted

Add local Package
  # apk add --allow-untrusted /path/to/file.apk


Remove a Package

  # apk del openssh

Upgrade a Running System

  # apk update
  # apk upgrade

To upgrade only a few packages, use the add command with the -u or --upgrade option:

apk update
apk add --upgrade busybox
Note: Remember that when you reboot your machine, the remote repository will
not be available until after networking is started. This means packages newer
than your local boot media will likely not be installed after a reboot. To
make an "upgrade" persist over a reboot, use a local cache.

Search for Packages
 # apk search -v        ← list all packages along with descriptions
 # apk search -v 'acf*' ← list all packages part of the ACF system
 # apk search -v --description 'NTP' ← list all packages that list NTP as
                                        part of their description.

Information on Packages
  # apk info -a zlib ← -a: show all available info
                       -w: show just the webpage info
Storage
File System Management Basics
┌──────────────────┬────────────────────────────────────┐ ┌────────────────┬────────────────────────────────────┐ ┌─────────────────┬─────────────────────────────────────┐
│ºMOVING AROUND FSº│                                    │ │ºLISTING  FILESº│                                    │ │ºHARD/SOFT LINKSº│                                     │
├──────────────────┘                                    │ ├────────────────┘                                    │ ├─────────────────┘                                     │
│ $ºpwdº           ← "P"rint "W"orking "D"irectory      │ │$ºlsº -"optionalFlags"   ← list files in current     │ │- In UNIX an Oºinodeº is the low-level structure       │
│ $*cd*/my/new/dir ← "C"hange "d"irectory               │ │      ^^^^^^^^^^^^^^^^     directory                 │ │  that stores the physical location in disk of         │
│ $ºcdº            ← move to $HOME directory            │ │ -l: ("long"), show permissions, size,               │ │  a given file. This "inode" is not visible to         │
│ $ºcd ~º          ← '~' is alias for $HOME             │ │     modification date, ownership                    │ │  the user.                                            │
│ $ pushd .        ← Remember current dir.              │ │ -a: ("all" ), shows also hidden (.*) files          │ │- The user visible entities are the file system        │
│                    (put on FILO stack)                │ │ -d: Show only directory entry(vs directory contents)│ │  paths, that point to the real inodes:                │
│ $ cd ..   # Change to parent directory                │ │ -F: ("format") append helper symbol                 │ │ /my/file/path →points→Oºinodeº →points→  physical     │
│ $ popd           ← change to last dir. in stack       │ │ -S: sort by size in descending order                │ │                  to               to   block_on_disk  │
└───────────────────────────────────────────────────────┘ │ -R: ("recursive") list recursively children dirs    │ │ ^^^^^^^^^^^^^           ^^^^^^         ^^^^^^^^^^^^^  │
                                                          └─────────────────────────────────────────────────────┘ │ visible in          invisible to      managed by HD,  │
┌──────────────────────┬────────────────────────────┐     ┌──────────────┬──────────────────────────────────────┐ │ user shells,        users, managed    internal circuit│
│ºCREATE NEW DIRECTORYº│                            │     │ºMOVING FILESº│                                      │ │ GUI explorers,..    by the OS kernel  networks NAS,...│
├──────────────────────┘                            │     ├──────────────┘                                      │ │                                                       │
│$ºmkdirº-p ~/projects/project01 # ←  Make directory│     │$ºmvºmyFile1 /my/path02/  # ← move myFile1 to        │ │                                                       │
│         ↑                                         │     │                       ^      /my/path02 directory   │ │$*ln -s*/my/file/path  /my/symbolic/link               │
│     without -p will fail if                       │     │                    ┌──┘                             │ │  ^^ ^^                                                │
│     ~/projects directory does                     │     │RºWARNº: The final '/' indicates that path02         │ │ ºcreate symbolic (-s) linkº                           │
│     NOT YET exist. With -p it                     │     │         is a directory.                             │ │- A symbolic link is a shortcut                        │
│     will be created automatically                 │     │         Otherwise if "path02" does NOT exist,       │ │  pointing to a visible filepath                       │
└───────────────────────────────────────────────────┘     │         myFile1 will move to the '/my/' directory   │ │                                                       │
                                                          │         and wrongly renamed as 'path02' file.       │ │                                                       │
                                                          └─────────────────────────────────────────────────────┘ │RºWARN:ºIf the original /my/file/path is               │
┌────────────────┬───────────────────────────────┐        ┌──────────────┬──────────────────────────────────────┐ │        deleted or moved the link is broken.           │
│ºCOPYING FILESº │                               │        │ºRENAME FILESº│                                      │ │$ºlnº my/file/path  /my/hard/link                      │
├────────────────┘                               │        ├──────────────┘                                      │ │$ºlnº my/file/path  /my/hard/link                      │
│$ºcpº   fileToCopy      /destination/directory/ │        │$ºmvºmyFile1 finalName # ← move myFile1 to new name  │ │  ^^                                                   │
│$ºcpº-R directoryToCopy /destination/directory/ │        │                       ^   (rename) finalName        │ │- With no -s(ymbolic) flag the link will point         │
│     ^^                                         │        └─────────────────────────────────────────────────────┘ │  to the realOºinodeº. Deleting the original file      │
│     -R: Recursive copy of dir.content          │                                                                │  will not affect the new path.                        │
└────────────────────────────────────────────────┘        ┌───────────────────────────────────────────────┬─────┐ │- In can also be created pointing directly to          │
┌──────────────────────────┬─────────────────────┐        │ºREMOVE FILES AND CONTENT BY OVERWRITING FIRSTº│     │ │  the Oºinodeº.                                        │
│ºREMOVE FILES/DIRECTORIESº│                     │        ├───────────────────────────────────────────────┘     │ │                                                       │
├──────────────────────────┘                     │        │$ºshredº-n 2 -z -v /tmp/myImportantSecret            │ │  /my/symbolic/Link                                    │
│$ºrmº-r -f file                                 │        │           ↑  ↑  ↑                                   │ │   ↓                                                   │
│     ^^ ^^                                      │        │           │  │  └── show progress                   │ │  /my/file/path ──────┐                                │
│     |  force deletion, do not confirm          │        │           │  └───── Finally write over with zeroes  │ │                      ↓        physical                │
│     |  useful for automated scripts            │        │           └──────── Overwrite twice with random data│ │                   Oºinodeº──→ block                   │
│     |                                          │        │                                                     │ │                      ↑ ↑      on disk                 │
│     Recursive deletion.                        │        │@[https://linux.die.net/man/1/shred]                 │ │  /my/hard/link ──────┘ |                              │
│   RºWARNº: Be very careful,                    │        │prevent data from being recovered by                 │ │                        |                              │
│            especially as root user             │        │probably hardware)                                   │ │                  - the inode will increase its        │
└────────────────────────────────────────────────┘        │probably hardware)                                   │ │                    number of references after a       │
                                                          └─────────────────────────────────────────────────────┘ │                    hard-link.                         │
                                                                                                                  │                  - the inode (file) will exist        │
┌──────────────────────┬──────────────────────────────┐    ┌────────────────────────────┬────────────────────┐    │                    until all hardlinks(references)    │
│ºSEARCHING FILES/DATAº│                              │    │ºCHECK DISK FREE/USED SPACEº│                    │    │                    are deleted.                       │
├──────────────────────┘                              │    ├────────────────────────────┘                    │    └───────────────────────────────────────────────────────┘
│@[https://linux.die.net/man/1/find]                  │    │@[https://linux.die.net/man/0/df]                │
│                                                     │    │$ºdfº-k -h  -x devtmpfs -x tmpfs # ºDºiskº Fºree │  ┌─────────────────┬───────────────────────────────┐
│$ºfindº           \                                  │    │      ↑  ↑   ↑           ↑                       │  │ºGRAPHICAL TOOLSº│                               │
│   /var/lib       \   ←     find inside /var/lib     │    │      │  │   Skip "false" filesystems(dev,...)   │  ├─────────────────┘                               │
│   -type f        \   ←     objects of type 'file'   │    │      │  └── show in human readable units        │  │- Easier to use, but not scriptable.             │
│                            (ignore dirs, links, ...)│    │      └───── scale size by 1K before printing    │  │- Discouraged for system administration.         │
│   -iname "*html" \   ← AND whose name matches *html │    │@[https://linux.die.net/man/1/du]                │  │                                                 │
│                             name:do NOT ignore case │    │$ºduº-sch dir1 file2 # ºDºisk  ºUºsage           │  │$ºmcº      ← Execute the Midnight Commander      │
│                            iname:do     ignore case │    │      ↑↑↑                                        │  │$ºrancherº ← Execute rancher.                    │
│   -mmin  -30     \   ← AND whose modification time  │    │      ││└── show in human readable units         │  │             Light and Nice Console File Manager │
│                            is '30 or less'(-30)     │    │      │└─── produce a grand total                │  │             with VI Key Bindings                │
│                            minutes (mmin)           │    │      └──── display only total for each arg      │  │$ºnnnº     ← one of the fastest and most         │
│   -size +20k         ← AND whose size (-size) is    │    └─────────────────────────────────────────────────┘  │             ~50KB binary using ~3.5MB resident  │
│                            20k(ilobytes) or more    │   ┌──────────────────┬──────────────────────────┐       │             ~50KB binary using ~3.5MB resident  │
│                                                     │   │ºFILE PERMISSIONSº│                          │       │             memory at runtime:                  │
│$ºfindº           \   ←     find                     │   ├──────────────────┘                          │       │        @[https://github.com/jarun/nnn#features] │
│   /var/lib       \   ←     inside /var/lib          │   │- Change who can read,write or execute a file│       └─────────────────────────────────────────────────┘
│   -type f        \   ←     object of type 'file'    │   │  $ chmod go-rwx mySecretDir                 │
│   -iname "*html" \   ← AND whose name matches '*html│   │            ↑↑                               │
│ Oº-execº Bºgrep "myStyleSheet.css"ºB*{} \;*         │   │            │└──  (r)ead, (w)rite e(x)ecute  │
│    ^^^^    ^^^^^^^^^^^^^^^^^^^^^^    ^^^^^          │   │            │      4       2        1        │
│+ºexecuteºBºthis commandº           Bºfor each º     │   │            │                                │
│                                   Bºfound fileº     │   │            └───   - remove                  │
└─────────────────────────────────────────────────────┘   │                   + add                     │
                                                          │                                             │
                                                          │- Change someFile owner and group            │
                                                          │  $ chown newOwner:newGroup someFile         │
                                                          └─────────────────────────────────────────────┘
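
  Using the octal values from the permissions table above, a couple of worked
  examples (file names are just illustrations):
  $ chmod 640 report.txt    ← owner: rw- (4+2), group: r-- (4), others: --- (0)
  $ chmod 755 myScript.sh   ← owner: rwx (4+2+1), group/others: r-x (4+1)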
  Monitor files:
┌──────────────────────────────┬─────────────────────────────────────────────────────────────────────────┐
│ºLIST FILES OPEN BY A PROCESSº│                                                                         │
├──────────────────────────────┘                                                                         │
│@[https://linux.die.net/man/8/lsof]                                                                     │
│$ sudOºlsofº-pOº511º                                                                                    │
│             ↑                                                                                          │
│             └── running process with ID=511                                                            │
│                                                                                                        │
│(output will be similar ...)                                                                            │
│  COMMAND   OºPIDº  USER   FD      TYPE     DEVICE SIZE/OFF    NODE NAME                                │
│  avahi-dae Oº511º avahi  cwd       DIR        8,1       67 1274283 /etc/avahi                          │
│  avahi-dae Oº511º avahi  txt       REG        8,1   136264 2568376 /usr/sbin/avahi-daemon              │
│  avahi-dae Oº511º avahi  DEL       REG        8,1          1713236 /usr/lib64/libnss_sss.so.2;5ae2fcc0 │
│  avahi-dae Oº511º avahi  DEL       REG        8,1          1390813 /usr/lib64/...                      │
│  avahi-dae Oº511º avahi    0r      CHR        1,3      0t0    1028 /dev/null                           │
│  avahi-dae Oº511º avahi    1u     unix 0xff222c00      0t0   20500 socket                              │
│  avahi-dae Oº511º avahi    3u     unix 0xffb84400      0t0   18699 /var/run/avahi-daemon/socket        │
│  avahi-dae Oº511º avahi    7w     FIFO        0,8      0t0   20324 pipe                                │
│  avahi-dae Oº511º avahi   11r  a_inode        0,9        0    7017 inotify                             │
│  avahi-dae Oº511º avahi   12u     IPv4      21553      0t0     UDP *:mdns                              │
│  avahi-dae Oº511º avahi   13u     IPv4      21554      0t0     UDP *:44720                             │
│  avahi-dae Oº511º avahi   14u  netlink                 0t0   21555 ROUTE                               │
│  ...                                                                                                   │
└────────────────────────────────────────────────────────────────────────────────────────────────────────┘

┌──────────────────────────────────────┬───────────────────────────────────────────┐
│ºLIST PROCESSES USING ANY FILE IN ETCº│                                           │
├──────────────────────────────────────┘                                           │
│$ sudo lsof O*/etc/*                                                              │
│(output can be similar ...)                                                       │
│COMMAND     PID      USER   FD   TYPE DEVICE SIZE/OFF      NODE NAME              │
│avahi-dae   511     avahi  cwd    DIR    8,1       67 101274283 O*/etc/*avahi     │
│avahi-dae   511     avahi  rtd    DIR    8,1       67 101274283 O*/etc/*avahi     │
│java      41043 azureuser  296r   REG    8,1      393       154 O*/etc/*os-release│
│java      41043 azureuser  297r   REG    8,1      393       154 O*/etc/*os-release│
│...                                                                               │
└──────────────────────────────────────────────────────────────────────────────────┘

┌──────────────────────────┬───────────────────────────┐ ┌─────────────────────────────────────┬───────────────────────────────┐
│ºMONITOR FILE/DIR. ACCESSº│                           │ │ºAUDIT LAST ACCESS/EXECUTION OF FILEº│                               │
├──────────────────────────┘                           │ ├─────────────────────────────────────┘                               │
│@[https://linux.die.net/man/1/inotifywait]            │ │$*stat*/usr/bin/sort                                                 │
│Ex:ºWait for changes, then execute someCommandº       │ │  File: /usr/bin/sort                                                │
│LIST_OF_FILES_TO_MONITOR="file1 file2 ..."            │ │  Size: 144016          Blocks: 288     IO Block: 4096   regular file│
│while  true ; do                                      │ │Device: fd00h/64768d    Inode: 263476   Links: 1                     │
│ ºinotifywaitº-q -e modify ${LIST_OF_FILES_TO_MONITOR}│ │Access: (0755/-rwxr-xr-x)  Uid: (    0/ root)   Gid: (    0/    root)│
│  someCommandToExecute                                │ │Context: system_u:object_r:bin_t:s0                                  │
│done                                                  │ │Access: 2019-05-18 15:00:03.242672510 -0400                          │
│                                                      │ │Modify: 2018-11-07 10:14:42.000000000 -0500                          │
│See also                                              │ │Change: 2019-01-17 08:14:01.789349117 -0500                          │
│@[https://linux.die.net/man/1/inotifywatch]           │ │ Birth: -                                                            │
│  (gather filesystem access statistics)               │ │                                                                     │
└──────────────────────────────────────────────────────┘ │For an executable access will show when it was last executed         │
                                                         └─────────────────────────────────────────────────────────────────────┘
Block Storage
REF: @[https://opensource.com/article/18/11/partition-format-drive-linux]

Linux (UNIX) sees storage devices as block devices:
- reads and writes are done in fixed-size blocks (usually 4096 bytes or more).
- RAM is used to automatically cache disk data, avoiding slow but
  frequent accesses.
- block reads/writes go to random places (vs serialized/ordered access).
  Random access is still slower than sequential access (much less so on SSDs).

   $ºlsblkº   ← list attached block devices:
   Example Output:
01 →  NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
02 →  sda                 8:0    0 447,2G  0 disk                         
03 →  └─sda1              8:1    0 223,6G  0 part  /                   
04 →  └─sda2              8:1    0 223,6G  0 part  /var/backups        
05 →  sdb                 8:16   0 931,5G  0 disk                      
06 →  └─sdb1              8:17   0   900G  0 part                      
07 →    └─md1             9:1    0 899,9G  0 raid1 /home               
08 →  sdc                 8:32   0 931,5G  0 disk                      
09 →  └─md1               9:1    0 899,9G  0 raid1 /home               
10 →  ...                                                                  
      ^^^^^^^                                      ^^^^^^^^^^^^^^             
      Device name assigned                         Visible File─system path  
      by Linux kernel.                             (mount─point) to apps     
      Only system.admin will                       and users.                
      care about it, usually                                              
      through a virtual path on                                           
      the file─system like /dev/sda1,...                                  
    
      Lines 02-04: Disks usually are "big". Partitions are used to make better
      use of them. In this case disk "sda" is split into partitions sda1 and sda2
  
      Access to Block devices is done indirectly through the File-system.
                                                                                                                            Block-Device
                                                                                                                            ────────────
│Human│ N←─→ M │Applications│N←─→ 1│ User─space│    1←───→ 1     │Linux │       1← ───────→ N   │Linux         │ N←─────→ M SSD Disk
│Users│        │            │      │ Filesystem│                 │Kernel│                       │Filesystem    │            Magnetic Disk
                   ^                    ^                           ^                           │implementation│            RAID of 2+ disks
                   |                    │                           │                               ^                       IP conection
                - Shell,           - Isolate Apps                 Takes care of                     │                       Device Mapper
                - explorer GUI       from the internals           all complexities                  │                       ...
                - DDBB               of blocks device.            of FS implementa.               - ext4 (standard)
                - ...              - Apps will see files          and concurrent                  - xfs  (high load)
                                      distributed in a tree        of FS implementa.               - f2fs (flash memory)
                                      of parent/children           disk by different apps          - nfs  (remote network fs)
                                     directories.                                                 
                                   - i-nodes
                                   - symbolic links 
                                     (if supported by implemen.)
                                   - FileºCacheº                                                  -ºBlock buffersº
                                          ^^^^^───────────     vmstat will show  ─────────────────────────^^^^^^^
                                                            realtime cache/buffers 
                                                    - Kernel tries to cache as much user-space data as possible
                                                      and just enough buffers for predicted "next-reads" to block devices

   ºNOTE:º Some advanced applications like databases can directly claim access to the block-device,
           skipping kernel control and taking ownership of the device. Such a block-device
           will not be visible in the file-system or accessible to any other
           application. System admins can also skip the standard filesystem and access the
           block-device directly through the special /dev/ paths (but this is discouraged 99% of the time).
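
   The file cache and block buffers mentioned in the diagram above can be
   observed at runtime with standard tools, e.g.:
   $ free -h        ← "buff/cache" column: RAM currently used for block buffers + file cache
   $ vmstat 1 5     ← print memory/IO counters (incl. buff and cache) every second, 5 times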



Setup Disk Summary
$ sudo ºpartedº \           ← ºSTEP 1) Partition the disk (optional but recommended)º
    /dev/sdc \              ← Physical disk
    --align opt \           ← let 'parted' find the optimal start/end points
    mklabel msdos \         ← create the partition table (== disk label).
                              'msdos' or 'gpt' are very compatible/popular labels
    mkpart primary 0 4G     ← start/end of the partition. Can be slightly
                              shifted/tuned to adjust to the underlying disk
                              technology thanks to '--align opt'

$ sudo mkfs.ext4 \          ← ºSTEP 2) Create a filesystemº
    -L PicturesAndVideos \    - ext4 is a popular filesystem. Recommended for
    /dev/sdc1                   desktops and small/medium servers.
                              - xfs  is preferred for "big" systems with many
                                apps running concurrently.
                              - f2fs is preferred for flash-based systems.

$ sudo ºmountº \            ← ºSTEP 3) Mount it. Add also to /etc/fstab to persist on rebootsº
    /dev/sdc1 \             ← Partition
    -t auto \               ← auto-detect type
    /opt/Media              ← Visible path to apps/users in the file-system

For RAID systems the system admin first creates a virtual RAID device /dev/md0
composed of many disks (/dev/sda, /dev/sdb, ...). STEP 1 is then done on the
virtual RAID device.
Annotated /etc/fstab
$ cat /etc/fstab | nl
→ 1  /dev/mapper/fedora-root                        /             ext4  defaults           1 1
→ 2  UUID=735acb4c-29bc-4ce7-81d9-83b778f6fc81      /boot         ext4  defaults           1 2
→ 3  LABEL=backups                                  /mnt/backups  xfs   defaults           1 2
→ 4  /dev/mapper/fedora-home                        /home         ext4  defaults           1 2
→ 5  /dev/mapper/fedora-swap                        swap          swap  defaults           0 0
→ 6  PARTUUID=8C208C30-4E8F-4096-ACF9-858959BABBAA  /mnt/ddbb     xfs   defaults           1 2
→ 7  "some network attached device"                 /var/.../     xfs   defaults,nofail*3  1 2

  Fields (left to right):
  1) partition device (/dev/...), partition UUID, partition LABEL or
     PARTUUID (GPT) identifying the partition.
     UUIDs *2 are preferred over /dev/sd... names like /dev/sdb3, since
     device names can change for USB or pluggable devices.
  2) mount point in the FS tree hierarchy.
  3) FS type.
  4) CSV mount options for the FS type. Different FSs can have different
     mount options, plus ones common to all FSs *1.
  5) used by dump(8) to determine which ext2/3 FSs need backup (defaults to 0).
  6) determines the FS check (fsck) order during boot (0 disables the check):
     - root (/) should be 1
     - other FSs should be 2

*1 "defaults" mount options common to all File System types:
   rw     : Read/write vs 'ro' (read only)
   suid   : Allow set-user-ID | set-group-ID bits (nosuid: Ignore them)
   dev    : Interpret character or block special devices
   exec   : Permit execution of binaries
   auto   : Mount automatically at boot / with 'mount -a' (vs noauto)
   nouser : Do NOT allow ordinary users to mount the filesystem.
   async  : Do not force synchronous I/O to the file system.
            (WARN: sync may cause life-cycle shortening in flash drives)

*2 Next command lists the UUIDs of partitions:
   $ blkid
   → /dev/sda1: LABEL="storage" UUID="60e97193-e9b2-495f-8db1-651f3a87d455" TYPE="ext4"
   → /dev/sda2: LABEL="oldhome" UUID="e6494a9b-5fb6-4c35-ad4c-86e223040a70" TYPE="ext4"
   → /dev/sdb1: UUID="db691ba8-bb5e-403f-afa3-2d758e06587a" TYPE="xfs"
   ...          ^^^^
                actually tags the filesystem (vs the partition itself)

*3 TODO: Differences between nobootwait and nofail:
   nofail: allows the boot sequence to continue even if the drive fails to mount.
           On cloud systems it usually allows for ssh access in case of failure.
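
Before rebooting it is worth validating the edited /etc/fstab. A quick check
(findmnt --verify requires a reasonably recent util-linux):
$ sudo findmnt --verify    ← sanity-check /etc/fstab entries
$ sudo mount -a            ← try to mount everything listed in /etc/fstab right now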
Device Mapper
@[https://en.wikipedia.org/wiki/Device_mapper]
- Kernel framework mapping virtual block devices to one (or more) physical
  block devices.
- Optionally it can process and filter the data in transit.

ºDMSETUP - LOW LEVEL LOGICAL VOLUME MANAGEMENTº
@[https://linux.die.net/man/8/dmsetup]
C&P from @[https://wiki.gentoo.org/wiki/Device-mapper]
"""
Normally, users rarely use dmsetup directly. dmsetup is very low level.
LVM, mdtool or cryptsetup is generally the preferred way to do it,
as it takes care of saving the metadata and issuing the dmsetup commands.
However, sometimes it is desirable to deal with it directly:
sometimes for recovery purposes, or to use a target that hasn't yet been
ported to LVM.
"""

ºEXAMPLESº
- Two disks may be concatenated into one logical volume with a pair of
  linear mappings, one for each disk.
- The crypt target encrypts the data passing through the specified device,
  by using the Linux kernel's Crypto API.

ºFEATURESº
- The device mapper "touches" various layers of the Linux kernel's storage stack.
- Functions provided by the device mapper include linear, striped and error
  mappings, as well as crypt and multipath targets.

ºKERNEL FEATURES AND PROJECTS BUILT ON TOPº
Note: user-space apps talk to the device mapper via ºlibdevmapper.soº
      which in turn issues ioctls to the /dev/mapper/control device node.
- cryptsetup     : utility to set up disk encryption based on dm-crypt
- dm-crypt/LUKS  : mapping target providing volume encryption
- dm-cache       : mapping target providing creation of hybrid volumes
- dm-integrity   : mapping target providing data integrity, either using
                   checksumming or cryptographic verification, also used with LUKS
- dm-log-writes  : mapping target that uses two devices, passing through the
                   first device and logging the write operations performed to it
                   on the second device
- dm-verity      : validates the data blocks contained in a file system against
                   a list of cryptographic hash values, developed as part of the
                   Chromium OS project
- dmraid(8)      : provides access to "fake" RAID configurations via the device mapper
- DM Multipath   : provides I/O failover and load-balancing of block devices
                   within the Linux kernel
                   - allows configuring multiple I/O paths between server nodes
                     and storage arrays (separate cables|switches|controllers)
                     into a single mapped/logical device.
                   - Multipathing aggregates the I/O paths, creating a new device
                     that consists of the aggregated paths.
- Docker         : uses device mapper to create copy-on-write storage for
                   software containers
- DRBD           : Distributed Replicated Block Device
- kpartx(8)      : utility called from hotplug upon device map creation and deletion
- LVM2           : logical volume manager for the Linux kernel
- Linux version of TrueCrypt

ºDEVICE-MAPPER LOGICAL-TO-TARGET MAPPING:º
  MAPPED DEVICE (LOGICAL DRIVE)     MAPPING TABLE                        TARGET DEVICE PLUGIN (INSTANCE/s)
  ────────────────────────────     ─────────────────────────────────    ─────────────────────────────────
  logical device provided by the   entry1:                              - filters
  device-mapper driver.            mapped-device1 ←→ target-device1     - access physical devices
  It provides an interface to      └─ start address  └─ start address
  operate on.                                         └─ sector-length  Example plugins:
                                   entry2:                              - mirror   for RAID
  Ex:                              mapped-device2 ←→ target-device2     - linear   for LVM2
  - LVM2 logical volumes           └─ start address  └─ start address   - striped  for LVM2
  - dm-multipath pseudo-disks                         └─ sector-length  - snapshot for LVM2
  - "docker images"                entry3:                              - dm-multipath
                                   ...
                                   NOTE: 1 sector = 512 bytes

ºDATA FLOW:º
  App → (Data) → MAPPED DEVICE → DEVICE MAPPER    → TARGET-DEVICE    → Physical
                                 Route to target     PLUGIN instance   Block Device
                                 based on:
                                 - MAPPED-DEVICE
                                 - MAPPING-TABLE

  Data can also be modified in transit, which is done, for example, when the
  device mapper provides disk encryption or simulates unreliable hardware behavior.

ºAVAILABLE MAPPING TARGETSº
- cache     : allows creation of hybrid volumes, by using solid-state drives
              (SSDs) as caches for hard disk drives (HDDs)
- crypt     : provides data encryption, by using the kernel Crypto API
- delay     : delays reads and/or writes to different devices (testing)
- era       : behaves in a way similar to the linear target, while it keeps
              track of blocks that were written to within a user-defined
              period of time
- error     : simulates I/O errors for all mapped blocks (testing)
- flakey    : simulates periodic unreliable behaviour (testing)
- linear    : maps a continuous range of blocks onto another block device
- mirror    : maps a mirrored logical device, while providing data redundancy
- multipath : supports the mapping of multipathed devices, through usage of
              their path groups
- raid      : offers an interface to the Linux kernel's software RAID driver (md)
- snapshot  : (and snapshot-origin) used for creation of LVM snapshots,
              as part of the underlying copy-on-write scheme
- striped   : stripes the data across physical devices, with the number of
              stripes and the striping chunk size as parameters
- thin      : allows creation of devices larger than the underlying physical
              device, physical space is allocated only when written to
- zero      : equivalent of /dev/zero, all reads return blocks of zeros,
              and writes are discarded
NOTE: LVM can also be used to create RAID, but the approach looks to be less mature/supported.
Software RAID 0/1/2/...
 STEP 0│ D1=/dev/sda ; D2=/dev/sdb ;
       │ NAME="/dev/md0" # ← (DESIRED NAME for the array)
       │ MDADM_CREATE="sudo mdadm --create --verbose"
       │ RAID 0            │ RAID 1           │ RAID 5:           │ RAID 6/10
         ------------------+------------------+-------------------+----------
       │ $MDADM_CREATE \   │ $MDADM_CREATE \  │ $MDADM_CREATE \   │ $MDADM_CREATE \
       │ $NAME \           │ $NAME \          │ $NAME \           │ $NAME \
       │º--level=0º \      │ --level=1 \      │ --level=5 \       │ --level=6 \ (=10)
       │ --raid-devices=2  │ --raid-devices=2 │ --raid-devices=3  │ --raid-devices=4 \
       │ $D1 $D2           │ $D1 $D2          │ $D1 $D2 $D3       │ $D1 $D2 $D3 $D4
       │                                      │                   │
       │                                      │                   │
       │                                      │RºWARN:ºLow perf.  │ RAID 10 admits also
       │                                      │  in degraded mode │ an extra layout arg.
       │                                      │                   │ near, far, offset
       │
       │ NOTE: For RAID 1,5..  create will take a time.
       │ To monitor the progress:(ºman 4 md, section "RAID10"º)
       │ $ cat /proc/mdstat
       │ →  Output
       │ →  Personalities : [linear] ... [raid1] ....
       │ →  md0 : active raid1 sdb[1] sda[0]
       │ →     104792064 blocks super 1.2 [2/2] [UU]
       │ →    º[==˃..........]  resync = 20.2% º
       │ →    º(212332/1047920) finish=... speed=.../secº
       │ → ...
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 1│Ensure RAID was created properly
       │
       │$ cat /proc/mdstat
       │→  Personalities : [linear] [multipath] ...
       │→  md0 : active raid0 sdb[1] sda[0]
       │→        209584128 blocks super 1.2 512k chunks
       │→
       │→  ...
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 2│create ext4|xfs|... filesystem
       │
       │$ sudo mkfs.ext4 -F /dev/md0
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 3│Mount at will somewhere. Optionally add to /etc/fstab
       │
       │$ sudo mount /dev/md0 /var/backups
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 4│Keep layout at reboot
       │
       │$ sudo mdadm --detail --scan | \                      RºRAID 5 WARN:º  :
       │  sudo tee -a /etc/mdadm/mdadm.conf                     check again to make sure the array
       │                                                        has finished assembling. Because of
       │                                                        the way that mdadm builds RAID 5,
       │                                                        if the array is still building, the
       │                                                        number of spares in the array will
       │                                                        be inaccurately reported:
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 5│update initramfs, to make RAID available early at boot process
 (OPT.)│
       │$ sudo update-initramfs -u
 ──────┴────────────────────────────────────────────────────────────────────────────────────
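
 Day-2 checks and disk replacement follow the usual mdadm pattern. A sketch
 (device names below are just examples):
 $ sudo mdadm --detail /dev/md0                 ← health, state and member list of the array
 $ sudo mdadm /dev/md0 --fail /dev/sda \        ← mark a failing member as failed and
                       --remove /dev/sda          remove it from the array
 $ sudo mdadm /dev/md0 --add /dev/sdc           ← add the replacement disk (resync starts;
                                                  watch it in /proc/mdstat)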

Mirror existing disk with data
This section covers the typical case:

  INITIAL SCENARIO            →  DESIRED FINAL SCENARIO
  ────────────────────────────┼────────────────────────────────────────────────
BºDisk 1ºWithOºImportant dataº│BºDisk 1º, part of Qºnew RAID 1ºwithOºMirrored Important dataº
  (/dev/sda1)Oº──────────────º│                                    Oº───────────────────────º
GºDisk 2ºNew  Disk            │GºDisk 2º, part of Qºnew RAID 1ºwithOºMirrored Important dataº
  (/dev/sdb) Oº──────────────º│                                    Oº───────────────────────º

 Some care must be taken to avoid losing the data during the RAID creation procedure.
Add mirror to existing disk without deleting data:
 ──────┬────────────────────────────────────────────────────────────────────────────────────
 STEP 1│Create ºincomplete RAID 1º with missing disks:
       │$ mdadm --create --verbose \
       │     /dev/md0 \
       │     --level=1 \
       │     --raid-devices=2 Gº/dev/sdbº ºmissingº
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 2│ Format partition:
       │ $ mkfs.ext4 /dev/md0
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 3│ CopyOºImportant dataºfrom Bºexisting diskº to new array:
       │
       │ $ sudo mount /dev/md0 /mnt/newarray
        │ $ tar -C Bº/mnt/disk1WithDataº -cf - . | tar -C Qº/mnt/newarray/º -xf -
       │ RºWARN:º - Check that there are no errors in the execution. Maybe sudo is needed
       │          - Inspect visually the content of /mnt/newarray and get sure it contains
       │            all ourOºImportant dataº before continuing. Any other tests are welcome.
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 4│ add original disk to disk array
       │
       │ $mdadm /dev/md0 --add Bº/dev/sda1º ← RºWARN:º if STEP 3 fails or is skipped
       │                                             OºImportant dataº will be LOST!!!
 ──────┼────────────────────────────────────────────────────────────────────────────────────
 STEP 5│Keep layout at reboot
       │
       │$ sudo mdadm --detail --scan | \
       │  sudo tee -a /etc/mdadm/mdadm.conf
 ──────┴────────────────────────────────────────────────────────────────────────────────────

Freeing RAID Resources
ºPRE-SETUP (OPTIONAL) RESETTING EXISTING RAID DEVICESº
 ────────────────────────────────────────────────────

  Free physical storage devices to reassign to new data arrays.

  RºWARNING!!!!º  - any data stored will be lost
  RºWARNING!!!!º  - Backup your RAID data first

  $ cat /proc/mdstat  ← Find any active array
    →   Output
    →   Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
    →  ºmd0 : active raid0 sdc[1] sdd[0]º
    →         209584128 blocks super 1.2 512k chunks
    →  ...
  $ sudo umount /dev/md0          ← Unmount the array
  $ sudo mdadm --stop /dev/md0    ← STOP the array
  $ sudo mdadm --remove /dev/md0  ← REMOVE the array



  $ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT ← Find the devices used to build the array
   →  Output
   →  NAME     SIZE FSTYPE            TYPE MOUNTPOINT
   →  sda      100G                   disk
   →  sdb      100G                   disk
   →  sdc      100Gºlinux_raid_memberºdisk
   →  sdd      100Gºlinux_raid_memberºdisk
   →  vda       20G                   disk
   →  ├─vda1    20G ext4              part /
   →  └─vda15    1M                   part
   →  ...
      ^^^
      WARN: /dev/sd* name can change at reboot!

  $ sudo mdadm --zero-superblock /dev/sdc  ← zero the superblock to reset to normal
  $ sudo mdadm --zero-superblock /dev/sdd  ← zero the superblock to reset to normal

  $ vim /etc/fstab
    ...
    #º/dev/md0º/var/backups ext4 defaults,nofail,discard 0 0   ←  Comment out/remove any references

  $ vim /etc/mdadm/mdadm.conf
    ...
    # ARRAY /dev/md0 metadata=1.2 name=mdadmwrite:0 UUID=7...  ←  Comment out/remove the array definition

  $ sudo update-initramfs -u   ←  Update initramfs





Partition drives
@[https://opensource.com/article/18/11/partition-format-drive-linux]
Encryption
Storage encryption
- can be performed at the file system level or the block level.
- Linux file system encryption options include eCryptfs and EncFS, while
  FreeBSD uses PEFS.
- Block level or full disk encryption options include dm-crypt + LUKS on
  Linux and GEOM modules geli and gbde on FreeBSD.

- VeraCrypt
  free open source disk encryption software for Windows/OSX/Linux.
  Developed by IDRIX, based on
  TrueCrypt 7.1a. main features:
  - Creates virtual encrypted disk within a file and mounts
    it as a real disk.
  - Encrypts entire partition or storage device
  - Encrypts a partition or drive where Windows is installed
    (pre-boot authentication).
  - Encryption is automatic, real-time(on-the-fly) and transparent.
  - Parallelization and pipelining allow data to be read
    and written as fast as if the drive was not encrypted.
  - Encryption can be hardware-accelerated on modern processors.
  - Provides plausible deniability, in case an adversary forces
    you to reveal the password: Hidden volume (steganography)
    and hidden operating system.
Storage++
ºDºistributed
ºRºeplicated
ºBºlock
ºDºevice
https://www.tecmint.com/setup-drbd-storage-replication-on-centos-7/
The DRBD (stands for Distributed Replicated Block Device) is a distributed,
flexible and versatile replicated storage solution for Linux. It mirrors the
content of block devices such as hard disks, partitions, logical volumes etc.
between servers. It involves a copy of data on two storage devices, such that
if one fails, the data on the other can be used.

- It's a high-performance + low-latency low-level building block for block
  replication.

- Another alternative is Ceph, that also offers integration with OpenStack.
  """However, Ceph's performance characteristics prohibit its
     deployments in certain low-latency use cases, e.g., as backend for
     MySQL ddbbs"
- See also the DRBD4Cloud research project, aiming at increasing the applicability
  and functionality of DRBD for cloud markets.
  @[https://www.ait.ac.at/en/research-topics/cyber-security/projects/extending-drbd-for-large-scale-cloud-deployments/]
  """DRBD is currently storing up to 32 full data replicas on remote storage
     nodes. DRBD4Cloud will allow for the usage of erasure coding, which allows
     one to split data into a number of fragments (e.g., nine), such that
     only a subset (e.g., three) is needed to read the data. This will
     significantly reduce the required storage and upstream band-width
     (e.g., by 67 %), which is important, for instance, for geo-replication with
     high network latency."""
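
Back to DRBD itself: a rough sketch of a two-node resource file and the commands
to bring it up (host names, IPs and devices below are made-up examples, not taken
from the referenced article; exact syntax varies slightly between DRBD versions):

  # /etc/drbd.d/r0.res
  resource r0 {
    protocol C;                       # synchronous replication
    device    /dev/drbd0;             # virtual block device exposed to the FS/apps
    disk      /dev/sdb1;              # backing (lower-level) device on each node
    meta-disk internal;
    on node1 { address 192.168.0.1:7788; }
    on node2 { address 192.168.0.2:7788; }
  }

  # drbdadm create-md r0              ← initialize metadata (run on both nodes)
  # drbdadm up r0                     ← attach + connect the resource (both nodes)
  # drbdadm primary --force r0        ← on ONE node only: promote it and start initial sync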
Ceph
distributed storage solution with unified object
and block storage capabilities.
FreeNAS
"World's #1 storage OS"
- It can be installed on nearly any hardware to turn
  it into a network attached storage (NAS) device.
- Paid, supported enterprise solutions available under TrueNAS
NAS4Free
simplest and fastest way to create a centralized
and easily-accessible server for all kinds of data.
Key features:
- ZFS file system
- software RAID (levels 0, 1 or 5)
- disk encryption.
- OS: FreeBSD
Openfiler
Unified storage solution:
NAS + SAN storage.
Features:
- high availability/failover.
- block replication
- Web-based management.
Flash Storage
Detect
false Flash
@[https://www.linuxlinks.com/essential-system-tools-f3-detect-fix-counterfeit-flash-storage/]
2019-02-08  Steve Emms
- f3: Detect and fix counterfeit flash storage:
- f3 stands for Fight Flash Fraud, or Fight Fake Flash.

- flash memory storage is particularly susceptible to fraud.
- The most commonly affected devices are USB flash drives,
  but SD/CF and even SSD are affected.
- It’s not sufficient to simply trust what df reports, since it
  simply relays what the drive claims (which can be fake).
- Nor is using dd to write data a good test by itself.

- f3 is a set of 5 open source utilities that detect and
  repair counterfeit flash storage.
  - test media capacity and performance.
  - test real size and compares it to what the drive says.
  - open source implementation of the algorithm used by H2testw.

ºInstallationº
$ git clone https://github.com/AltraMayor/f3.git
$ cd f3
$ make                     # compile f3write, f3read
$ sudo make install        # installs to /usr/local/bin by default
$ make extra               # compile f3probe, f3fix, and f3brew
$ sudo make install-extra

ºUsageº
- f3write fills a drive with 1GB .h2w files to test its real capacity.
  -w flag lets you set the maximum write rate.
  -p show the progress made

- f3read: After you’ve written the .h2w files to the flash media,
  you then need to check the flash disk contains exactly the written
  files. f3read performs that checking function.

- f3probe is a faster alternative to f3write/f3read.
  particularly if you are testing high capacity slow writing media.
  It works directly over the block device that controls the drive.
  So the tool needs to be run with elevated privileges.
  It only writes enough data to test the drive.
  It destroys any data on the tested drive.

- f3fix
  Obviously if your flash drive doesn’t have the claimed specifications,
  there’s no way of ‘fixing’ that. But you can at least have the flash
  correctly report its capacity to df and other tools.
  - f3fix creates a partition that fits the actual size of the fake drive.

- f3brew
  f3brew is designed to help developers determine how fake drives work.


F2FS
(Flash-Friendly FS)
@[https://www.usenix.org/conference/fast15/technical-sessions/presentation/lee]

-  A New File System for Flash Storage
(Much better than EXT4)
Experimental results highlight the desirable performance of F2FS: on a
state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads
by up to 3.1x (iozone) and 2x (SQLite). It reduces the elapsed time of several
realistic workloads by up to 40%. On a server system, F2FS is shown to perform
better than EXT4 by up to 2.5x (SATA SSD) and 1.8x (PCIe SSD).
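
Creating and mounting an F2FS volume works like any other filesystem, assuming
the f2fs-tools package and kernel F2FS support are present (device name and
mount point below are examples):
$ sudo mkfs.f2fs -l flashdata /dev/sdX1     ← make the filesystem; '-l' sets the volume label
$ sudo mount -t f2fs /dev/sdX1 /mnt/flash   ← mount it like any other filesystem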
Cloud Storage
SSHFS FUSE
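- Mount a remote directory over plain SSH using FUSE. A minimal sketch
  (assuming the sshfs package is installed and SSH access to the host already works):
  $ mkdir -p ~/remote-host
  $ sshfs user@remote-host:/var/data ~/remote-host   ← mount remote /var/data locally
  $ fusermount -u ~/remote-host                      ← unmount when done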
GDrive FUSE
@[https://www.techrepublic.com/article/how-to-mount-your-google-drive-on-linux-with-google-drive-ocamlfuse/]
@[https://ask.fedoraproject.org/en/question/68813/any-good-client-for-google-drive/]
LVM
LVM Diagram:
 physical drive 1                                     logical-volume 1
 physical drive 2    N <-> 1    Volume-Group 1 <-> M  logical-volume 2
 physical drive 3                                     logical-volume 3
 ...                                                  ...
                     ^^^^^^^                 ^^^^^^^
                     The Vol.Group           The Vol.Group
                     acts as a logical       allows to dynamically
                     pool of physical        create logical volumes
                     devices                 isolated from the physical
                                             resources
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                     The Volume-Group acts as a relation
                     N-to-M between physical resources
                     and logical partitions

Quick Summary:
    CREATE       → ADD Physical drives → Add logical-volumes
    VOLUME-GROUP   to VOLUME-GROUP       to VOLUME-GROUP.
                                         ^^^^^^^^^^^^^^^^^^
                                         logical-volumes can map to:
                                          - company departments
                                          - test/pre/pro environments
                                         - ...
"Full-Journey" SETUP:
WARN: You need LVM tools installed. Mount alone is not able to mount LVM volumes

REF


ºPRE-SETUP:º
Format a Qºphysical drive /dev/sdxº to be included on the pool
# dd if=/dev/zero of=O*/dev/sdx* count=8196
# parted /dev/sdx print | grep Disk
Disk /dev/sdx: 100GB
# parted /dev/sdx mklabel gpt
# parted /dev/sdx mkpart primary 1s 100%

Note: Volume-group is synonym of "physical-storage pool"

ºSTEP 1: Create an LVM pool (and add a first physical disk to it)º
NOTE: Usually, you don't have to set up LVM at all since most distros
      default to creating a virtual "pool" of storage and adding your
      machine's hard drive(s) to that pool.

# vgcreate volgroup01 /dev/sdx1  # ← create new storage pool and
                                      aggregate disk-partition


ºSTEP 2: Create logical-volumesº
# lvcreate -L 49G --name vol0 volgroup01  # ← create logical-volume /dev/volgroup01/vol0
# lvcreate -L 49G --name vol1 volgroup01  # ← create logical-volume /dev/volgroup01/vol1


ºSTEP 3: Switch volume-group onlineº
# vgchange --activate y volgroup01

ºSTEP 4: make the file systemsº
# mkfs.ext4 -L finance    /dev/volgroup01/vol0
# mkfs.ext4 -L production /dev/volgroup01/vol1
            ^^^^^^^^^^^^^
            label the drive
            In this case a logical-volume is used by department

ºSTEP 5: Mount the volumesº
# mount /dev/volgroup01/vol0 /mnt/vol0
# mount /dev/volgroup01/vol1 /mnt/vol1

ºSTEP 6: Adding space to the volume-groupº
# parted /dev/sdy mkpart primary 1s 100% # ← create partition on new physical-disk
# vgextend volgroup01 /dev/sdy1         # ← aggregate to volgroup01
# lvextend -L +49G /dev/volgroup01/vol0 # ← Extend the already-existing logical-volume
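
NOTE: after lvextend the filesystem inside the LV still has its old size and
must be grown as well (or use 'lvextend -r' to do both in one step).
A sketch for the ext4 case used above:
# lvextend -r -L +49G /dev/volgroup01/vol0   # ← '-r' also resizes the filesystem, or:
# resize2fs /dev/volgroup01/vol0             # ← grow an ext4 filesystem to the new LV size
                                             #   (use xfs_growfs on the mount point for xfs)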
Show LVM layout
ºvgdisplayº shows info about volume groups:
# vgdisplay
  --- Volume group ---
  VG Name               volgroup01
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <237.47 GiB
  PE Size               4.00 MiB
  Total PE              60792
  Alloc PE / Size       60792 / <237.47 GiB
  Free  PE / Size       0 / 0
  VG UUID               j5RlhN-Co4Q-7d99-eM3K-G77R-eDJO-nMR9Yg

ºlvdisplayº shows info about logical volumes:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/volgroup01/finance
  LV Name                finance
  VG Name                volgroup01
  LV UUID                qPgRhr-s0rS-YJHK-0Cl3-5MME-87OJ-vjjYRT
  LV Write Access        read/write
  LV Creation host, time localhost, 2018-12-16 07:31:01 +1300
  LV Status              available
  # open                 1
  LV Size                149.68 GiB
  Current LE             46511
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

[...]
Show PVs,VGs,LVs
º# pvs # ← Physical Volumesº
  PV         VG     Fmt  Attr PSize    PFree
  Oº/dev/sda2º  fedora lvm2 a--  <222.57g    0



º# vgs # ← Volume Groups º
  VG     #PV #LV #SN Attr   VSize    VFree
  Oºfedoraº   1   3   0 wz--n- <222.57g    0
º# lvs # ← Logical Volumesº
  LV   VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  Oºhomeº fedora -wi-ao---- <164.82g
  Oºrootº fedora -wi-ao----   50.00g
  Oºswapº fedora -wi-ao----    7.75g
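
- A quick way to see the whole disk→PV→VG→LV stack is lsblk. Sketch of the
  kind of output to expect (device names and sizes depend on the machine):

# lsblk
→ NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
→ sda               8:0    0 223.6G  0 disk
→ └─sda2            8:2    0 222.6G  0 part
→   ├─fedora-root 253:0    0    50G  0 lvm  /
→   ├─fedora-swap 253:1    0   7.8G  0 lvm  [SWAP]
→   └─fedora-home 253:2    0 164.8G  0 lvm  /home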
Mount in rescue-mode
PRE-SETUP:
- the LVM toolchain must be ready for use in the rescue-mode environment
   (/usr/sbin directory mounted or similar)

STEPS:
# vgchange --activate y
(output will be similar to)
→ 2 logical volume(s) in volume group "volgroup01" now active
# mkdir /mnt/finance
# mount /dev/volgroup01/finance /mnt/finance
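
TIP (assuming the volume-group names are unknown in the rescue environment):
# vgscan            # ← scan all visible disks for volume groups
# lvs               # ← list the logical-volumes found, then activate
                        and mount them as shown above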
LVM + LUKS encryption
- LUKS stands for Linux Unified Key Setup, the standard disk-encryption toolchain.
- LVM integrates nicely with disk encryption.

LUKS encrypts full partitions (vs. individual files as in GnuPG, ...)

NOTICE/WARN: LUKS will prompt for a passphrase during boot.
             (unattended server autoboot will fail)


ºSTEP 1: format the partition with the "cryptsetup" commandº
# cryptsetup luksFormat /dev/sdx1
→ LUKS will warn that it's going to erase your drive: (Accept to continue)
→ A prompt will ask for a passphrase: (Enter it to continue)

The partition is encrypted at this point, but it contains no filesystem yet:
- In order to format it you must first un-lock it.

# cryptsetup luksOpen /dev/sdx1 mySafeDrive # ← Unlock before formating it.
                                ^^^^^^^^^^^
                                human-friendly name
                                will create a symlink
                                /dev/mapper/mySafeDrive
                                to auto-generated designator

→ LUKS will ask for the passphrase to un-lock the drive: (Enter it to continue)

- Check the volume is "OK":
# ls -ld /dev/mapper/mySafeDrive
→ lrwxrwxrwx. 1 root root 7 Oct 24 03:58 /dev/mapper/mySafeDrive → ../dm-4

ºSTEP 2: format with standard-filesystem (ext4,...)º
# mkfs.ext4 -o Linux -L mySafeExt4Drive /dev/mapper/mySafeDrive

ºSTEP 3: Mount the unitº
# mount /dev/mapper/mySafeDrive /mnt/hd
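
To detach the encrypted unit when done (a sketch, reversing the steps above):
# umount /mnt/hd
# cryptsetup luksClose mySafeDrive   # ← removes /dev/mapper/mySafeDrive and
                                         re-locks the partition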
Increase LV Size
ºSTEP 1: Mark physical disk partition as LVM:º
# fdisk -cu /dev/sdd   # ← cfdisk offers a visual ncurses alt. to fdisk
→ new partition ("n")
  → primary partition ("p")
    → Enter partition number (1-4)
      → Change type ("t") to Linux LVM  ("8e")
        → Check status ("p")
          → Write changes ("w")


ºSTEP 2: create new PV (Physical Volume)º
# pvcreate /dev/sdd1
  → Verify the pv:
    # pvs

ºSTEP 3: Extending Volume-Groupº
# vgextend vg_tecmint /dev/sdd1
  → Verify it:
  # vgs


ºSTEP 4: Increase Logical Volume Sizeº
4.1: Check available free space in volume-group:
# vgdisplay | grep -i "Free"
 → Free  PE / Size       4607 / 18GB
                         ^^^^^^^^^^^
                         max size a logical volume
                         can be extended to.

4.2: Extend the volume:
# lvextend -l +4607 /dev/vg_tecmint/LogVol01
              ^
              Use '+' to add more space.

4.3: Check changes:
# lvdisplay

ºSTEP 5: re-size the file-systemº
# resize2fs /dev/vg_tecmint/LogVol01
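
NOTE: steps 4.2 and 5 can be combined with lvextend's '-r' (--resizefs) flag,
      which calls the filesystem resize tool itself. Minimal sketch:
# lvextend -r -l +4607 /dev/vg_tecmint/LogVol01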
File Systems
EXT4
Features:
- metadata and journal checksums.
- timestamp resolution down to nanoseconds.
- EXT4 extents: an extent is described by its starting and ending place on the disk.
  Extents make it possible to describe a very long, physically contiguous file with
  a single inode pointer entry, significantly reducing the number of pointers needed
  for large files.
- New anti-fragmentation algorithms.

shrinking
on LVM
@[https://www.systutorials.com/124416/shrinking-a-ext4-file-system-on-lvm-in-linux/]
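
Minimal sketch of the procedure described in the link above (shrinking an ext4
logical-volume to 40G; sizes are only illustrative, and the FS must be unmounted):

# umount /mnt/vol0
# e2fsck -f /dev/volgroup01/vol0          # ← mandatory check before resizing
# resize2fs /dev/volgroup01/vol0 40G      # ← shrink the filesystem FIRST
# lvreduce -L 40G /dev/volgroup01/vol0    # ← then shrink the logical-volume
                                              (never below the filesystem size)
# mount /dev/volgroup01/vol0 /mnt/vol0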

Check errors|fragmentation
(usually fragmentation is really low in EXT"N" filesystems):
WARN: Be sure to use the -n flag, preventing fsck from taking any action on the file-system

# fsck -fn /dev/sda
→ ...
→ ...
/dev/sda: 613676/3040000 files (º0.3% non-contiguousº), 6838740/12451840 blocks
EXT4 Journal
- For performance reasons we do not want to write or sync every change to ext4 immediately.
- If the system crashes in the meantime, changes not yet written to ext4
  are lost unless the Journal is enabled.
- Every write/sync operation is written to the Journal first (not to ext4 first)
  and is finalized later (written to ext4 later). If the system crashes then,
  during recovery (usually on the next boot), the Journal is replayed back into ext4,
  so the changes are applied and not lost.

The Journal can be used in three different modes (mount option):

  - journal:   All data (both metadata and actual data) is written to the Journal first; the safest mode.
  - ordered:   The default mode. Data is written directly to ext4; only metadata goes through the Journal.
               Data is not protected, but metadata is protected against crashes.
  - writeback: Data can be written to ext4 before or after being written to the Journal.
               On a crash, recent data may be lost.
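
The journal mode is selected with the 'data=' mount option. Sketch (reusing
/dev/volgroup01/vol0 from the LVM section above as an example device):

# mount -o data=journal   /dev/volgroup01/vol0 /mnt/vol0   # ← safest, slowest
# mount -o data=writeback /dev/volgroup01/vol0 /mnt/vol0   # ← fastest, least safe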

Blocks are written to the Journal in the following sequence:

 1.- A Descriptor Block is written, containing information about the
     final locations of this operation.
 2.- A Data Block is written (real data or metadata).
 3.- A Commit Block is written. After this, the data can be sent to ext4.
     (Alternatively a Revocation Block will cancel the operation.)
     If the commit-block is not found when a replay happens (crash-recovery, ...),
     the data will not be written to ext4.
Tuning performance
REF: Kernel.org doc
   Contains the full list of mount opts, /proc and /sys entries

Mount options:
journal_async_commit    Commit block can be written to disk without waiting
            for descriptor blocks. If enabled, older kernels cannot
            mount the device. This option enables 'journal_checksum'
            internally.


commit=nrsec    (*) Ext4 can be told to sync all its data and metadata
            every 'nrsec' seconds. The default value is 5 seconds.
            This means that if you lose your power, you will lose
            as much as the latest 5 seconds of work (your
            filesystem will not be damaged though, thanks to the
            journaling).  This default value (or any low value)
            will hurt performance, but it's good for data-safety.
            Setting it to 0 will have the same effect as leaving
            it at the default (5 seconds).
            Setting it to very large values will improve
            performance.

inode_readahead_blks=n  This tuning parameter controls the maximum
            number of inode table blocks that ext4's inode
            table readahead algorithm will pre-read into
            the buffer cache.  The default value is 32 blocks.

stripe=n    Number of filesystem blocks that mballoc will try
            to use for allocation size and alignment. For RAID5/6
            systems this should be the number of data
            disks *  RAID chunk size in file system blocks.

min_batch_time=usec This parameter sets the commit time (as
            described above) to be at least min_batch_time.
            It defaults to zero microseconds.  Increasing
            this parameter may improve the throughput of
            multi-threaded, synchronous workloads on very
            fast disks, at the cost of increasing latency.
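
Example /etc/fstab entry combining some of the options above (a sketch; the
values are illustrative, not recommendations):

/dev/volgroup01/vol0  /mnt/vol0  ext4  defaults,commit=60,inode_readahead_blks=64  0  2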
XFS
@[https://www.systutorials.com/docs/linux/man/8-xfs_admin/]

SEE ALSO
mkfs.xfs(8), mount(8), xfs_db(8), xfs_growfs(8), xfs_repair(8), xfs(5).
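
Quick usage sketch (assuming an XFS filesystem on /dev/volgroup01/vol1; names
reused from the LVM section above):

# mkfs.xfs /dev/volgroup01/vol1                 # ← create the filesystem
# xfs_admin -L production /dev/volgroup01/vol1  # ← set the label (FS must be unmounted)
# mount /dev/volgroup01/vol1 /mnt/vol1
# lvextend -L +10G /dev/volgroup01/vol1         # ← grow the LV, then ...
# xfs_growfs /mnt/vol1                          # ← ... grow XFS online (it takes the
                                                    mount point; XFS cannot be shrunk)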
Stratis
@[https://opensource.com/article/18/4/stratis-lessons-learned]
@[https://stratis-storage.github.io/StratisSoftwareDesign.pdf]
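
Minimal usage sketch (assuming the stratisd service is running and /dev/sdx is
a spare disk; device, pool and filesystem names are illustrative):

# stratis pool create mypool /dev/sdx     # ← create a pool from one block device
# stratis filesystem create mypool fs1    # ← create a (thin, XFS-backed) filesystem
# mount /dev/stratis/mypool/fs1 /mnt/data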
Linux FS Hierarchy
Full FS Hierarchy
@[https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/index.html]
/etc
@[http://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/etc.html]
SElinux
Ext.Refs
- [Selinux WiKi@GitHub],
- [Setools WiKi@GitHub],
- [CIL     WiKi@GitHub],

- [Ref.Policy],
- [(Book) SELinux by Example]
- [(Book) The SELinux Notebook - The Foundations]
- [Stop disabling SELinux]
Summary
┌──────────────────────────┬──────────────────────────────────────┐┌─────────────────────┬────────────────────────────────────────┐
│ºSE─LINUX GLOBAL SUMMARYº:│                                      ││ DAC+MAC FLOW SUMMARY│                                        │
│──────────────────────────┘                                      │├─────────────────────┘                                        │
│Bº  KERNEL º                                                     ││ USER          ┌─────────→ ┌─────────┐                        │
│BºOBJECTS CLASSESº                                               ││ SPACE         │ ┌───────→ │ PROCESS │ ←─────────────────┐    │
│  ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑                                               ││               │ │ ┌─────→ │"SUBJECT"│                   │    │
│Represent type─of─resources                                      ││               │ │ │       │ºtype1º  │                   │    │
│handled by kernel(vs app)  Each type─of─resource has a set of    ││               │ │ │       └─────────┘                   │    │
│that must be protected    ºHARDCODEDº list of Gºactionsº defined ││               │ │ │      º1)º  ↓                        │    │
│by Mandatory access Rules  for each se─linux kernel object class ││ ───────────── │ │ │ ───── system─call ───────────────── │ ───│
│  ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓       ↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓   ││ KERNEL        │ │ │            │☝request(hardcoded)     │    │
│BºFile        classº    ←→ [append, create, lock,         ... ]  ││ SPACE         │ │ │            │Gºaction1ºover targeted │    │
│GºProcess     classº    ←→ [dyntransition, ptrace, fork,  ... ]  ││               │ │ │            │object (labeled with    │    │
│BºINET Socket classº    ←→ [bind, read, ...               ... ]  ││               │ │ │            │SEContext Bºtype2º)     │    │
│Bºlogin       classº    ←→ [action1, action2, ...         ... ]  ││               │ │ │      º2)º  ↓                        │    │
│Bºuser        classº    ←→ [action1, action2, ...         ... ]  ││               │ │ │          Lookup                     │    │
│   ...                                                           ││               │ │ │           data                  º8)º│Sys.│
│                                                                 ││               │ │ │            │            ┌────────┐  │call│
│─TheGºProcess objectsºdiffer from the rest in the sense that     ││               │ │ │ RºKOºº3)º  ↓            │target  │  │Res.│
│they are the "active" kernel object triggering a new system call,││               │ │ └─────────  Error    ┌────│object  ├──┘    │
│probably as a result of a user intention to access some other    ││               │ │            Checks    │    │Bºtype2º│       │
│resource (kernel object instance in SELinux parlance)            ││               │ │              │       │    └────────┘       │
│                                                                 ││               │ │              │GºOKº  │º7)º                 │
│─BºFileºalike objects (Files, directories,...) must be first     ││               │ │        º4)º  ↓       │GºOKº:exec ºgaction1º│
│ "manually" labelled through extended file─system attributes.    ││               │ │   RºKOº     DAC      │on target object     │
│ Other object labelling is done by kernel automatically          ││               │ └───────── permission  │                     │
│                                                                 ││               │              checks    │                     │
│ºKERNEL─OBJECT─INSTANCE CONTEXTº                                 ││               │RºKOº           │       │                     │
│                                                                 ││               │Rº(audit)ºº7)º  ↓       │                     │
│ ┌── SELinux  context ────┐                                      ││               └─────────────  LSM  ────┘                     │
│  user:role:"Oºtypeº":level                                      ││                              hooks                           │
│ ☝                                                               ││                          º5)º │ ^ º6)º                       │
│ ─ ºAllº Kernel Object  ºinstancesº areºlabeledºwith             ││             ºtype1º is allowed│ │GºOKº      ☜ OºMACº         │
│ a 4─tuple ºSELINUX─CONTEXTº (user:role:type:level)              ││                Gºaction1º over│ │RºKOº        OºTYPEº        │
│                                                                 ││                      Bºtype2º?↓ │             OºENFORCEMENTº │
│ ─The Oºtypeº is the core data used for MACOºTYPE─ENFORCEMENTº   ││                           ┌───────────┐                      │
└─────────────────────────────────────────────────────────────────┘│                           │ Security  │                      │
┌────────────────────────────┬───────────────────────────────┐     │                           │Enhance(SE)│                      │
│ºINITIAL PRE─SETUP SUMMARY:º│                               │     │                           │  Server   │                      │
├────────────────────────────┘                               │     │                           └───────────┘                      │
│ Define SE-Linux System policy with a flow similar to:      │     │                               ☝                              │
│                                                            │     │        The server will first query the AVC(ache) and return  │
│ create modules → load into → Init bitmap─"matrix"          │     │        OK/KO if a match is found.                            │
│                  kernel      context1─to─[context2,action] │     │        If nothing is found the server will:                  │
│                              ↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑↑ │     │        1) work over the AV─hash table for matching rules     │
│                                        Access Vector  (AV) │     │        2) calculate the result,                              │
└────────────────────────────────────────────────────────────┘     │        3) put the result back in AVC(ache)                   │
                                                                   │        4) Return with OK/KO                                  │
                                                                   └──────────────────────────────────────────────────────────────┘
┌──────────────────────────┬───────┐ ┌─────────────────────┬───────────────────────────────────────────────────────┐
│ ºSELinux bootup sequenceº│       │ │*/etc/selinux/config*│                                                       │
├──────────────────────────┘       │ ├─────────────────────┘                                                       │
│ kernel start →                   │ │SELINUX=enforcing    ← ─ ºenforcing º (Enables MAC)                          │
│ load policy into memory          │ │                       ─ ºpermissiveº (Use for tracing/debugging new apps)   │
│   (organized in modules)         │ │                       ─ "some other value that you must never use"          │
│                                  │ │SELINUXTYPE=targeted ← ─ ºtargeted  º Apply MAC to targeted processes        │
│ºsemoduleº command manages the    │ │                                      (listening for      remote connections)│
│install, remove, reload, upgrade, │ │                         ºminimum   º Subset of targeted:                    │
│enable or disable of modules      │ │                                      Only selected processes are protected. │
│Ex. lists modules currently loaded│ │                         ºmls       º Multi Level Security protection.       │
│~ sudo semodule ─l                │ │                                      (Rarely used)                          │
│→abrt       1.2.0                 │ └─────────────────────────────────────────────────────────────────────────────┘
│→accountsd  1.0.6                 │
│...                               │
└──────────────────────────────────┘
┌─────────────────────────────────┐
│QUERY SOURCE TO TARGET RELATIONS:│
├─────────────────────────────────┴─────────────────────────────────────────────────────┐
│#~ sesearch --allow \                  ← Show allowed                                  │
│   --source Oºhttpd_t            º                                                     │
│   --target Oºhttpd_sys_content_tº                                                     │
│   --class  Bºfile               º     ← kernel selinux object class                   │
│ → Found 4 semantic av rules:                                                          │
│ → allow Oºhttpd_t httpd_sys_content_tº : Bºfileº Gº{ ioctl read getattr lock open }º ;│
│ → allow Oºhttpd_t httpd_content_type º : Bºfileº Gº{ ioctl read getattr lock open }º ;│
│ → allow Oºhttpd_t httpd_content_type º : Bºfileº Gº{ ioctl read getattr lock open }º ;│
│ → allow Oºhttpd_t httpdcontent       º : Bºfileº Gº{ ioctl read write create .... }º ;│
└─────────────────────────────────┴─────────────────────────────────────────────────────┘

┌────────────────────────────────────────┬──────┐ ┌─────────────┬────────────────────────────────────┐
│ºDISPLAYING SELINUX CONTEXT ATTRIBUTES:º│      │ │ºsestatus(8)º│                                    │
├────────────────────────────────────────┘      │ ├─────────────┘                                    │
│- Use "─Z" flag to shell commands              │ │ Return various status info, such as enforcing    │
│┌────────────────────────────────────────────┐ │ │ mode, current policy version and name, ...       │
││FILE SYSTEM OBJECT (files, dirs,..)         │ │ │ $ sudo ºsestatusº                                │
││$ºls ─ldZº                                  │ │ │ → SELinux status:                 enabled        │
││  /usr/sbin/httpd   → Oºhttpd_exec_t       º│ │ │ → SELinuxfs mount:                /sys/fs/selinux│
││  /var/www/html/    → Oºhttpd_sys_content_tº│ │ │ → SELinux root directory:         /etc/selinux   │
││  /etc/apache2/     → Oºhttpd_config_t     º│ │ │ → Loaded policy name:             targeted       │
││  /var/log/httpd/   → Oºhttpd_log_t        º│ │ │ → Current mode:                   permissive     │
││  /etc/init.d/httpd → Oºhttpd_initrc_exec_tº│ │ │ → Mode from config file:          error (Success)│
│├────────────────────────────────────────────┤ │ │ → Policy MLS status:              enabled        │
││SUBJECTS (running processes)                │ │ │ → Policy deny_unknown status:     allowed        │
││$ºps axZ │ grep [h]ttpdº                    │ │ │ → Max kernel policy version:      28             │
││unconfined_u:system_r:Oºhttpd_ºt:s0 ...     │ │ └──────────────────────────────────────────────────┘
│├────────────────────────────────────────────┤ │ ┌────────────────┬────────────────────────────────┐
││SOCKET                                      │ │ │showºAVC stats:º│                                │
││$ sudo netstat ─tnlpZ │ grep httpd          │ │ ├────────────────┘                                │
││....  unconfined_u:system_r:Oºhttpd_ºt:s0 ..│ │ │$ sudOºavcstatº                                  │
│├────────────────────────────────────────────┤ │ │lookups    hits  misses  allocs reclaims   frees │
││PORT                                        │ │ │6688846 5637360 1051486  051486   968960 1050979 │
││$ sudo semanage port ─l │ grep http         │ │ └─────────────────────────────────────────────────┘
││http_cache_port_t  tcp 3128,8080, ...       │ │ ┌──────────────────────┬────────────────────────┐
││  ...                                       │ │ │ SHELL SCRIPTS SUPORT:│                        │
││Oºhttp_port_tº       tcp 80, 443            │ │ ├──────────────────────┘                        │
│└────────────────────────────────────────────┘ │ │ºgetenforce(8)º     Returns status like:       │
└───────────────────────────────────────────────┘ │                    "permissive" │ "enforcing" │
                                                  │                                               │
                                                  │ºselinuxenabled(1)º exits with 0 if enabled    │
                                                  │                               1 if not        │
                                                  └───────────────────────────────────────────────┘
Nomenclature
┌─────────────────┬────────┬────────────────────────────────────────────────────────────────────────┐
│ TERMINOLOGY     │ACRONYM │    DESCRIPTION                                                         │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Access  Vector  │AV      │ - bit map representing a set of permissions such as open, read, ...    │
│                 │        │ - Each policy defines a different AV.                                  │
│                 │        │ - Actually is implemented as a hash table where the key is the tuple   │
│                 │        │   (source-type, targeted-type, targeted-kernel-object-class)           │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Access Vector   │AVC     │ ─ SELinux Security  Server can take a time to calculate                │
│ Cache           │        │   access decisions based on SE─rules.                                  │
│                 │        │ ─ The AVC stores such decisions to speed up following                  │
│                 │        │   accesses, avoiding recomputation.                                    │
│                 │        │ ─ two AVCs exists:                                                     │
│                 │        │   ─ 1.kernel AVC caching decisions from Security Server                │
│                 │        │       on behalf of kernel based object managers.                       │
│                 │        │   ─ 2.userspace AVC built into libselinux that caches                  │
│                 │        │       decisions when SELinux─aware applications use                    │
│                 │        │       avc_open(3) with avc_has_perm (3) or avc_has_perm_noaudit(3)     │
│                 │        │       function calls saving kernel calls after first                   │
│                 │        │       decision has been made.                                          │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Bell─La Padula  │BLP     │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Common  Criteria│CC      │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Common          │CIL     │                                                                        │
│ Intermediate    │        │                                                                        │
│ Language        │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Discretionary   │DAC     │                                                                        │
│ Access          │        │                                                                        │
│ Control         │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ SELinux         │        │ ─ consists of one or more processes associated                         │
│ Domain          │        │   to the type component of a Security Context.                         │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Flux  Advanced  │FLASK   │ ─ See Flux  Research  Group  (http://www.cs.utah.edu/flux/)            │
│ Security Kernel │        │   μ─kernel Environment (Fluke)                                         │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Linux Security  │LSM     │ ─ framework providing hooks into kernel components                     │
│ Module          │        │   (e.g. disk, net─services,...) used by                                │
│                 │        │   security modules (SELinux, ....) to perform                          │
│                 │        │   access control checks.                                               │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Mandatory Access│MAC     │ ─ access control mechanism enforced by the system,                     │
│ Control         │        │   e.g. 'hard─wiring' the OS and applications or                        │
│                 │        │   via policies enforced by the administrator.                          │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Multi─Category  │MCS     │                                                                        │
│ Security        │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Multi─Level     │MLS     │ ─ Based on Bell─La Padula model for                                    │
│ Security        │        │   confidentiality in that (for example) a                              │
│                 │        │   process running at a 'Confidential' level                            │
│                 │        │   can read / write at their current level but                          │
│                 │        │   only read down levels or write up levels.                            │
│                 │        │ ─ "Today" it is more commonly used for                                 │
│                 │        │   application separation utilising the                                 │
│                 │        │   Multi─Category Security variant.                                     │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ SELinux Policy  │        │ - Set of (thousands of) rules that define the type-enforcement rules   │
│                 │        │   in the AV bitmap matrix/hashtable                                    │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Object  Manager │OM      │ ─ Userspace & kernel components responsible for the management         │
│                 │        │   (object labeling, creation, access, destruction)                     │
│                 │        │   of the SELinux objects under their control.                          │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Security        │SID     │                                                                        │
│ Identifier      │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Simplified      │SMACK   │                                                                        │
│ Mandatory       │        │                                                                        │
│ Access Control  │        │                                                                        │
│ Kernel          │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Super─user      │SUID    │                                                                        │
│ Identifier      │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Type Enforcement│TE      │ ─ set of rules declared in Policy that describe                        │
│                 │        │   how the domain will interact with objects                            │
│                 │        │ ─ In practice: the AV bit─map used to check                            │
│                 │        │   where type1 is allowed actionN over type2                            │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ User  Identifier│UID     │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ X(window) Access│XACE    │                                                                        │
│ Control         │        │                                                                        │
│ Extension       │        │                                                                        │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Security Server │        │   A sub─system in the Linux kernel that makes access decisions         │
│                 │        │   and computes security contexts based on Policy on behalf of          │
│                 │        │   SELinux─aware applications and Object Managers.                      │
│                 │        │   The Security Server does not enforce a decision, it merely           │
│                 │        │   states whether the operation is allowed or not according to the      │
│                 │        │   Policy. It is  the  SELinux─aware  application  or Object            │
│                 │        │   Manager responsibility to enforce the decision.                      │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Security Context│        │   An SELinux Security Context is a variable length string that         │
│                 │        │   consists  of  the  following  mandatory  components                  │
│                 │        │   user:role:type and an optional [:range] component.                   │
│                 │        │   Generally abbreviated  to 'context', and sometimes  called a         │
│                 │        │   'label'.                                                             │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Security        │SID     │   SIDs are unique opaque integer values mapped by the kernel           │
│ Identifier      │        │   Security Server and userspace AVC that represent a Security Context. │
│                 │        │   The SIDs generated by the kernel Security Server are u32             │
│                 │        │   values that are  passed via the Linux Security Module                │
│                 │        │   hooks to/from the kernel Object Managers.                            │
│─────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
│ Type Enforcement│        │   SELinux makes use of a specific style of type enforcement            │
│                 │        │   (TE) to enforce Mandatory Access Control. This is where all          │
│                 │        │   subjects and objects have a type identifier associated to them       │
│                 │        │   that can then be used to enforce rules laid down by                  │
│                 │        │   Policy                                                               │
│ ────────────────┼────────┼────────────────────────────────────────────────────────────────────────│
Object Classes
ºSELinux Kernel Object Classes:º
Reminder:
  - For nearly every linux object that must be protected there is a matching SELinux kernel class.
  - This class has a defined set of hardcoded actions that will be allowed/denied by a running policy.
  - Type enforcement will determine which "source"-context has which list of allowed-actions over
    a given class labeled with a "target"-context
  - Next follows a summary of classes and actions extracted from
https://github.com/SELinuxProject/refpolicy/blob/master/policy/flask/access_vectors
┌────────────────────────────┐                                                          ┌──────────────────────────┐
│ºFILE─ALIKE RELATED OBJECTSº│                                                          │ºNETWORK─RELATED OBJECTS.º│
├────────────────────────────┴───────────────────────────────────────────────────────┐  ├──────────────────────────┴────────────────────────────────────────────────────┐
│º│common file │class dir      │class file       │class lnk_file │class chr_file    º│  │ º│common socket           │class socket             │class rawip_socket º     │
│ │            │inherits file  │inherits file    │inherits file  │inherits file      │  │  │                        │ inherits socket         │inherits socket          │
│ │            │               │                 │               │                   │  │  │                       º│class netlink_socketº    │                         │
│ │ioctl       │add_name       │execute_no_trans │open           │execute_no_trans   │  │  │# inherited from file   │inherits socket          │node_bind                │
│ │read        │remove_name    │entrypoint       │audit_access   │entrypoint         │  │  │ioctl                  *│class packet_socket                                │
│ │write       │reparent       │execmod          │execmod        │execmod            │  │  │read                    │inherits socket         º│class unix_stream_socketº│
│ │create      │search         │open             │               │open               │  │  │write                  º│class key_socketº        │inherits socket          │
│ │getattr     │rmdir          │audit_access                     │audit_access       │  │  │create                  │inherits socket          │                         │
│ │setattr     │open           │                                 │                   │  │  │getattr                º│class unix_dgram_socketº │connectto                │
│ │lock        │audit_access                                                         │  │  │setattr                 │inherits socket          │newconn                  │
│ │relabelfrom │execmod                                                              │  │  │lock                                              │acceptfrom               │
│ │relabelto   │                                                                     │  │  │relabelfrom            º│class tcp_socket  │class node    │class netifº     │
│ │append                                                                            │  │  │relabelto               │inherits socket   │              │                 │
│ │map        º│class sock_file   │class fifo_file   │class fdº º│class blk_fileº    │  │  │append                  │                  │tcp_recv      │tcp_recv         │
│ │unlink      │inherits file     │inherits file     │           │inherits file      │  │  │map                     │connectto         │tcp_send      │tcp_send         │
│ │link        │                  │                  │use        │                   │  │  │# socket─specific       │newconn           │udp_recv      │udp_recv         │
│ │rename      │open              │open              │           │open               │  │  │bind                    │acceptfrom        │udp_send      │udp_send         │
│ │execute     │audit_access      │audit_access                  │audit_access       │  │  │connect                 │node_bind         │rawip_recv    │rawip_recv       │
│ │swapon      │execmod           │execmod                       │execmod            │  │  │listen                  │name_connect      │rawip_send    │rawip_send       │
│ │quotaon     │                  │                              │                   │  │  │accept                                     │enforce_dest  │dccp_recv        │
│ │mounton                                                                           │  │  │getopt                 º│class udp_socketº │dccp_recv     │dccp_send        │
│ │                                                                                  │  │  │setopt                  │inherits socket   │dccp_send     │ingress          │
└────────────────────────────────────────────────────────────────────────────────────┘  │  │shutdown                │                  │recvfrom      │egress           │
                                                                                        │  │recvfrom                │node_bind         │sendto                          │
                                                                                        │  │sendto                  │                                                   │
                                                                                        │  │recv_msg                                                                    │
                                                                                        │  │send_msg                                                                    │
                                                                                        │  │name_bind                                                                   │
                                                                                        └───────────────────────────────────────────────────────────────────────────────┘

┌──────────────────────────┐ ┌─────────────────────┐ ┌───────────────────┐  ┌──────────────┐
│ ºPROCESS─RELATED OBJECTSº│ │ ºCAPABILITY RELATEDº│ │ºSYSTEM OPERATIONSº│  │ºOTHERSº      │
├──────────────────────────┤ ├──────────────────┬──┘ ├───────────────────┤  ├──────────────┴───────────────────────────────────────────────────────────────────┐
│                          │ │                  │    │                   │  │º│common ipc       │common database    │common x_device         │class filesystemº│
│ º│class processº         │ │º│common capº     │    │º│class systemº    │  │ │                 │                   │                        │                 │
│  │                       │ │ │                │    │ │                 │  │ │create           │create             │/*pointer,keyboard*/    │mount            │
│  │fork                   │ │ │chown           │    │ │ipc_info         │  │ │destroy          │drop               │getattr                 │remount          │
│  │transition             │ │ │dac_override    │    │ │syslog_read      │  │ │getattr          │getattr            │setattr                 │unmount          │
│  │sigchld                │ │ │dac_read_search │    │ │syslog_mod       │  │ │setattr          │setattr            │use                     │getattr          │
│  │sigkill                │ │ │fowner          │    │ │syslog_console   │  │ │read             │relabelfrom        │read                    │relabelfrom      │
│  │sigstop                │ │ │fsetid          │    │ │module_request   │  │ │write            │relabelto          │write                   │relabelto        │
│  │signull                │ │ │kill            │    │ │module_load      │  │ │associate        │                   │getfocus                │transition       │
│  │signal                 │ │ │setgid          │    │ │halt             │  │ │unix_read                            │setfocus                │associate        │
│  │ptrace                 │ │ │setuid          │    │ │reboot           │  │ │unix_write                           │bell                    │quotamod         │
│  │getsched               │ │ │setpcap         │    │ │status           │  │ │                                     │force_cursor            │quotaget         │
│  │setsched               │ │ │linux_immutable │    │ │start            │  │                                       │freeze                                    │
│  │getsession             │ │ │net_bind_service│    │ │stop             │  │                                       │grab                                      │
│  │getpgid                │ │ │net_broadcast   │    │ │enable           │  │                                       │manage                                    │
│  │setpgid                │ │ │net_admin       │    │ │disable          │  │                                       │list_property                             │
│  │getcap                 │ │ │net_raw         │    │ │reload           │  │                                       │get_property                              │
│  │setcap                 │ │ │ipc_lock        │    └───────────────────┘  │                                       │set_property                              │
│  │share                  │ │ │ipc_owner       │                           │                                       │add                                       │
│  │getattr                │ │ │sys_module      │                           │                                       │remove                                    │
│  │setexec                │ │ │sys_rawio       │                           │                                       │create                                    │
│  │setfscreate            │ │ │sys_chroot      │                           │                                       │destroy                                   │
│  │noatsecure             │ │ │sys_ptrace      │                           │                                                                                  │
│  │siginh                 │ │ │sys_pacct       │                           │                                                                                  │
│  │setrlimit              │ │ │sys_admin       │                           │ ºOTHERSº                                                                         │
│  │rlimitinh              │ │ │sys_boot        │                           │    security, capability, X─Windows, Netlink, D─BUS,  nscd, IPSEC, dccp,memprotect│
│  │dyntransition          │ │ │sys_nice        │                           │    db_database/db_table/db_column/..., (network) peer, tun_socket, binder        │
│  │setcurrent             │ │ │sys_resource    │                           │    infiniband*, ...                                                              │
│  │execmem                │ │ │sys_time        │                           └──────────────────────────────────────────────────────────────────────────────────┘
│  │execstack              │ │ │sys_tty_config  │
│  │execheap               │ │ │mknod           │
│  │setkeycreate           │ │ │lease           │
│  │setsockcreate          │ │ │audit_write     │
│  │getrlimit              │ │ │audit_control   │
└──────────────────────────┘ │ │setfcap         │
                             └──────────────────┘
FS-LABELING
- When a new file-alike object is created, its context
  is copied (by default) from its parent directory.
- This behaviour can be modified with a type_transition rule
  in the policy.

$ sudOºchconº --type Oºvar_tº index.html  ← Changes context ºtemporarilyº
                                                          (FS relabel will revert changes)

$ sudOºrestoreconº -v index.html                    ← similar to fixfiles(8), suited
→ restorecon reset index.html context                 for individual file or dir. relabeling
  unconfined_u:object_r:Oºvar_tº:s0 →
  unconfined_u:object_r:Oºhttpd_sys_content_tº:s0

OTHERS:
ºfixfiles(8)     º  Relabels FS objects. By default it
                    relabels all mounted FSs that support
                    SELinux unless mounted with the
                    context mount option, automatically
                    determining the file sec.ctx specs
                    to use for the labeling.

ºgenhomedircon(8)º  Script for generating correct file ctx
                    specs for user's home directories.

See also:[[Troubleshooting+restorecon?]]


NOTE: SELinux context for remote FS can be specified ºat mount timeº.

ºVerify a file context against the file_contexts(.local) db (active policy)º
#~ºmatchpathconº  -V /www/html/index.html
                  ^^
                  verify: in case of mismatch an error similar to the next one is displayed:
                  /www/html/index.html has context unconfined_u:object_r:default_t:s0,
                                         should be     system_u:object_r:httpd_sys_content_t:s0
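
To make a non-default location's label persistent across relabels (a sketch;
the /www path and type are only illustrative):

#~ semanage fcontext -a -t httpd_sys_content_t "/www(/.*)?"  ← add rule to file_contexts.local
#~ restorecon -Rv /www                                       ← apply it to the existing files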
login objects
- SELinux users are not created with a command,
  nor do they have their own login access to the server.
- SELinux users areºdefined in the policyºloaded into
  memory at boot time, andºthere are only a few of these usersº
- Standard Linux users are mapped to SELinux users upon login
  according to selinux-login-objects defined by the policy and
  customizable through semanage login.

                          ┌──────────────────────────┐
                          │ Show 1─to─N relationship:│
                          ├──────────────────────────┴─────────────────────────────────────────────┐
                          │ #~ºsemanage user ─lº                                                   │
                          │ (Example CentOS 7 output)                                              │
                          │ →                 Labeling  ...                                        │
                          │ → SELinux User    Prefix    ... SELinux Roles                          │
                          │ →                                                                      │
                          │ → guest_u         user      ... guest_r                                │
                          │ → root            user      ... staff_r sysadm_r system_r unconfined_r │
                          │ → staff_u         user      ... staff_r sysadm_r system_r unconfined_r │
                          │ → sysadm_u        user      ... sysadm_r                               │
                          │ → system_u        user      ... system_r unconfined_r                  │
                          │ → unconfined_u    user      ... system_r unconfined_r                  │
                          │ → user_u          user      ... user_r                                 │
                          │ → xguest_u        user      ... xguest_r                               │
                          └─────────────────┬──────────────────────────────────────────────────────┘
                                            │
  ┌──────────┐          ┌────────────┐      ↓     ┌─────┐   ┌────────────────────┐
  │Linux User│← N─to─1 →│SELinux user│ ← 1─to─N → │roles│ → │process domain (*_t)│
  └──────────┘    ↑     └────────────┘      ↑     └─────┘ ↑ └────────────────────┘
                  │                         │             └─ A role "accesses a process domain" when the policy grants it.
               ┌──┴──────────────────────┐  └─────────────── A user "enters a role" when the policy grants it.
               │show N─to─1 relationship:│
               ├─────────────────────────┴──────────────────────────────────────┐
               │#~ºsemanage login ─lº                                           │
               │(Example CentOS 7 output)                                       │
               │→ Login Name           SELinux User MLS/MCS Range        Service│
               │→                                                               │
               │→ __default__          unconfined_u s0─s0:c0.c1023       *      │
               │→ root                 unconfined_u s0─s0:c0.c1023       *      │
               │→ system_u             system_u     s0─s0:c0.c1023       *      │
               │→ ^^^^^^^^^^^          ^^^^^^^^^^^^ ^^^^^^^^^^^                 │
               │→ ^ linux user 1 ←→ 1  SE user      Multi Level/Category Sec.   │
               └────────────────────────────────────────────────────────────────┘

┌─────────────────────────┐
│ºsemanage login [opts]º  │
├─────────────────────────┴──────────────────────────────────────────┐
│map linux user to selinux user upon login by                        │
│Adding/modifying/deleting(and listing) login object types           │
│                                                                    │
│semanage login [opts]                                               │
│        º──add      º ─s "SEUSER" "linuxUser" [─r "MLC/MCS range"]  │
│        º──modify   º ─s "SEUSER"             [─r "MLC/MCS range"]  │
│        *──delete   "linuxUser"                                     │
│        º──deleteallº  # (all = local/non─policy─defined)           │
│        º──extract  º  # ← (for use within TX)                      │
│        º──list     º [──locallist]                                 │
│                       ^^^^^^^^^^^                                  │
│                       show local                                   │
│                       (non─defined─by─policy)                      │
│                       customizations                               │
│OTHER OPTIONS:                                                      │
│  ──noreload       ← Do NOT reload policy after commit              │
│  ──store STORE    ← Select alternate SELinux Policy Store          │
└────────────────────────────────────────────────────────────────────┘

Examples:
┌──────────────────────┐                          ┌──────────────────────────┐
│ºRestricting su/sudo º│                          │ºdisable script executionº│
├──────────────────────┴───────────────────────┐  ├──────────────────────────┴────────────────────┐
│ [regularuser@localhost ~]$ºid ─Zº            │  │By default, SELinux allows users mapped to the │
│ºunconfined_u:unconfined_r:unconfined_t:s0º   │  │guest_t account to exec $HOME/* scripts        │
│                                              │  │                                               │
│ regularuser@localhost ~]$ su ─ switcheduser  │  │#~ºgetseboolºBºallow_guest_exec_contentº       │
│ Password: XXXX                               │  │→ guest_exec_content ──→ on                    │
│ → [switcheduser@localhost ~]$                │  │                                               │
│                                              │  │[guestuser@localhost ~]$ ~/myscript.sh         │
│ #~ºsemanage login ─a ─s user_u regularuserº  │  │→ This is a test script                        │
│             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^  │  │                                               │
│             Remap linux_user to selinux user │  │#~ºsetseboolºBºallow_guest_exec_contentº off   │
│                                              │  │[guestuser@localhost ~]$ ~/myscript.sh         │
│ [regularuser@localhost ~]$ºid ─Zº            │  │→ bash: myscript.sh:RºPermission deniedº       │
│ºuser_u:user_r:user_t:s0º                     │  └───────────────────────────────────────────────┘
│                                              │
│ [regularuser@localhost ~]$ su ─ switcheduser │
│ Password: XXXX                               │
│ → su:RºAuthentication failureº               │
└──────────────────────────────────────────────┘
Process Context-Switch
- Kernel takes care of automatic process-context-labeling once
  a policy has been loaded.

- Next commands can be used to alter or display the process labeling:

  ºruncon(1)º   run command with given ctx (user, role and domain)

  ºsecon(1) º   See ctx from a file|program|user-input

  ºnewrole(1)º  creates a new shell running with a new sec.ctx.
                The user must specify the new role and/or type.
                The type is derived from the role if not specified.

  ºrun_init(8)º Runs an initrc script using the sec.ctx.
                found in the current policy's ctx/initrc_context
                file. Usually used to restart system
                services in the intended (new) domain
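
  Usage sketch (the contexts shown are illustrative and must exist in the
  loaded policy for the commands to succeed):

  $ secon -f /etc/passwd                ← print the sec.ctx components of a file
  $ runcon -t httpd_t /usr/bin/id -Z    ← run a command in a given domain
                                          (denied unless the policy allows it)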

RºPROCESS LABELING FLOW AT LOGIN:º TODO
- The login process sets a default context (unconfined for the targeted policy).
  Policy-defined context transitions will then change the context of child
  processes at runtime.

ºPROCESS CONTEXT TRANSITIONº
- A default context is given upon login defined by the
  policy login-objects (see [[semanage login?]] )

- Context Transition Flow:

                     app_1            → Execute /usr/bin/app_2 → app_2

  domain/type      Bºsrc_tº                   Oºapp_exec_tº    Gºtarget_tº
                      ↑                          ↑
  Requirements   1)- policy must define     2)- policy must define
                     exec permission to       Oºapp_exec_tº as entrypoint
                   Oºapp_exec_tº entry-point    to Gºtarget_tº
                 3)- policy must allow
                     transition from Bºsrc_tº
                     to Gºtarget_tº


  Check 1):            Check 2):            Check 3):
  #~ sesearch          #~ sesearch          #~ sesearch
     -s Bºsrc_tº          -s Gºtarget_tº       -s Bºsrc_tº
     -t Oºapp_exec_tº     -t Oºapp_exec_tº     -t Gºtarget_tº
     -c file              -c file              -c process       ← (selinux-kernel)class
     -p execute           -p entrypoint        -p transition    ← action required for class
     -Ad                  -Ad                  -Ad                (See [[selinux object classes?]])

Booleans
man 8 booleans
- Booleans allow customizing a given policy at runtime.
- Actions common to many apps that can be allowed/denied are "grouped"
  into booleans
  Ex:
  - """ Do we allow  the ftp server access to home directories? """
  - """ Can httpd use mod_auth_ntlm_winbind ? """
  - ...
┌───────────────────┐                               ┌─────────────────┐
│ºList Booleans Setº│                               │ºChange Booleansº│
├───────────────────┴─────────────────────────────┐ ├─────────────────┴──────────────────────────────┐
│ ─ Take a look at the "booleans.local" file under│ │$ sudOºgetsebool -aº         ← Show all booleans│
│   /etc/selinux/targeted/modules/active/         │ │                                                │
│   Ex:                                           │ │$ sudOºsetseboolº \          ← Set boolean      │
│   # This file is auto─generated by libsemanage  │ │  ─P "mySELinuxBoolean" 0│1                     │
│   # Do not edit directly.                       │ │  ☝(opt.flag.)                                  │
│                                                 │ │  Persist reboot                                │
│   httpd_read_user_content=1                     │ │                                                │
│   httpd_enable_homedirs=1                       │ │$ sudo ºtoggleseboolº \      ← toggle 1⇿0 value │
└─────────────────────────────────────────────────┘ │ ─P "mySELinuxBoolean"                          │
                                                    └────────────────────────────────────────────────┘

Ex@Stackoverflow: Fix Apache DNS problem:
$ sudOºsetseboolº -P nis_enabled 0
$ sudOºsetseboolº -P httpd_can_network_connect 1
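
To see what each boolean actually controls (sketch):
$ sudo ºsemanage boolean -lº              ← lists all booleans with state + description
$ sudo ºsemanage boolean -lº | grep httpd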
Troubleshooting
┌─────────────────────────────────┐                                  ┌──────────────────────────────┐
│ºQ: What an SELinux error means?º│                                  │ºBUG FIXING GENERAL PROCEDUREº│
├─────────────────────────────────┴────────────────────────────────┐ ├──────────────────────────────┴───────────────────────────────┐
║ A: It can mean:                                                  ║
║ ┌────────────────────────────────────┐                           ║ │  ...                                                         │
║ │ºA.1: file─system labeling is wrongº│                           ║ │  ...  setroubleshoot: SELinux is preventing /usr/bin/httpd \ │
║ ├────────────────────────────────────┴────────────────────────┐  ║ │       from getattr access on the directory /home/fred.     \ │
║ │Fix examples:                                                │  ║ │       *For complet SELinux messages, run                   \ │
║ │Ex 1:                                                        │  ║ │       sealert ─l 37acc7d8─e955─4333─123a─1d027dbcea72*       │
║ │  # sudo ºchconº ──reference /var/www/html ..../index.html   │  ║ │                                                              │
║ │Ex 2:                                                        │  ║ │ ºSTEP 02: Run the indicated commandº                         │
║ │  #ºrestoreconº─vR /var/www/html                             │  ║ │  ~# sealert ─l 37acc7d8─e955─4333─123a─1d027dbcea72          │
║ │    ^^^^^^^^^^                                               │  ║ │  → SELinux is preventing /usr/sbin/httpd from search access \│
║ │    uses info from                                           │  ║ │    on the directory /home/fred                               │
║ │    /etc/selinux/targeted/contexts/files/file_contexts, ...  │  ║ │  → ...                                                       │
║ │    to determine what a file or dir's ctx should be          │  ║ │  → OºDo                                     º                │
║ │Ex 3:                                                        │  ║ │  → Oºsetsebool ─P httpd_read_user_content 1 º                │
║ │  #ºsemanageº fcontext ─a ─e /var/www/ /my/alternative/www/  │  ║ │  → Oº...                                    º                │
║ │              ^^^^^^^^                                       │  ║ │  → Oºsetsebool ─P httpd_enable_homedirs 1   º                │
║ │     WARN: we are just defining what the context is,         │  ║ └──────────────────────────────────────────────────────────────┘
║ │           we are not writing what the                       │  ║ ┌──────────────────────────────────┐
║ │           extended attributes are. It just means:           │  ║ │ºAUDITING SELINUX ERROR MESSAGES.º│
║ │           "On relable it must look like this"               │  ║ ├──────────────────────────────────┴─────────────────────────────────────────────────────────┐
║ │  # restorecon ─vR /my/alternative/www                       │  ║ │ #~ ausearch   ─m ºavcº          ─c httpd                                                   │
║ │    ^^^^^^^^^^                                               │  ║ │    ☝              ☝                                                                        │
║ │    must be run to actually proceed with relabeling          │  ║ │    standard       filter by                                                                │
║ └─────────────────────────────────────────────────────────────┘  ║ │    linux          AVC related                                                              │
║ ┌────────────────────────────────────┐                           ║ │    audit          messaged                                                                 │
║ │ ºA.2: Policy needs to be tweakedº  │                           ║ │    framework                                                                               │
║ ├────────────────────────────────────┴────────────────────────┐  ║ │    tool                                                                                    │
║ │   ─ booleans                                                │  ║ │    (auditd daemon                                                                          │
║ │   ─ policy modules                                          │  ║ │     must be running)                                                                       │
║ └─────────────────────────────────────────────────────────────┘  ║ │ → ...                                                                                      │
║ ┌────────────────────────────────────┐                           ║ │ → time → Thu Aug 21 16:42:17 2014                                                          │
║ │ ºA.3: A bug in the policyº         │                           ║ │ → type=AVC msg=audit(1408603337.115:914): avc:  denied  { getattr } for  \                 │
║ ├────────────────────────────────────┴─────────────────────────┐ ║ │   pid=10204 comm="httpd" path="/www/html/index.html" dev="dm─0" ino=8445484 \              │
║ │  App vendors must supply policy modules for SELinux systems. │ ║ │   scontext=system_u:system_r:httpd_t:s0 \                                                  │
║ │  You must submit a ticket to the app vendor following steps: │ ║ │   tcontext=unconfined_u:object_r:default_t:s0 tclass=file                                  │
║ │─ STEP 1:                                                     │ ║ │   ☝ translates to:                                                                         │
║ │  # setenforce 0 ← change to "permissive" mode and run        │ ║ │     type=AVC: ...: avc: The message comes from the AVC log and it's an AVC event           │
║ │                   the application through all its paces      │ ║ │     denied { getattr }: The permission that was attempted and the result it got.           │
║ │                   to log ºall SELinux denialsº               │ ║ │                         In this case the get attribute operation was denied.               │
║ │─ STEP 2:                                                     │ ║ │     pid=10204           process id of the process that attempted the access.               │
║ │  From /var/log/messages Copy and Paste the proposed solution:│ ║ │     comm="httpd"        shows the process command for the pid                              │
║ │  # grep httpd /var/log/audit/audit.log │ \                   │ ║ │     path:               resource trying to be accessed.                                    │
║ │   ºaudit2allowº ─MOºquirrellocalº                            │ ║ │     dev :               device                                                             │
║ │    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                             │ ║ │     ino :               inode                                                              │
║ │    generate  new policy file                                 │ ║ │     scontext:           source security context of the process.                            │
║ │    (See also audit2why(1)                                    │ ║ │     tcontext:           target security context of the resource.                           │
║ │─ STEP 3:                                                     │ ║ │     tclass:             target resource class.                                             │
║ │  #ºsemodule ─iº Oºsquirrellocal.ppº ← Import the new module  │ ║ │                                                                                            │
║ │                                                              │ ║ │ ºsealert toolº                                                                             │
║ │─ STEP 4:                                                     │ ║ │ #~ cat /var/log/messages │ grep "SELinux is preventing"                                    │
║ │  # setenforce 1                     ← Re─enable enforcement  │ ║ │ → ...                                                                                      │
║ └──────────────────────────────────────────────────────────────┘ ║ │ → ... setroubleshoot: SELinux is preventing /usr/bin/su from using the setuid capability \ │
║                                                                  ║ │       For complete SELinux messages. run sealert ─l Oºe9e6c6d8─f217─414c─a14e─4bccb70cfbceº│
║ ┌───────────────────────────────────────────────────┐            ║ │ #~ sealert ─l Oºe9e6c6d8─f217─414c─a14e─4bccb70cfbceº                                      │
║ │RºA.4: You have been, or are being, broken into!!º │            ║ │ → SELinux is preventing /usr/bin/su from using the setuid capability.                      │
║ ├───────────────────────────────────────────────────┴──────────┐ ║ │ → ...                                                                                      │
║ │ ─ HOUSTON: We have a problem                                 │ ║ │ → Raw Audit Messages                                                                       │
║ └──────────────────────────────────────────────────────────────┘ ║ │ → type=AVC msg=audit(1408931985.387:850): avc:  denied  { setuid } for  pid=5855 \         │
╚══════════════════════════════════════════════════════════════════╝ │   comm="sudo" capability=7  scontext=user_u:user_r:user_t:s0 \                             │
                                                                     │   tcontext=user_u:user_r:user_t:s0 tclass=capability                                       │
                                                                     │ → type=SYSCALL msg=audit(1408931985.387:850): arch=x86_64 syscall=setresuid success=no \   │
                                                                     │   exit=EPERM a0=ffffffff a1=1 a2=ffffffff a3=7fae591b92e0 items=0 ppid=5739 pid=5855 \     │
                                                                     │   auid=1008 uid=0 gid=1008 euid=0 suid=0 fsuid=0 egid=0 sgid=1008 fsgid=0 tty=pts2 ses=22 \│
                                                                     │   comm=sudo exe=/usr/bin/sudo subj=user_u:user_r:user_t:s0 key=(null)                      │
                                                                     │ →                                                                                          │
                                                                     │ → Hash: su,user_t,user_t,capability,setuid                                                 │
                                                                     │ →                                                                                          │
                                                                     └────────────────────────────────────────────────────────────────────────────────────────────┘
System monit
Alert/Monitoring best-patterns
- Simple e-mail/telegram alerts do NOT scale:
  you will receive many useless notifications for proper behaviour
  (correct logins, ...).

  - ºA proper monitoring system will notify only when there's a real problem:º
  - Enable messaging agent (email, telegram,...)
  - Enable cpu/mem/net/disk metric inputs and the (Prometheus) output.
  - Run a single Prometheus server that scrapes each endpoint every minute
    (minimal config sketch below).
  - Add Grafana for nice graphs.
  - Add Alert Manager and set up thresholds to notify you:
    -  via OpsGenie/PagerDuty/VictorOps/Slack.
  - Check graphs once a week, unless you get notifications.
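    Minimal Prometheus-side sketch (assumed file name prometheus.yml; assumes
    node_exporter-style agents already listening on port 9100 of each host,
    host names are illustrative):
    | # prometheus.yml (sketch)
    | global:
    |   scrape_interval: 60s                # scrape each endpoint every minute
    | scrape_configs:
    |   - job_name: node
    |     static_configs:
    |       - targets: ['host1:9100', 'host2:9100']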

- You can "start small" and use Monit:
  - single binary that can be configured to watch stuff and react to problems.
    - Start a service when stopped.
    - Clean up when disk is full.

- "Sensu": Slightly better (and more complex) than Monit, allowing to:
  - run any script as a check
  - fix stuff if it's broken for X consecutive checks
  - alert if it's still broken Y consecutive checks.
  - Sensu can also alert to pretty much anything.
    "I wrote handlers that alert to Slack, Jira and a pager."
Glances
@[https://www.tecmint.com/glances-an-advanced-real-time-system-monitoring-tool-for-linux/]
PCP (Performance Co-Pilot): @[https://pcp.io/]
- Analyze systems' performance metrics in real-time or using historical data.
- Compare performance metrics between different hosts and different intervals.
  Observe trends and identify abnormal patterns.
Dynamic Tracing Tools
@[https://www.tecmint.com/bcc-best-linux-performance-monitoring-tools/]
- BCC – Dynamic Tracing Tools for Linux Performance Monitoring, Networking and More
"Tools":
  -ºsslsniffº
  @[https://github.com/iovisor/bcc/blob/master/tools/sslsniff_example.txt]
    traces the write/send and read/recv functions of OpenSSL,
    GnuTLS and NSS.  Data passed to these functions is printed as plain
    text. Useful, for example, to sniff HTTP traffic before it is encrypted with SSL.
  -ºtcpconnectº
  @[https://github.com/iovisor/bcc/blob/master/tools/tcpconnect_example.txt]
    traces the kernel function performing active TCP connections
    (eg, via a connect() syscall; accept() are passive connections). Some example
    output (IP addresses changed to protect the innocent):

    # ./tcpconnect
    PID    COMM         IP SADDR            DADDR            DPORT
    1479   telnet       4  127.0.0.1        127.0.0.1        23
    1469   curl         4  10.201.219.236   54.245.105.25    80
    1469   curl         4  10.201.219.236   54.67.101.145    80
    1991   telnet       6  ::1              ::1              23
    2015 ssh 6 fe80::2000:bff:fe82:3ac fe80::2000:bff:fe82:3ac 22
  -ºtcplifeº
  @[https://github.com/iovisor/bcc/blob/master/tools/tcplife_example.txt]
    summarizes TCP sessions that open and close while tracing. For example:
    # ./tcplife
    PID   COMM       LADDR           LPORT RADDR           RPORT TX_KB RX_KB MS
    22597 recordProg 127.0.0.1       46644 127.0.0.1       28527     0     0 0.23
    3277  redis-serv 127.0.0.1       28527 127.0.0.1       46644     0     0 0.28
    22598 curl       100.66.3.172    61620 52.205.89.26    80        0     1 91.79
    22604 curl       100.66.3.172    44400 52.204.43.121   80        0     1 121.38
    22624 recordProg 127.0.0.1       46648 127.0.0.1       28527     0     0 0.22
    3277  redis-serv 127.0.0.1       28527 127.0.0.1       46648     0     0 0.27
    22647 recordProg 127.0.0.1       46650 127.0.0.1       28527     0     0 0.21
    3277  redis-serv 127.0.0.1       28527 127.0.0.1       46650     0     0 0.26
    [...]
  - ttysnoop
  @[https://github.com/iovisor/bcc/blob/master/tools/ttysnoop_example.txt]
    watches a tty or pts device, and prints the same output that is
    appearing on that device. It can be used to mirror the output from a shell
    session, or the system console.

  - argdist
    @[https://github.com/iovisor/bcc/blob/master/tools/argdist_example.txt]
  - bashreadline:
  @[https://github.com/iovisor/bcc/blob/master/tools/bashreadline_example.txt]
    prints bash commands from all running bash shells on the system
  - biolatency
  @[https://github.com/iovisor/bcc/blob/master/tools/biolatency_example.txt]
    traces block device I/O (disk I/O), and records the distribution
    of I/O latency (time), printing this as a histogram when Ctrl-C is hit
  - biosnoop
  @[https://github.com/iovisor/bcc/blob/master/tools/biosnoop_example.txt]
    biosnoop traces block device I/O (disk I/O), and prints a line of output per I/O.
  - biotop
  @[https://github.com/iovisor/bcc/blob/master/tools/biotop_example.txt]
    block device I/O top: biotop summarizes which processes are
    performing disk I/O. It's top for disks.
  - bitesize
  @[https://github.com/iovisor/bcc/blob/master/tools/bitesize_example.txt]
    show I/O distribution for requested block sizes, by process name
  - bpflist displays information on running BPF programs and optionally also
    prints open kprobes and uprobes
  - cachestat
  @[https://github.com/iovisor/bcc/blob/master/tools/cachestat_example.txt]
    shows hits and misses to the file system page cache
  - cachetop
  @[https://github.com/iovisor/bcc/blob/master/tools/cachetop_example.txt]
    show Linux page cache hit/miss statistics including read and write hit %
    per process, in a top-like UI.
  - capable
  @[https://github.com/iovisor/bcc/blob/master/tools/capable_example.txt]
    capable traces calls to the kernel cap_capable() function, which does security
    capability checks, and prints details for each call.
  - cpudist
  @[https://github.com/iovisor/bcc/blob/master/tools/cpudist_example.txt]
    summarizes task on-CPU time as a histogram, showing how long tasks
    spent on the CPU before being descheduled
  - cpuunclaimed
  @[https://github.com/iovisor/bcc/blob/master/tools/cpuunclaimed_example.txt]
    samples the length of the CPU run queues and determines when there are
    idle CPUs yet queued threads waiting their turn.
  - criticalstat
  @[https://github.com/iovisor/bcc/blob/master/tools/criticalstat_example.txt]
    traces and reports occurrences of atomic critical sections in the
    kernel with useful stacktraces showing the origin of them.
  - dbslower
  @[https://github.com/iovisor/bcc/blob/master/tools/dbslower_example.txt]
    traces queries served by a MySQL or PostgreSQL server, and prints
    those that exceed a latency (query time) threshold
  - dbstat
  @[https://github.com/iovisor/bcc/blob/master/tools/dbstat_example.txt]
    traces queries performed by a MySQL or PostgreSQL database process, and
    displays a histogram of query latencies.
  - dcstat
  @[https://github.com/iovisor/bcc/blob/master/tools/dcstat_example.txt]
    dcstat shows directory entry cache (dcache) statistics.
  -ºdeadlockº
  @[https://github.com/iovisor/bcc/blob/master/tools/deadlock_example.txt]
    This program detects potential deadlocks on a running process. The program
    attaches uprobes on `pthread_mutex_lock` and `pthread_mutex_unlock` to build
    a mutex wait directed graph, and then looks for a cycle in this graph.
  - drsnoop
  @[https://github.com/iovisor/bcc/blob/master/tools/drsnoop_example.txt]
    When a process allocates pages and there is insufficient free memory
    in the system, direct-reclaim events happen, which increase the
    latency of the allocating process.
    drsnoop traces direct reclaim system-wide and prints various details.
  - execsnoop
  @[https://github.com/iovisor/bcc/blob/master/tools/execsnoop_example.txt]
    Traces new processes (exec() calls); useful to catch short-lived processes.
  - fileslower
  @[https://github.com/iovisor/bcc/blob/master/tools/fileslower_example.txt]
    shows file-based synchronous reads and writes slower than a threshold
  - filetop
  @[https://github.com/iovisor/bcc/blob/master/tools/filetop_example.txt]
    filetop shows reads and writes by file, with process details.
  - gethostlatency
  @[https://github.com/iovisor/bcc/blob/master/tools/gethostlatency_example.txt]
    traces host name lookup calls
  - hardirqs
  @[https://github.com/iovisor/bcc/blob/master/tools/hardirqs_example.txt]
    traces hard interrupts (irqs), and stores timing statistics
    in-kernel for efficiency
  - llcstat
  @[https://github.com/iovisor/bcc/blob/master/tools/llcstat_example.txt]
  traces cache reference and cache miss events system-wide, and summarizes
  them by PID and CPU.
  - mdflush
  @[https://github.com/iovisor/bcc/blob/master/tools/mdflush_example.txt]
  traces flushes at the md driver (kernel software RAID) level
  -ºmemleakº
  @[https://github.com/iovisor/bcc/blob/master/tools/memleak_example.txt]
    traces and matches memory allocation and deallocation requests, and
    collects call stacks for each allocation. memleak can then print a summary
    of which call stacks performed allocations that weren't subsequently freed
  - nfs...
  - offcputime, offcpu...
  @[https://github.com/iovisor/bcc/blob/master/tools/offcputime_example.txt]
    shows stack traces that were blocked, and the total duration they
    were blocked.
  - oomkill
  @[https://github.com/iovisor/bcc/blob/master/tools/oomkill_example.txt]
    simple program that traces the Linux out-of-memory (OOM) killer,
    and shows basic details on one line per OOM kill:

  -ºwakeuptimeº
  @[https://github.com/iovisor/bcc/blob/master/tools/wakeuptime_example.txt]
  measures when threads block, and shows the stack traces for the
  threads that performed the wakeup, along with the process names of the waker
  and target processes, and the total blocked time.

Python,Java,...
  -ºugcº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/ugc_example.txt]
    traces garbage collection events in high-level languages, including Java,
    Python, Ruby, and Node.
  -ºucallsº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/ucalls_example.txt]
   ucalls summarizes method calls in various high-level languages, including Java,
   Perl, PHP, Python, Ruby, Tcl, and Linux system calls.
  -ºuflowº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/uflow_example.txt]
    uflow traces method entry and exit events and prints a visual flow graph that
    shows how methods are entered and exited, similar to a tracing debugger with
    breakpoints. This can be useful for understanding program flow in high-level
    languages such as Java, Perl, PHP, Python, Ruby, and Tcl which provide USDT
    probes for method invocations.
  -ºuobjnewº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/uobjnew_example.txt]
    summarizes new object allocation events and prints out statistics on
    which object type has been allocated frequently, and how many bytes of that
    type have been allocated. This helps diagnose common allocation paths, which
    can in turn cause heavy garbage collection.
  -ºustatº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/ustat_example.txt]
    ustat is a "top"-like tool for monitoring events in high-level languages. It
    prints statistics about garbage collections, method calls, object allocations,
    and various other events for every process that it recognizes with a Java,
    Node, Perl, PHP, Python, Ruby, and Tcl runtime.
  -ºuthreadsº
  @[https://github.com/iovisor/bcc/blob/master/tools/lib/uthreads_example.txt]
    traces thread creation events in Java or raw (C) pthreads, and prints
    details about the newly created thread. For Java threads, the thread name is
    printed; for pthreads, the thread's start function is printed, if there is
    symbol information to resolve it.

  - filelife
  - fileslower
  - vfscount
  - vfsstat
  - dcstat, ...
kernel monit
Kernel Monit.Diagram
  ↑        ┌───────────────────────────────────────────────────────────────────┐
  │        │                    APPLICATIONS                                   |
  │        ├───────────────────────────────────────────────────────────────────┤
  │        │[ltrace][ldd]      System Libraries [gethostlatency]         [perf]│
  │        ├───────────────────────────────────────────────────────────────────┤
  │        │[strace][sysdig]     System Call Interface [*3]              [perf]|     CPU
[perf]   ↑ ├─────────────────┬───────┬──────────────┬───────┬──────────────────┤Interconnect  ┌──────────┐
[dtrace] │ │ VFS             │       │SOCKETS  [ss] │       │SCHEDULER   [perf]├──────────────┤CPU1[perf]├───┐
[stap]  L K├─────────────────┤       │──────────────┤       │[perf][latencytop] ←───[top]     └──────────┘   │
[lttng] I E│ FILE SYSTEMS    │       │TCP/UDP  [*2] │       │[mpstat]          │  ╱[ps]      Memory│         │
[ktap]  N R├─────────────────┤       │──────────────┤       ├──────────────────┤ ╱ [pidstat]    BUS│  [perf] │
  │     U N│ VOLUME MANAGERS │       │IP            │       │VIRTUAL MEMORY    │╱                  │         │
  │     X E├─────────────────┤       │[ip]          │       │[vmstat]          ←                 ┌───┐       │
  │       L│ BLOCK DEVICE    │       │[route]       │       │[slabtop]         ├────────────────→│RAM│       │
  │      │ │ Interface       │       │[iptables]    │       │[free]            │                 └───┘       │
  │      │ │ [*1] [pidstat]  │       │              │       │[memleak]         │                 [numastat]  │
  │      │ │                 │       │              │       │[oomkill]         │                 [lstopo]    │
  │      │ │                 │       │              │       │[slabratetop]     │                             │
  │      │ │                 │       │──────────────│       ├──────────────────┤                             │
  │      │ │                 │       │Ethernet [ss] │       │CLOCKSOURCE       │                             │
  │      │ │                 │       │[tcpdump]     │       │[/sys/...]        │                             │
  │      │ ├─────────────────┴───────┴──────────────┴───────┴──────────────────┤                             │
  │      │ │                       Device Drivers                              │                             │
  ↓      ↓ └───────────────────────────────────────────────────────────────────┘             I/O [perf]      │
                           Expander-Interconnect                           ┌──────────┐     BUS  [tiptop]    │
                   ─┬────────────────────────────────────────┬─────────────┤I/O Bridge├──────────────────────┘
                    │                                        │             └──────────┘
                    │                                        │
              ┌─────┴───────────┐                    ┌───────┴───────────┐[nicstat]
              │I/O Controller *1│                    │ Network Controller│[ss]
              └─────────────────┘                    └───────────────────┘[ip]
             ┬──────┴───────┬                         ┬──────┴────┬
             │              │                         │           │
            Disk[*1]       Swap [swapon]             Port        Port
                                                     [ping] [traceroute]
                                                     [ethtool] [snmpget]
                                                     [lldptool]

*1: [iostat] [iotop] [blktrace]
*2: [tcptop] [tcplife] [tcpconnect] [tcpaccept] [tcpconnlat] [tcpretrans]
*3: [opensnoop] [statsnoop] [syncsnoop]

OTHERS: [sar] [dstat] [/proc]

 ┌───┐[sar -m FAN]       ┌────────────┐[ipmitool]
 │FAN│                   │POWER SUPPLY│[dmidecode]
 └───┘                   └────────────┘
Tracer comparative
ltrace: Library Call Tracer
man 1 ltrace
Summary:
ºltrace                           | ltrace -c  # ← Count time and calls for each library callº
º                                                  and report a summary on program exit.     º
  [-e filter|-L]                 |   [-e filter|-L]
  [-l|--library=library_pattern] |   [-l|--library=library_pattern]
  [-x filter]                    |   [-x filter]
  [-S]                           |   [-S]
  [-b|--no-signals]              |
  [-i] [-w|--where=nr]           |
  [-r|-t|-tt|-ttt]               |
  [-T]                           |
  [-F pathlist]                  |
  [-A maxelts]                   |
  [-s strsize]                   |
  [-C|--demangle]                |
  [-a|--align column]            |
  [-n|--indent nr]               |
  [-o|--output filename]         |   [-o|--output filename]
  [-D|--debug mask]              |
  [-u username]                  |
  [-f]                           |   [-f]
  [-p pid]                       |   [-p pid]
  [[--] command [arg ...]]       |   [[--] command [arg ...]]



runs the specified command until it exits,
intercepting/recording:
   + dynamic library calls made by the process
     - Displays functions and function parameters.
     - External prototype libraries are needed
       for human-readable output.
       (ltrace.conf(5), section PROTOTYPE LIBRARY DISCOVERY)

   + signals received by the process

   + system calls made by the process
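
Hedged usage examples (the traced command and PID are illustrative):
$ ltrace -c ls /etc            # ← count library calls made by "ls" and
                               #   print a summary table on exit
$ ltrace -p 1234 -o out.txt    # ← attach to an already-running PID and
                               #   write the trace to out.txt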



strace: System call tracer
man 1 strace

SUMMARY
strace                    | strace -c  ← -c: Count time, calls, and errors
                          |                  for each system call and report summary on exit.
                          |                  -f aggregate over all forked processes
  [ -dDffhiqrtttTvVxx ]   |   [ -D ]
  [ -acolumn ]            |
  [ -eexpr ] ...          |   [ -eexpr ] ...
                          |   [ -Ooverhead ]
  [ -ofile ]              |
  [ -ppid ] ...           |
  [ -sstrsize ]           |
  [ -uusername ]          |
  [ -Evar=val ] ...       |
  [ -Evar ] ...           |   [ -Ssortby ]
                          |   [ -Ssortby ]
  [ command [ arg ... ] ] |   [ command [ arg ... ] ]


strace runs the specified command until it exits, intercepting:
  +  system calls called by the process
     - system-call-name + arguments + return-value are printed to STDERR (or to -o file)
       Ex output:
       open("/dev/null", O_RDONLY) = 3
       open("/foo/bar", O_RDONLY) = -1 ENOENT (No such file or directory)

  +  signals    received by a process
       Ex output:
       $ strace sleep 111
       → ...
       → sigsuspend([] ˂unfinished ...˃
       → --- SIGINT (Interrupt) ---     ← Signal received
       → +++ killed by SIGINT +++

If a system call is being executed and meanwhile another one is being called
from a different thread/process then strace will try to preserve the order
of those events and mark the ongoing call as being unfinished.
When the call returns it will be marked as resumed. Ex. output:
  → [pid 28772] select(4, [3], NULL, NULL, NULL ˂ºunfinished ...˃º
  → [pid 28779] clock_gettime(CLOCK_REALTIME, {1130322148, 939977000}) = 0
  → [pid 28772] º˂... select resumed˃º )      = 1 (in [3])

Interruption of a (restartable) system call by a signal delivery is
processed differently: the kernel terminates the system call and
arranges for its immediate re-execution after the signal handler completes.

read(0, 0x7ffff72cf5cf, 1)              = ? ºERESTARTSYS (To be restarted)º
--- SIGALRM (Alarm clock) @ 0 (0) ---
rt_sigreturn(0xe)                       = 0
read(0, ""..., 1)                       = 0
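
Hedged usage examples (command names and the PID are illustrative):
$ strace -c ls /etc                  # ← per-syscall count/time/error summary
$ strace -f -tt -T -e trace=network \
         -o /tmp/net.trace -p 1234   # ← attach to PID 1234 (and its forks),
                                     #   trace only network-related syscalls,
                                     #   with timestamps and per-call duration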

explain:decode error returned from strace
man 1 explain
Dstat
@[https://linux.die.net/man/1/dstat]
- Versatile replacement joining the info from
  vmstat, iostat, ifstat and mpstat.
  ^ TODO what about sar(1)?
- Dstat overcomes some of the limitations and adds some extra features.
- Dstat allows you to view all of your system resources instantly;
  you can, e.g., compare disk usage in combination with interrupts from
  your IDE controller, or compare the network bandwidth numbers directly
  with the disk throughput (in the same interval).
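
Hedged usage example:
$ dstat -cdngy 60   # ← cpu, disk, net, paging and system (int/csw) stats,
                    #   one line every 60 seconds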
mpstat-CPU stats
(ints., hypervisor...)
mpstat
Summary
mpstat
  [ -A ]                        ==  -I ALL -u -P ALL
  [ -I { SUM | CPU | ALL } ]    ==  Report interrupts statistics
  [ -u ]                            Reports cpu utilization (default)
  [ -P { cpu [,...] | ON | ALL } ]  Indicates the processor number
  [ -V ]
  [ secs_interval [ count ] ]
    secs_interval = 0 =˃ Report stats since system startup (boot)

mpstat writes to standard output activities for each available processor.
Global average activities across all processors are also reported.


- CPU output columns:
 %usr   :  executing at the user level (application).
 %nice  :  executing at the user level with nice priority.
 %sys   :  executing at the system level (kernel).
           It does NOT include time spent servicing hardware
           and software interrupts.
 %iowait:  idle during which the system had an outstanding disk I/O request.
 %irq   :  time spent by the CPU or CPUs to service hardware interrupts.
 %soft  :  time spent by the CPU or CPUs to service software interrupts.
º%steal :  time spent in involuntary wait by the virtual CPU or CPUs     º
º           while the hypervisor was servicing another virtual processor.º
 %guest : time spent by the CPU or CPUs to run a virtual processor.
 %idle  : time that the CPU or CPUs were idle and the system did not have
          an outstanding disk I/O request.
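
Hedged usage example:
$ mpstat -P ALL 2 5   # ← per-CPU (plus global average) utilization,
                      #   5 reports at 2-second intervals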
pidstat: Linux task stats
man 1 pidstat
Summary

pidstat
  [ -C comm ]       ← Display only tasks whose command name includes the string comm
  [ -d ]            ← Report I/O statistics
  [ -h ]
  [ -I ]
  [ -l ]
  [ -p { pid [,...] | SELF | ALL } ]
  [ -r ]            ← Report page-faults and memory usage
  [ -t ]            ← Also display stats for associated threads
  [ -T { TASK | CHILD | ALL } ]
  [ -u ]
  [ -V ]
  [ -w ]
  [ secs_interval [ count ] ]

- monitor individual tasks.
- Dumps to STDOUT activities for every task selected    (-p)
   or for every        task managed by the Linux kernel (-p ALL)
   or for every active task managed by the Linux kernel ("no -p")
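
Hedged usage example (the PID is illustrative):
$ pidstat -u -r -d -p 1234 5   # ← CPU, memory and I/O stats for PID 1234,
                               #   one report every 5 seconds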
iostat(CPU, I/O, FS)
UUID: b5ad9545-0ba4-40f2-80cd-9e8889465afb
@[https://linux.die.net/man/1/iostat]
$ iostat -xt 2  # -x Show extended statistics
                # -t Print time
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           4.27    0.00    4.27    2.26    0.00   89.20

Device   r/s   w/s  rkB/s  wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await   w_await  aqu-sz rareq-sz wareq-sz  svctm  %util
sda     0.00  5.50   0.00  36.00    0.00    5.00   0.00  47.62     0.00     17.55    0.09     0.00     6.55   0.82   0.45
sdb     0.00  0.00   0.00   0.00    0.00    0.00   0.00   0.00     0.00      0.00    0.00     0.00     0.00   0.00   0.00
dm-0    0.00  3.00   0.00  12.00    0.00    0.00   0.00   0.00     0.00    123.00    0.37     0.00     4.00   0.50   0.15
dm-1    0.00  0.00   0.00   0.00    0.00    0.00   0.00   0.00     0.00      0.00    0.00     0.00     0.00   0.00   0.00
dm-2    0.00  7.50   0.00  30.00    0.00    0.00   0.00   0.00     0.00      0.47    0.00     0.00     4.00   0.40   0.30
        ^^^^  ^^^^   ^^^^  ^^^^^    ^^^^    ^^^^   ^^^^   ^^^^     ^^^^      ^^^^    ^^^^     ^^^^     ^^^^   ^^^^   ^^^^
        r/s, w/s : read / write requests (per second)
        r_await  : avg msec for read  requests
        w_await  : avg msec for write requests
        aqu-sz   : avg queue length of requests issued
        svctm    : ignore (to be removed)
        %util    : elapsed time % during which I/O requests were issued
                   (ºbandwidth usageº)
  blktrace (block I/O traffic)
man 8 blktrace
perf (kernel counters)
TODO @[http://www.brendangregg.com/perf.html]
TODO @[http://www.brendangregg.com/flamegraphs.html]
- Perf: kernel-based subsystem that provides a framework
        for all things performance analysis.
- It covers hardware level (CPU/PMU, Performance Monitoring Unit)
  features and software features (software counters, tracepoints) as well.

perf stat, gather perf-counter stats
man 1 perf-stat
SUMMARY

perf stat
     [--event=EVENT]  ← PMU event in the form:
                        - symbolic event name (perf list to list)
                        - raw PMU event (eventsel+umask) with format
                           rNNN where NNN is a hexadecimal event descriptor
     [--no-inherit]   ← child tasks do not inherit counters
     [--all-cpus]
     [--pid=]    ← comma separated list of existing processes
     [--tid=]    ← comma separated list of existing thread id
     [--scale]        ← scale/normalize counter values
     [--repeat=]   ← repeat command, print average + stddev (max: 100)
     [--big-num]      ← print large numbers with thousands-local-separator
     [--cpu=]         ← comma separated list of cpus (All if not provided)
     [--no-aggr]      ← Do not aggregate counts across all monitored CPUs
                        in system-wide mode (-a). Only valid in system-wide mode.
     [--null]         ← don't start any counters
     [--verbose]      ← show counter open errors, etc,...
     [--field-separator SEP]
     [--cgroup name]  ← monitor only the container (cgroup) called "name".
                        - Only available in per-cpu mode. The cgroup filesystem
                        must be mounted. All threads belonging to container "name"
                        are monitored when they run on the monitored CPUs.
                        - Multiple cgroups can be provided. Each cgroup is applied
                        to the corresponding event, i.e., first cgroup to first event,
                        - It is possible to provide an empty cgroup
                          (monitor all the time) using, e.g., -G foo,,bar.
                        - Cgroups must have corresponding events, i.e., they always
                          refer to events defined earlier on the command line.
     [--output file]
     [--append]
     [--log-fd]      ← append to given fd instead of stderr.
     [[--] command [arg ...]]
     ^^^^^^^^^^^^^^^^^^^^^^^^
     Any command you can specify in a shell.

ºexample 1º
$ºperf statº Oº--º make -j # ← the command following the Oº--º will be traced
                   ^^^^^^^
                 traced command
(Output will be similar to)
→ Performance counter stats for 'make -j':
→
→ 8117.370256  task clock ticks     #      11.281 CPU utilization factor
→         678  context switches     #       0.000 M/sec
→         133  CPU migrations       #       0.000 M/sec
→      235724  pagefaults           #       0.029 M/sec (page faults)
→ 24821162526  CPU cycles           #    3057.784 M/sec
→ 18687303457  instructions         #    2302.138 M/sec
→   172158895  cache references     #      21.209 M/sec
→    27075259  cache misses         #       3.335 M/sec
→
→ Wall-clock time elapsed:   719.554352 msecs

ºexample 2º
$ºperf statº -r 4 --event=cycles:{k,u} -- make -j
             ^^^^          ^     ^^^^^
             repeat        │  split into
             4 times       │ kernel/userspace
...                        │
123,123,11  cycles:k       │
  1,123,11  cycles:u       │
                           │
0.00013913  secs elapsed   │
                           │
                    - '$ºperf listº' will show all predefined
                      events (cycles, cache-misses, ...) organized
                      by hardware/software/tracepoint

ºPROFILINGº
-ºCPU Profilingº
@[https://linux.die.net/man/1/perf-top]
  $ perf top
  Samples: 12K of event 'cycles:ppp', Event count (approx.): 54543453553535
Overhead  Shared Object              Symbol
13.11%    libQT5Core.so.5.7.0        [.] QHashData:NextNode
 5.11%    libc-2.24.so               [.] _int_malloc
 2.90%    perf                       [.] symbols__insert
...

-ºSleep-time Profilingº
 See also latencytop([[99937290-9184-4d98-b590-e6f7443afc38?]])

- Syscall Profiling:
  $ perf trace --duration=100
   340.448 (1000.122 ms): JS Watchdog/15221 futex(uaddr: 0x7f3e9cd1a434, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f3eae487df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
   119.549 (1221.529 ms): tmux: server/2501 poll(ufds: 0x55edaa47c990, nfds: 11, timeout_msecs: 12189)            = 1
   395.984 (1000.133 ms): tuned/19297 futex(uaddr: 0x7f37a4027130, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f37aad37e30, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
   691.446 (1000.105 ms): JS Watchdog/15347 futex(uaddr: 0x7f6c829550b0, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f6c942a0df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
   755.478 (1000.029 ms): Timer/15227 futex(uaddr: 0x7f3eb5b5cc80, op: WAIT|PRIVATE_FLAG, utime: 0x7f3e9c2c1a60) = -1 ETIMEDOUT (Connection timed out)
   755.609 (1000.017 ms): Web Content/15215 poll(ufds: 0x7f3e9bd04760, nfds: 3, timeout_msecs: 4294967295)        = 1
   311.581 (1527.461 ms): Gecko_IOThread/15157 epoll_wait(epfd: 8˂anon_inode:[eventpoll]˃, events: 0x7f3d6d1f5200, maxevents: 32, timeout: 4294967295) = 1
   311.955 (1527.194 ms): firefox/15132 poll(ufds: 0x7f3d1ebd5610, nfds: 5, timeout_msecs: 4294967295)        = 1
   876.905 (1000.146 ms): dockerd/32491 futex(uaddr: 0x561e1da0b920, utime: 0xc42045bed8)                     = -1 ETIMEDOUT (Connection timed out)
   877.069 (1000.064 ms): dockerd/27832 futex(uaddr: 0x561e1da07950, utime: 0x7f50e7c61b90)                   = 0
   877.025 (1000.145 ms): dockerd/27904 futex(uaddr: 0xc420c82548)                                            = 0
   912.964 (1000.133 ms): JS Watchdog/15158 futex(uaddr: 0x7f3d57c4c0f0, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f3d65a41df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
   311.586 (1607.337 ms): Chrome_~dThrea/15346 epoll_wait(epfd: 11˂anon_inode:[eventpoll]˃, events: 0x7f6c9b9cd080, maxevents: 32, timeout: 4294967295) = 1
   937.245 (1000.102 ms): JS Watchdog/15276 futex(uaddr: 0x7feca361bbf0, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7fecb4e27df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
   214.944 (1927.025 ms): Timer/15164 futex(uaddr: 0x7f3d6d165be0, op: WAIT|PRIVATE_FLAG, utime: 0x7f3d542a1a60) = -1 ETIMEDOUT (Connection timed out)
   215.042 (1927.063 ms): Socket Thread/15166 poll(ufds: 0x7f3d539028f0, nfds: 8, timeout_msecs: 4294967295)        = 1
  1340.624 (1000.072 ms): JS Watchdog/15221 futex(uaddr: 0x7f3e9cd1a434, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f3eae487df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
  1396.377 (1000.131 ms): tuned/19297 futex(uaddr: 0x7f37a4027130, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f37aad37e30, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
  1691.606 (1000.059 ms): JS Watchdog/15347 futex(uaddr: 0x7f6c829550b0, op: WAIT_BITSET|PRIVATE_FLAG|CLOCK_REALTIME, utime: 0x7f6c942a0df0, val3: MATCH_ANY) = -1 ETIMEDOUT (Connection timed out)
  1877.200 (1000.115 ms): dockerd/27844 futex(uaddr: 0x561e1da0b9a0, utime: 0xc420460ed8)                     = -1 ETIMEDOUT (Connection timed out)
   876.826 (2000.665 ms): dockerd/27840 futex(uaddr: 0xc4206d7148)                                            = 0
  1877.252 (1000.149 ms): dockerd/27832 futex(uaddr: 0x561e1da07950, utime: 0x7f50e7c61b90)                   = 0
  1877.190 (1000.239 ms): dockerd/27904 futex(uaddr: 0xc420c82548)                                            = 0
  1877.189 (1000.372 ms): dockerd/32491 futex(uaddr: 0xc420c83948)                                            = 0




- record command's profile into perf.data:
man 1 perf-record

- display recorded perf.data
man 1 perf-report

- General framework for bench.suites
man 1 perf-bench

- Analyze lock events
man 1 perf-lock
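
Hedged perf-record/perf-report workflow (the profiled command and the 99 Hz
sampling rate are illustrative):
$ perf record -g -- ./myapp            # ← sample ./myapp (with call-graphs) into perf.data
$ perf record -F 99 -a -g -- sleep 30  # ← or: sample the whole system at 99 Hz for 30 s
$ perf report --stdio                  # ← summarize hottest functions from perf.data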
dtrace (BSD/Linux/Win/...)
stap
@[https://en.wikipedia.org/wiki/SystemTap]

SystemTap is a tool that can be used to perform live analysis of a
running program (dynamic instrumentation).  It is able to interrupt
normal control flow and execute code specified by a SystemTap script,
which can allow users to temporarily modify a running program without
having to change the source and recompile.  System administrators can
use SystemTap to extract, filter and summarize data in order to enable
diagnosis of complex performance or functional problems.

SystemTap consists of free and open-source software and includes
contributions from Red Hat, IBM, Intel, Hitachi, Oracle, and other community
members.

See real example:
@[https://developers.redhat.com/blog/2019/07/24/probing-golang-runtime-using-systemtap/]
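
Hedged one-liner sketch (probe and context-variable names depend on the
kernel/tapset version; on recent kernels open() is usually reached via openat):
$ sudo stap -e 'probe syscall.openat {
    printf("%s(%d) opened %s\n", execname(), pid(), filename)
  }'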
(x)latencytop
UUID:99937290-9184-4d98-b590-e6f7443afc38
man 8 xlatencytop
Note: See also the more advanced TimeChart:
    @[http://www.linux-magazine.com/Online/News/Timechart-Zoom-in-on-Operating-System]
    @[http://blog.fenrus.org/?p=5]

- aimed at:
  - identifying and visualizing where
    (kernel and userspace) latencies are happening
  - What kind of operation/action is causing the latency


LatencyTOP focuses on the cases where the applications want to run
and execute useful code, but there's some resource that's not
currently available (and the kernel then blocks the process).

- This is done both on a system level and on a per process level,
  so that you can see what's happening to the system, and which
  process is suffering and/or causing the delays.

Usage:
$ sudo latencytop
- press "s" followed by a letter to display active processes
  starting with that letter.
- press "s" followed by 0 to remove the filter

See also disk write-read pending queue [[b5ad9545-0ba4-40f2-80cd-9e8889465afb?]]
latrace
man 1 latrace
Frontend to the LD_AUDIT feature of libc 2.4+
Synopsis

latrace [-ltfsbcCpADaoyIiBdvTFELVh] command [arg ... ]
Description

latrace is able to run a command and display its dynamic library calls using
the LD_AUDIT libc feature (available from libc version 2.4 onward - see the
section called "DISCUSSION"). It is also capable of measuring and displaying
various statistics of dynamic calls.

If the config file is provided, latrace will display symbol's arguments with
detailed output for structures. The config file syntax is similar to the C
language, with several exceptions (see the section called "CONFIG").

By default latrace operates fully inside the traced program. However, another
"pipe" mode is available to move the main work to the latrace binary
(see the section called "PIPE mode").
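
Hedged usage example (the traced command is illustrative; -c is taken from the
synopsis above for per-call statistics):
$ latrace ls /tmp       # ← show the dynamic library calls made by "ls"
$ latrace -c ls /tmp    # ← same, but print call-count statistics on exit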

Tuning
/proc/sys/vm/
Kernel FS
"control panel"
 labels="tunning"
tuned
OS tuning is done in practice through:
   - long-profiling
   - continuous-monitoring

Tuning becomes harder if the system load changes frequently.

Ex.: A system with a load peak at certain hours of the day
     can be tuned for performance      during those known hours
     and    tuned for power-efficiency the rest of the day.

man 8 tuned
Dynamic Adaptive system tuning daemon

- cron-friendly system service that lets you select a tuning profile
  (pre-built or custom).
- Tuning include:
  - sysctl settings (/proc/sys/)
  - settings for disk-elevators
  - power management options
  - transparent hugepages
  - custom-scripts

ºInstallº:
$ sudo dnf -y install tuned  # RedHat/Fedora/CentOS package install

=============================================
 Package                      Arch    Version
=============================================
Installing:
 tuned                        noarch  ...
Installing dependencies:
 kernel-tools-libs            x86_64  ...
 python3-perf                 x86_64  ...
 hdparm                       x86_64  ...
 python3-configobj            noarch  ...
 python3-linux-procfs         noarch  ...
 python3-schedutils           x86_64  ...
 virt-what                    x86_64  ...
Installing weak dependencies:
 kernel-tools                 x86_64  ...

$ sudo systemctl enable tuned # ← enable tuned service at boot
→ ...
$ sudo systemctl start  tuned # ← Start  tuned service now
$ sudo systemctl status tuned # ← Check  tuned service status
$ sudo systemctl status tuned
→ * tuned.service - Dynamic System Tuning Daemon
→    Loaded: loaded (/usr/lib/systemd/system/tuned.service; disabled; vendor preset: disabled)
→    Active: ºactive (running)º since Sun 2019-01-20 16:29:05 EST; 15s ago
→      Docs: man:tuned(8)
→            man:tuned.conf(5)
→            man:tuned-adm(8)
→  Main PID: 10552 (tuned)
→     Tasks: 4 (limit: 4915)
→    Memory: 15.7M
→    CGroup: /system.slice/tuned.service
→            └─10552 /usr/bin/python3 -Es /usr/sbin/tuned -l -P

ºUsage:º
$ ºtuned-adm listº   #  ← List existing tunning profiles
→ Available profiles:
→ - balanced                    - General non-specialized
→ - desktop                     - Optimize for the desktop
→ - latency-performance         - deterministic performance             (increased power consumption)
→ - network-latency             - deterministic performance low-latency (increased power consumption)
→ - network-throughput          - Optimize for streaming network throughput
                                  generally only necessary on older CPUs or
                                  40G+ networks
→ - powersave                   - Optimize for low power consumption
→ - throughput-performance      - provides excellent performance across a
                                  variety of common server workloads
→ - virtual-guest               - Optimize for running inside a virtual guest
→ - virtual-host                - Optimize for running KVM guests
→ Current active profile: balanced

$ ºsudo tuned-adm activeº  #  ← query status
Current active profile: balanced

$ ºsudo tuned-adm profile  powersaveº # ←  select profile
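
Custom profiles can be dropped under /etc/tuned/. Minimal sketch
(the profile name "myprofile" and the values are illustrative):
$ sudo mkdir /etc/tuned/myprofile
$ sudoedit   /etc/tuned/myprofile/tuned.conf
  (Add next lines)
+ [main]
+ include=throughput-performance     # start from an existing profile
+
+ [sysctl]
+ vm.swappiness=10                   # example sysctl override
$ sudo tuned-adm profile myprofile   # ← activate the new profile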
powertop
Allows you to:
  - diagnose device/CPU power consumption issues
  - Tune/control device/CPU power management.

$ sudo powertop # ← Interactive mode if no other option is provided
$ sudo powertop --auto-tune  # ← Set all tunables to their power-saving value, non-interactively
       ^^^^^^^^^^^^^^^^^^^^
       To enable at system boot add next systemd Unit:
       ºSTEP 1: Create/Edit powertop.service like:º
       $ sudoedit /etc/systemd/system/powertop.service
       (Add next lines)
     + [Unit]
     + Description=Powertop auto-tune
     +
     + [Service]
     + ExecStart=/usr/bin/powertop --auto-tune
     + RemainAfterExit=true
     +
     + [Install]
     + WantedBy=multi-user.target

       ºSTEP 2: Enable the new service like:º
       $ sudo systemctl daemon-reload
       $ sudo systemctl enable powertop
       $ sudo systemctl start powertop

       ºSTEP 3: Check it has run properlyº
       $ sudo journalctl -u powertop
     → ...
     → systemd[1]: Started Powertop auto-tune.
     → powertop[4778]: modprobe cpufreq_stats failedLoaded 0 prior measurements
     → powertop[4778]: RAPL device for cpu 0
     → powertop[4778]: RAPL Using PowerCap Sysfs : Domain Mask d
     → powertop[4778]: RAPL device for cpu 0
     → powertop[4778]: RAPL Using PowerCap Sysfs : Domain Mask d
     → powertop[4778]: Devfreq not enabled
     → powertop[4778]: glob returned GLOB_ABORTED
     → powertop[4778]: ºLeaving PowerTOPº

OTHER PERTINENT OPTIONS:
--calibrate    :  Runs  in  calibration  mode: When running on battery,
                 powertop can track power consumption as well as system
                 activity.
                  When there are enough measurements, powertop can start
                 to report  power  estimates.
--csv=file      : Generate CSV report.
--html=file     : Generate an HTML report.
--extech=$USBDEV: Use Extech Power Analyzer for analysis
                  USBDEV will be a USB adaptor similar to /dev/ttyUSB0
--iteration=$Num: Number of times to run each test.
--time[=seconds]: Generate report for specified number of seconds.
--workload=file : Execute workload file as a part of calibration
....
ZSwap
4 low-mem
systems
  Increase system performance on systems with memory pressure.
REF

setup
ºUbuntu Setup:º
  STEP 1: Edit /etc/default/grub and add "zswap.enabled=1" option to "GRUB_CMDLINE_LINUX_DEFAULT".
    For example, a line like
    | GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    becomes
    | GRUB_CMDLINE_LINUX_DEFAULT="quiet splash zswap.enabled=1".

  STEP 2: Update Grub configuration:
  $ sudo update-grub

  STEP 3: Reboot and the zswap module will be enabled automatically.

  STEP 4: After you reboot, check if the module is active:
  $ cat /sys/module/zswap/parameters/enabled

ºFedora&OpenSUSE Setup:º
STEP 1: Edit /etc/default/grub and add "zswap.enabled=1" option to GRUB_CMDLINE_LINUX

STEP 2: Update Grub configuration:
$ sudo grub2-mkconfig -o $(sudo find /boot/ -name grub.cfg)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                          Depending on whether your computer
                          boots from a BIOS or UEFI system
                          the path will change

STEP 3: Reboot and the zswap module will be enabled automatically.

STEP 4: After you reboot, check if the module is active:
$ cat /sys/module/zswap/parameters/enabled
(must display "Y,")

ºTuningº
- Check whether adding the (grub) option "zswap.max_pool_percent=20" increases performance.
  It is recommended NOT to go above 50%, since more than that can have detrimental effects
  on systems with low amounts of RAM.
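
- The zswap parameters can also be inspected/changed at runtime through sysfs
  (the 20% value is illustrative):
  $ grep -R . /sys/module/zswap/parameters/    # ← show all current values
  $ echo 20 | sudo tee /sys/module/zswap/parameters/max_pool_percent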
Tuning Filesystem
(/etc/fstab) mount options:
noatime:
        Do not update inode access times on this filesystem (e.g. for faster access on the news  spool
        to  speed  up  news servers).  This works for all inode types (directories too), so it implies
        nodiratime.

nodiratime
       Do  not  update directory inode access times on this filesystem.  (This option is implied when
       noatime is set.)

lazytime
       Only update times (atime, mtime, ctime) on the in-memory version of the file inode.

       This mount option significantly reduces writes to the inode table for workloads  that  perform
       frequent random writes to preallocated files.

       The on-disk timestamps are updated only when:

       - the inode needs to be updated for some change unrelated to file timestamps

       - the application employs fsync(2), syncfs(2), or sync(2)

       - an undeleted inode is evicted from memory

       - more than 24 hours have passed since the i-node was written to disk.
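
Hedged /etc/fstab example combining the options above (device UUID and
mount point are illustrative):
UUID=1234-ABCD   /data   ext4   defaults,noatime,lazytime   0 2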
Optimizing SSD
  REF
ºSetting disk partitionsº
SSD disks use   4 KB blocks for reading and
              512 KB blocks for erasing!!!

To makes sure partitions are aligned to SSD-friendly settings:
$ sudo fdisk -H 32 -C 32 -c ....
             ^^^^^ ^^^^^
             nr.of nr.of
             heads cylinders

ºSetting up Ext4 for SSD:º
 - Optimize ext4 erase blocks by ensuring that files smaller
   than 512 KB are spread across different erase blocks:
   - specify stripe-width and stride to be used. (default: 4KB)
     - alt.1: FS creation:
       $ sudo mkfs.ext4 -E stride=128,stripe-width=128 /dev/sda1
     - alt.2: existing FS:
       $ tune2fs -E stride=128,stripe-width=128 /dev/sda1

ºSetting i/o scheduler for SSDº
- Default value is CFQ (Completely Fair Queueing).
  SSDs benefit from the deadline scheduler:
  - Include a line like next one in /etc/rc.local:

    echo deadline ˃ /sys/block/sda/queue/scheduler

ºTrimming the data blocks from SSDº
- Trimming makes sure that when a file is removed, the SSD is informed
  that the underlying data blocks are no longer in use.
- Without trimming, SSD performance degrades as data blocks get
  filled up.
- Add "discard" option to /etc/fstab to enable trimming. Ex:
/dev/sda1   /     ext4     ºdiscardº,errors=remount-ro,noatime  0 1
                                                            ^^^^^^^
                                                          file access times do
                                                          not get updated every time
                                                          a file is read, minimizing
                                                          writes to the FS.
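
ºPeriodic trim as an alternative to "discard"º
- On distributions shipping util-linux's fstrim.timer systemd unit (assumption:
  yours does), batched trimming avoids the per-delete overhead of the
  "discard" mount option:
$ sudo systemctl enable --now fstrim.timer   # ← trim all supported mounts weekly
$ sudo fstrim -v /                           # ← or trim a mount point manually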
Integrated Admin Tools
Foreman Server Lifecycle Mng.
@[https://theforeman.org/introduction.html]
@[https://github.com/theforeman/foreman_bootdisk]
cockpit
@[https://cockpit-project.org/]
The easy-to-use, integrated, glanceable, and open web-based interface for your servers

@[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8-beta/html/managing_systems_using_the_cockpit_web_interface/getting-started-with-cockpit_system-management-using-cockpit]

The Cockpit Web interface enables you to perform a wide range of administration tasks, including:
 - Managing services
 - Managing user accounts
 - Managing and monitoring system services
 - Configuring network interfaces and firewall
 - Reviewing system logs
 - Managing virtual machines
 - Creating diagnostic reports
 - Setting kernel dump configuration
 - Configuring SELinux
 - Updating software
 - Managing system subscriptions
Desktop
Avahi
ZeroConf Linux impl.
@[https://en.wikipedia.org/wiki/Avahi_(software)]
DBus
Docker
External Links
- @[https://docs.docker.com/]
- @[https://github.com/jdeiviz/docker-training] D.Peman@github
- @[https://github.com/jpetazzo/container.training] container.training@Github
- @[http://container.training/]
$ docker help
Usage:	docker COMMAND

A self-sufficient runtime for containers

Options:
      --config string      Location of client config files (default "/root/.docker")
  -D, --debug              Enable debug mode
  -H, --host list          Daemon socket(s) to connect to
  -l, --log-level string   Set the logging level ("debug"|"info"|"warn"|"error"|"fatal") (default "info")
      --tls                Use TLS; implied by --tlsverify
      --tlscacert string   Trust certs signed only by this CA (default "/root/.docker/ca.pem")
      --tlscert string     Path to TLS certificate file (default "/root/.docker/cert.pem")
      --tlskey string      Path to TLS key file (default "/root/.docker/key.pem")
      --tlsverify          Use TLS and verify the remote
  -v, --version            Print version information and quit

Management Commands:       | Commands:
            Manage ...     |   attach      Attach local STDIN/OUT/ERR streams to a running container
config      Docker configs |   build       Build an image from a Dockerfile
container   containers     |   commit      Create a new image from a container's changes
image       images         |   cp          Copy files/folders between a container and the local filesystem
network     networks       |   create      Create a new container
node        Swarm nodes    |   diff        Inspect changes to files or directories on a container's filesystem
plugin      plugins        |   events      Get real time events from the server
secret      Docker secrets |   exec        Run a command in a running container
service     services       |   export      Export a container's filesystem as a tar archive
swarm       Swarm          |   history     Show the history of an image
system      Docker         |   images      List images
trust       trust on       |   import      Import the contents from a tarball to create a filesystem image
            Docker images  |   info        Display system-wide information
volume      volumes        |   inspect     Return low-level information on Docker objects
                           |   kill        Kill one or more running containers
                           |   load        Load an image from a tar archive or STDIN
                           |   login       Log in to a Docker registry
                           |   logout      Log out from a Docker registry
                           |   logs        Fetch the logs of a container
                           |   pause       Pause all processes within one or more containers
                           |   port        List port mappings or a specific mapping for the container
                           |   ps          List containers
                           |   pull        Pull an image or a repository from a registry
                           |   push        Push an image or a repository to a registry
                           |   rename      Rename a container
                           |   restart     Restart one or more containers
                           |   rm          Remove one or more containers
                           |   rmi         Remove one or more images
                           |   run         Run a command in a new container
                           |   save        Save one or more images to a tar archive (streamed to STDOUT by default)
                           |   search      Search the Docker Hub for images
                           |   start       Start one or more stopped containers
                           |   stats       Display a live stream of container(s) resource usage statistics
                           |   stop        Stop one or more running containers
                           |   tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
                           |   top         Display the running processes of a container
                           |   unpause     Unpause all processes within one or more containers
                           |   update      Update configuration of one or more containers
                           |   version     Show the Docker version information
                           |   wait        Block until one or more containers stop, then print their exit codes
Install ⅋ setup
Proxy settings
To configure Docker to work with an HTTP or HTTPS proxy server, follow
instructions for your OS:
Windows - Get Started with Docker for Windows
macOS   - Get Started with Docker for Mac
Linux   - Control⅋config. Docker with Systemd
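
On Linux with systemd the usual approach is an environment drop-in for the
docker service (sketch; proxy host/port below are placeholders):

  $ sudo mkdir -p /etc/systemd/system/docker.service.d
  $ cat /etc/systemd/system/docker.service.d/http-proxy.conf
  [Service]
  Environment="HTTP_PROXY=http://proxy.example.com:8080"
  Environment="HTTPS_PROXY=http://proxy.example.com:8080"
  Environment="NO_PROXY=localhost,127.0.0.1"
  $ sudo systemctl daemon-reload
  $ sudo systemctl restart docker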
docker global info
system setup
running/paused/stopped cont.
$ sudo docker info
Containers: 23
 Running: 10
 Paused: 0
 Stopped: 1
Images: 36
Server Version: 17.03.2-ce
ºStorage Driver: devicemapperº
 Pool Name: docker-8:0-128954-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: ext4
 Data file: /dev/loop0
 Metadata file: /dev/loop1
ºData Space Used: 3.014 GBº
ºData Space Total: 107.4 GBº
ºData Space Available: 16.11 GBº
ºMetadata Space Used: 4.289 MBº
ºMetadata Space Total: 2.147 GBº
ºMetadata Space Available: 2.143 GBº
ºThin Pool Minimum Free Space: 10.74 GBº
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
ºData loop file: /var/lib/docker/devicemapper/devicemapper/dataº
ºMetadata loop file: /var/lib/docker/devicemapper/devicemapper/metadataº
 Library Version: 1.02.137 (2016-11-30)
ºLogging Driver: json-fileº
ºCgroup Driver: cgroupfsº
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc
runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe
init version: 949e6fa
ºSecurity Options:º
º seccompº
º  Profile: defaultº
Kernel Version: 4.17.17-x86_64-linode116
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.838 GiB
Name: 24x7
ID: ZGYA:L4MN:CDCP:DANS:IEHQ:XYLD:C5KG:SUL4:3XLQ:ZO6M:3RSY:V6VB
ºDocker Root Dir: /var/lib/dockerº
ºDebug Mode (client): falseº
ºDebug Mode (server): falseº
*Registry: https://index.docker.io/v1/*
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
/var/run/docker.sock
@[https://medium.com/better-programming/about-var-run-docker-sock-3bfd276e12fd]
- Unix socket the Docker daemon listens on by default,
  used to communicate with the daemon from within a container.
- Can be mounted on containers to allow them to control Docker:
$ docker run º-v /var/run/docker.sock:/var/run/docker.sockº  ....

USAGE EXAMPLE:

# STEP 1. Create new container
$ curl -XPOST º--unix-socket /var/run/docker.sockº \
  -d '{"Image":"nginx"}' \
  -H 'Content-Type: application/json' \
  http://localhost/containers/create
Returns something similar to:
→ {"Id":"fcb65c6147efb862d5ea3a2ef20e793c52f0fafa3eb04e4292cb4784c5777d65","Warnings":null}

# STEP 2. Use /containers/{id}/start to start the newly created container.
$ curl -XPOST º--unix-socket /var/run/docker.sockº \
  http://localhost/containers/fcb6...7d65/start

# STEP 3: Verify it's running:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fcb65c6147ef nginx “nginx -g ‘daemon …” 5 minutes ago Up 5 seconds 80/tcp, 443/tcp ecstatic_kirch
...

ºStreaming events from the Docker daemonº

- The Docker API also exposes the */events endpoint*

$ curlº--unix-socket /var/run/docker.sockº http://localhost/events
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  command hangs on, waiting for new events from the daemon.
  Each new event will then be streamed from the daemon.
Docker components
Docker Networks
Create new network and use it in containers:
  $ docker ºnetwork createº OºredisNetworkº
  $ docker run --rm --name redis-server --network OºredisNetworkº -d redis
  $ docker run --rm --network OºredisNetworkº -it redis redis-cli -h redis-server -p 6379

List networks:
  $ docker network ls

Disconnect and reconnect a container to/from the network:
  $ docker network disconnect OºredisNetworkº redis-server
  $ docker network connect --alias db OºredisNetworkº redis-server
Volumes

REUSE VOLUME FROM CONTAINER:
  STEP 0: Create new container with volume
    host-mach $ docker run -it Oº--name alphaº º-v "hostPath":/var/logº ubuntu bash
    container $ date > /var/log/now

  STEP 1: Create new container using volume from previous container:
    host-mach $ docker run --volumes-from Oºalphaº ubuntu
    container $ cat /var/log/now

CREATE VOLUME FOR REUSE IN DIFFERENT CONTAINERS

  STEP 0: Create Volume
  host-mach $ docker volume create --name=OºwebsiteVolumeº
  STEP 1: Use volume in new container
  host-mach $ docker run -d -p 8888:80 \
              -v OºwebsiteVolumeº:/usr/share/nginx/html \
              -v logs:/var/log/nginx nginx
  host-mach $ docker run \
              -v OºwebsiteVolumeº:/website \
              -w /website \
              -it alpine vi index.html

Ex.: Update redis version without losing data:
  host-mach $ docker network create dbNetwork
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis28 redis:2.8
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → SET counter 42
              → INFO server
              → SAVE
              → QUIT
  host-mach $ docker stop redis28
  host-mach $ docker run -d --network dbNetwork \
              --network-alias redis \
              --name redis30 \
              --volumes-from redis28 \
              redis:3.0
  host-mach $ docker run -it --network dbNetwork \
              alpine telnet redis 6379
              → GET counter
              → INFO server
              → QUIT
docker-compose
version: "3"
services:
  web:
    build: .         # ← use Dockerfile to build image
    ports:
      - "8000:8000"
  redis:
    image: redis     # ← use DockerHub image
    volumes:
      - "redis-data:/data"

volumes:
  redis-data:
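
Typical lifecycle commands for the stack above (run from the directory
containing docker-compose.yml):

  $ docker-compose up -d     # ← build (if needed) and start services in background
  $ docker-compose ps        # ← list the stack's containers
  $ docker-compose logs -f   # ← follow the services' logs
  $ docker-compose down      # ← stop/remove containers and networks (named volumes kept)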
Registry
(server store for images)
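A local/private registry can be run from the official "registry" image
(minimal sketch, no TLS/auth configured):

  $ docker run -d -p 5000:5000 --name registry registry:2
  $ docker tag alpine localhost:5000/alpine   # ← retag an image for the local registry
  $ docker push localhost:5000/alpine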
Managing Containers
Boot-up/run container:
$ docker run \                             $ docker run \
  --rm  \        ←------ Remove ---------→   --rm  \
  --name clock  \        on exit             --name clock  \
 º-dº\             ← Daemon    interactive →º-tiº\
                     mode      mode
  jdeiviz/clock                              jdeiviz/clock


Show container logs:
$ docker logs clock                   # ← all output of container "clock"
$ docker logs --tail 3 clock          # ← last 3 lines only
$ docker logs --tail 1 --follow clock # ← keep streaming new output

Stop container:
$ docker stop clock  # sends SIGTERM, waits up to 10s, then SIGKILL ("docker kill" kills immediately)

Prune stopped containers:

$ docker container prune

container help:
$ docker container
ENTRYPOINT
vs
COMMAND
Extracted from:
- @[https://docs.docker.com/engine/reference/run/#entrypoint-default-command-to-execute-at-runtime]
The ENTRYPOINT of an image is similar to a COMMAND because it specifies what
executable to run when the container starts,
ºbut it is (purposely) more difficult to overrideº.

- The ENTRYPOINT gives a container its default nature or behavior, so that when
you set an ENTRYPOINT you can run the container as if it were that binary,
complete with default options, and you can pass in more options via the COMMAND.
But, sometimes an operator may want to run something else inside the container,
so you can override the default ENTRYPOINT at runtime by using a string to specify the
new ENTRYPOINT.

*Override Entrypoint @ docker-run passing extra parameters
$ docker run -it --entrypoint /bin/bash ${DOCKER_IMAGE} -c "ls -l"
                 └───────┬────────────┘                 └────┬────┘
                  overrides the entrypoint              extra params.
                                                       (runs 'ls -l')
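
How ENTRYPOINT and CMD combine at run time (illustrative sketch; the
"ping-img" image name is made up):

  FROM alpine
  ENTRYPOINT ["ping", "-c", "3"]   # ← fixed default behaviour of the container
  CMD ["localhost"]                # ← default argument, easy to override

  $ docker run ping-img                       # → runs: ping -c 3 localhost
  $ docker run ping-img 8.8.8.8               # → runs: ping -c 3 8.8.8.8  (CMD overridden)
  $ docker run -it --entrypoint sh ping-img   # → shell instead (ENTRYPOINT overridden)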

Monitoring running containers
Monitoring (Basic)
List containers instances:
   $ docker ps     # only running
   $ docker ps -a  # also finished, but not yet removed (docker rm ...)
   $ docker ps -lq # ID only (-q) of the latest (-l) created container

"top" containers showing Net IO read/writes, Disk read/writes:
   $ docker stats
   | CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O      PIDS
   | c420875107a1   postgres_trinity_cache  0.00%   11.66MiB / 6.796GiB   0.17%   22.5MB / 19.7MB  309MB / 257kB  16
   | fdf2396e5c72   stupefied_haibt         0.10%   21.94MiB / 6.796GiB   0.32%   356MB / 693MB    144MB / 394MB  39

   $ docker top 'containerID'
   | UID       PID     PPID    C  STIME  TTY   TIME     CMD
   | systemd+  26779   121423  0  06:11  ?     00:00:00 postgres: ddbbName cache 172.17.0.1(35678) idle
   | ...
   | systemd+  121423  121407  0  Jul06  pts/0 00:00:44 postgres
   | systemd+  121465  121423  0  Jul06  ?     00:00:01 postgres: checkpointer process
   | systemd+  121466  121423  0  Jul06  ?     00:00:26 postgres: writer process
   | systemd+  121467  121423  0  Jul06  ?     00:00:25 postgres: wal writer process
   | systemd+  121468  121423  0  Jul06  ?     00:00:27 postgres: autovacuum launcher process
   | systemd+  121469  121423  0  Jul06  ?     00:00:57 postgres: stats collector process

SysDig
Container-focused Linux troubleshooting and monitoring tool.

Once Sysdig is installed as a process (or container) on the server,
it sees every process, every network action, and every file action
on the host. You can use Sysdig "live" or view any amount of historical
data via a system capture file.

Example: take a look at the total CPU usage of each running container:
   $ sudo sysdig -c topcontainers_cpu
   | CPU% container.name
   | ----------------------------------------------------
   | 80.10% postgres
   | 0.14% httpd
   | ...
   |

Example: Capture historical data:
   $ sudo sysdig -w historical.scap

Example: "Zoom into a client":
   $ sudo sysdig -pc -c topprocs_cpu container.name=client
   | CPU% Process container.name
   | ----------------------------------------------
   | 02.69% bash client
   | 31.04% curl client
   | 0.74% sleep client
Dockviz
@[https://github.com/justone/dockviz]
Show a graph of running containers dependencies and
image dependencies.

Other options:
$ºdockviz images -tº
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 103.9 MB
  │   └─5dbd9cb5a02f Virtual Size: 103.9 MB
  │     └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 98.5 MB
      └─cb12405ee8fa Virtual Size: 98.5 MB
        └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring

$ºdockviz images -t -lº  ← show only labelled images
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─74fe38d11401 Virtual Size: 209.6 MB Tags: ubuntu:12.04, ubuntu:precise
  ├─ef519c9ee91a Virtual Size: 100.9 MB
  │ └─a7cf8ae4e998 Virtual Size: 171.3 MB Tags: ubuntu:12.10, ubuntu:quantal
  │   ├─5c0d04fba9df Virtual Size: 513.7 MB Tags: nate/mongodb:latest
  │   └─f832a63e87a4 Virtual Size: 243.6 MB Tags: redis:latest
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─316b678ddf48 Virtual Size: 169.4 MB Tags: ubuntu:13.04, ubuntu:raring


$ºdockviz images -t -iº  ← Show incremental size rather than cumulative
└─511136ea3c5a Virtual Size: 0.0 B
  ├─f10ebce2c0e1 Virtual Size: 103.7 MB
  │ └─82cdea7ab5b5 Virtual Size: 255.5 KB
  │   └─5dbd9cb5a02f Virtual Size: 1.9 KB
  │     └─74fe38d11401 Virtual Size: 105.7 MB Tags: ubuntu:12.04, ubuntu:precise
  └─02dae1c13f51 Virtual Size: 98.3 MB
    └─e7206bfc66aa Virtual Size: 190.0 KB
      └─cb12405ee8fa Virtual Size: 1.9 KB
        └─316b678ddf48 Virtual Size: 70.8 MB Tags: ubuntu:13.04, ubuntu:raring

cAdvisor
+Prometheus
+Grafana
@[https://dzone.com/refcardz/intro-to-docker-monitoring?chapter=6]
@[https://github.com/google/cadvisor/blob/master/docs/running.md#standalone]
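A typical standalone cAdvisor launch, roughly as in the running.md doc linked
above (image name/tag may differ between versions):

  $ sudo docker run -d --name=cadvisor -p 8080:8080 \
      --volume=/:/rootfs:ro \
      --volume=/var/run:/var/run:ro \
      --volume=/sys:/sys:ro \
      --volume=/var/lib/docker/:/var/lib/docker:ro \
      gcr.io/cadvisor/cadvisor      # ← UI and Prometheus /metrics on port 8080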
Managing Images
  Managing images
(List all image related commands with: $ docker image)

  $ docker images        # ← List local ("downloaded/installed") images

  $ docker search redis  # ← Search remote images @ Docker Hub: 

  $ docker rmi /${IMG_NAME}:${IMG_VER}  # ← remove (local) image
  $ docker image prune                  # ← remove ºallº unused images

-ºPUSH/PULL Images from Private Registry:º

  -ºPRE-SETUP:º (Optional and opinionated, but recommended)
    Define ENV. VARS. in BºENVIRONMENTº file

    $ cat BºENVIRONMENTº
    #  COMMON ENV. PARAMS for PRIVATE/PUBLIC REGISTRY: {{
    USER=user01
    IMG_NAME="postgres_custom"
    IMG_VER="1.0"  # ← Defaults to 'latest'
    # }} 
    # PRIVATE REGISTRY ENV. PARAMS ONLY : {{
    SESSION_TOKEN="dAhYK9Z8..."  # ← Updated Each 'N' hours
    REGISTRY=docker_registry.myCompany.com
    # }}


  -ºUPLOAD IMAGEº
   ºALT1: UPLOAD TO PRIVATE REGISTRY:º      │ ºALT2: UPLOAD TO DOCKER HUB:º
    $ cat push_image_to_private_registry.sh │  $ cat push_image_to_dockerhub_registry.sh
    #!/bin/bash                             │  #!/bin/bash
    set -e # ← stop on first error          │  set -e # ← stop on first error
    . BºENVIRONMENTº                        │  . BºENVIRONMENTº
                                            │
    sudo docker ºloginº \                   │  sudo docker ºloginº \
       -u ${USER} \                         │     -u ${USER}
       -p ${SESSION_TOKEN} \                │
       ${REGISTRY}                          │
                                            │
    sudo docker ºpushº \                    │  sudo docker ºpushº \
       ${REGISTRY}/${USER}/${IMG_NAME}:${IMG_VER}  │     ${USER}/${IMG_NAME}:${IMG_VER}


  -ºDOWNLOAD IMAGEº
   ºALT1: DOWNLOAD FROM PRIVATE REGISTRY:º  │ ºALT2: DOWNLOAD FROM DOCKER HUB:º
   $ docker pull \                          │ $ docker pull \
     ${REGISTRY}/${USER}/${IMG_NAME}:${IMG_VER}  │   ${USER}/${IMG_NAME}:${IMG_VER}
Build image
$ docker build \
   --build-arg http_proxy=http://...:8080 \
   --build-arg https_proxy=https://..:8080 \
   -t figlet .

$ cat ./Dockerfile
FROM ubuntu

RUN apt-get update
RUN apt-get install -y figlet  # ← install figlet

ENTRYPOINT ["figlet", "-f", "script"]
  Image tags
Adding a tag to an image essentially adds an alias for it.
A tag consists of:
    'registry_server'/'user_name'/'image_name':'tag'
    ^^^^^^^^^^^^^^^^^
    defaults to Docker Hub
    if not indicated

Tag image:
  $ docker tag jdeiviz/clock /clock:1.0
Show image
change history
   $ docker history /clock:1.0
Commit image
modifications
(Discouraged most of the time; modify the Dockerfile instead)
host-mach $ docker run -it ubuntu bash     # Boot up existing image
container # apt-get install ...            # Apply changes to running instance
host-mach $ docker diff $(docker ps -lq)   # Show changes done in running container
host-mach $ docker commit $(docker ps -lq) # Commit/Confirm changes (prints new image ID)
host-mach $ docker tag ˂new_image_id˃ figlet # Tag the new image
host-mach $ docker run -it figlet          # Boot new image instance
Advanced Image creation
ONBUILD
(base Dockerfile
 for devel)

Add "ONBUILD" instructions to the base image for the steps that must run only
when another image is built FROM it (they are deferred to the child build):
| Dockerfile.base                | Dockerfile
| FROM node:7.10-alpine          | FROM node-base
|                                |
| RUN mkdir /src                 | EXPOSE 8000
| WORKDIR /src
|
| ONBUILD ARG NODE_ENV
| ONBUILD ENV NODE_ENV $NODE_ENV
|
| COPY package.json /src
|
| RUN npm install
|
| COPY . /src
|
| CMD [ "npm", "start" ]

  $ docker build -t node-base -f Dockerfile.base . # STEP 1: Compile base image
  $ docker build -t node -f Dockerfile .           # STEP 2: Compile image
  $ docker run -p 8000:8000 -d node
Multi-Stage
- Multi-stage builds allow for final "clean" images that contain just the
  application binaries, without the build/compilation tools that are only
  needed during the build itself.
  This allows for much lighter final images.
                   ┌───────────────────────────────┬─────────────────────────────────────────────────┐
                   │ "STANDARD" BUILD              │ multi-stage BUILD                               │
┌──────────────────┼───────────────────────────────┼─────────────────────────────────────────────────┤
│ Dockerfile       │ Dockerfile                    │ Dockerfile.ms                                   │
│                  │ FROM golang:alpine            │ FROM ºgolang:alpineº AS Oºbuild-envº            │
│                  │ WORKDIR /app                  │ ADD . /src                                      │
│                  │ ADD . /app                    │ RUN cd /src ; go build -o app                   │
│                  │ RUN cd /app ; go build -o app │                                                 │
│                  │ ENTRYPOINT ./app              │ FROM ºalpineº                                   │
│                  │                               │ WORKDIR /app                                    │
│                  │                               │ COPY --from=Oºbuild-envº /src/app /app/         │
│                  │                               │ ENTRYPOINT ./app                                │
├──────────────────┼───────────────────────────────┼─────────────────────────────────────────────────┤
│ Compile image    │ $ docker build . -t hello-go  │ $ docker build . -f Dockerfile.ms -t hello-goms │
├──────────────────┼───────────────────────────────┼─────────────────────────────────────────────────┤
│ Exec container   │ $ docker run hello-go         │ $ docker run hello-goms                         │
├──────────────────┼───────────────────────────────┼─────────────────────────────────────────────────┤
│ Check image size │ $ docker images               │ $ docker images                                 │
└──────────────────┴───────────────────────────────┴─────────────────────────────────────────────────┘
  Distroless
- "Distroless" images contain only your application and its runtime dependencies.
  (no package managers, shells, ...)
Notice: in Kubernetes we can also use init containers based on heavier images
        with a full set of tools (sed, grep, ...) for pre-setup, avoiding the
        need to include those tools in the final image.

Stable:                      experimental (2019-06)
gcr.io/distroless/static     gcr.io/distroless/python2.7
gcr.io/distroless/base       gcr.io/distroless/python3
gcr.io/distroless/java       gcr.io/distroless/nodejs
gcr.io/distroless/cc         gcr.io/distroless/java/jetty
                             gcr.io/distroless/dotnet

Ex.: Java multi-stage Dockerfile:
@[https://github.com/GoogleContainerTools/distroless/blob/master/examples/java/Dockerfile]
 ºFROMº openjdk:11-jdk-slim AS Oºbuild-envº
  ADD . /app/examples
  WORKDIR /app
  RUN javac examples/*.java
  RUN jar cfe main.jar examples.HelloJava examples/*.class

  FROM gcr.io/distroless/java:11
  COPY --from=Oºbuild-envº /app /app
  WORKDIR /app
  CMD ["main.jar"]
rootless Buildah
@[https://opensource.com/article/19/3/tips-tricks-rootless-buildah]
- Building containers in unprivileged environments:
  Buildah is a tool and library for building Open Container Initiative (OCI)
  container images. It is complementary to Podman (which manages pods,
  containers, and container images); both projects are maintained by the
  "containers" organization. The article above covers rootless Buildah and
  the differences between it and Podman.
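
A minimal rootless build-and-run sketch with Buildah + Podman (assumes both
are installed and user namespaces are set up for the unprivileged user):

  $ buildah bud -t myimage .   # ← build from ./Dockerfile as a normal user
  $ buildah images             # ← list locally built images
  $ podman run --rm myimage    # ← run it, also rootless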
alpine how-to
The next image (ethereum/solc) is just º6Mbytesº in size:
@[https://hub.docker.com/r/ethereum/solc/dockerfile]
Dockerfile:
    01	FROM alpine
    02	MAINTAINER chriseth 
    03	
    04	RUN \
    05	  apk --no-cache --update add build-base cmake boost-dev git ⅋⅋ \
    06	  sed -i -E -e 's/include ˂sys\/poll.h˃/include ˂poll.h˃/' /usr/include/boost/asio/detail/socket_types.hpp  ⅋⅋ \
    07	  git clone --depth 1 --recursive -b release https://github.com/ethereum/solidity                           ⅋⅋ \
    08	  cd /solidity ⅋⅋ cmake -DCMAKE_BUILD_TYPE=Release -DTESTS=0 -DSTATIC_LINKING=1                             ⅋⅋ \
    09	  cd /solidity ⅋⅋ make solc ⅋⅋ install -s  solc/solc /usr/bin                                               ⅋⅋\
    10	  cd / ⅋⅋ rm -rf solidity                                                                                   ⅋⅋ \
    11	  apk del sed build-base git make cmake gcc g++ musl-dev curl-dev boost-dev                                 ⅋⅋ \
    12	  rm -rf /var/cache/apk/*

Notes:
  - line 07: º--depth 1º: faster cloning (just last commit)
  - line 07: the cloned repo contains next º.dockerignoreº:
    01 # out-of-tree builds usually go here. This helps improving performance of uploading
    02 # the build context to the docker image build server
    03 */build*
    04
    05 # in-tree builds
    06 */deps*
TODO Classify
Troubleshooting
- /var/lib/docker/devicemapper/devicemapper/data consumes too much space
$ sudo du -sch /var/lib/docker/devicemapper/devicemapper/data
14G     /var/lib/docker/devicemapper/devicemapper/data
[REF@StackOverflow]
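
A usual first step is reclaiming unused Docker objects (sketch; prune deletes
data, review the warning before confirming). Note that with loop-lvm the data
file is sparse, so its apparent size may not shrink:

  $ docker system df        # ← space used by images, containers and volumes
  $ docker system prune -a  # ← remove stopped containers, unused networks/images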
Live Restore
@[https://docs.docker.com/config/containers/live-restore/]
Keep containers alive during daemon downtime
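
Enabled via the daemon configuration (sketch):

  $ cat /etc/docker/daemon.json
  {
    "live-restore": true
  }
  $ sudo systemctl reload docker   # ← or send SIGHUP to dockerd to reload the config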