This "cheat-sheet with steroids" presents an always-growing set of "TOP Java recipes" well ordered&classified extracted from DZone/InfoQ/Redhat Developrs/Medium/bealdun/ articles, github code, ... as well as author's own experience in different projects.
External Links
- Lang&VM specs
- AdoptOpenJDK prebuilt binaries
- @[]
- @[]

- (Active) Java JVM List
- Eclipse Tools for [QA]
- Excellent Java Blog (Spanish)
- Java Enhancement Proposals
- Douglas C. Schmidt Java Lessons:
- Awesome Java

-@[] Collection of production-ready software developed in Java:
   databases, caches, servers, ...

Bibliography
- Effective Java, 3rd Edition, Joshua Bloch. ISBN-10: 0134685997, ISBN-13: 978-0134685991
- Java Performance: The Definitive Guide, Scott Oaks. ISBN-10: 1449358454, ISBN-13: 978-1449358457
• Avoid Nulls, prefer final:
  SomeClass instance = null;
  if (condition1) {   ← Initialization depends on runtime checks (condition1/2/...).
      instance = val1;   We can forget to add some condition, wrongly leaving
  }                      instance as a false null (probably in a later
  if (condition2) {   ← iteration, weeks or months after the first implementation,
      instance = val2;   when instance is no longer fresh in our memory).
  }                   ← At this point we forgot to check for condition3, or maybe
                        condition3 didn't apply when the code was written, but
                        appeared later on after some unrelated change.
  serviceX.functionN(..., instance, ...) ← At this point instance can be undefined.

 ºfinalºint instance;  ← "final" forces the compiler to check every possible execution
  if (condition1) {      path and fail to compile if some branch does not initialize
      instance = val1;   instance properly.
  }                    - Final values are safer (against developer mistakes and in
  if (condition2) {      concurrent code).
      instance = val2;  - Final values make the JIT happier.
  }
  servX.funcN(instance) ← At this point the compiler will abort if there is some
                          execution branch where instance remains uninitialized.

  See also:
  - Immutable objects are faster (and safer).
    (A final variable is the simplest example of an immutable value.)
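The pattern above can be sketched as a compilable example. A minimal sketch with made-up names (FinalDemo, pick, "val1"): with "final", the compiler rejects any path that leaves the variable unassigned, so the if/else chain must be exhaustive.

```java
public class FinalDemo {
    static String pick(int condition) {
        final String instance;           // declared final, not yet assigned
        if (condition == 1) {
            instance = "val1";
        } else if (condition == 2) {
            instance = "val2";
        } else {
            instance = "default";        // removing this branch → compile error:
        }                                // "variable instance might not have been initialized"
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(pick(1));     // val1
        System.out.println(pick(3));     // default
    }
}
```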

• Catch blocks must rethrow if the exception cannot be fixed in the catch block.
  HINT: 99% of the time it can NOT be fixed in the catch block.

RºWRONGº                     BºRIGHTº
  ========================     ========================
  } catch (IOException e) {    } catch (IOException e) {
      e.printStackTrace();         throw new RuntimeException(e);
  }                            }

  The correct "right" code looks weaker, since the exception is propagated ... and that's
  a good thing: the error has not been fixed, but some other piece of code (or maybe
  the final user) will be notified. In the "wrong" version the error is just hidden. This
  will trigger undefined behaviour, null pointers, nightmares and developers' unpaid extra hours.
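A runnable sketch of the "right" pattern (names RethrowDemo/readConfig are made up): wrapping the checked IOException into an unchecked exception propagates the failure while preserving the original error as the cause.

```java
import;
import;

public class RethrowDemo {
    static String readConfig() {
        try {
            throw new IOException("config file missing");   // simulated I/O failure
        } catch (IOException e) {
            // RIGHT: propagate instead of swallowing (e.printStackTrace() would hide it)
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        try {
        } catch (UncheckedIOException e) {
            System.out.println("caller notified: " + e.getCause().getMessage());
        }
    }
}
```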

• Avoid Strings as much as possible: you are turning a strongly typed
  language into a weakly typed one at runtime.
  If forced to use strings, try to protect them with the Checker Framework's
  @Fenum("country") annotation.
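Where the set of valid values is known, an enum removes the "stringly typed" parameter entirely, moving the check from runtime to compile time. A minimal sketch with made-up names (Country, shippingCost):

```java
public class EnumVsString {
    enum Country { ES, FR, DE }

    // shippingCost("Spian") no longer compiles; only Country values are accepted.
    static int shippingCost(Country c) {
        switch (c) {
            case ES: return 5;
            case FR: return 7;
            default: return 9;
        }
    }

    public static void main(String[] args) {
        System.out.println(shippingCost(Country.ES));  // 5
    }
}
```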

• Avoid huge interfaces. Prefer small, decoupled ones with few exposed methods:
  keep interfaces related to security decoupled from storage, from cache, from ...

• Avoid interfaces that will only ever be implemented by a single class, especially if
  such class is a data-like one with all-immutable (final) fields. Data classes can be
  considered interfaces in themselves, since they indicate the contract for the data.

• Avoid checked exceptions. They were an error in the initial design of the Java language.
  Each time a checked exception is found, convert it to a runtime (subclass) exception.
  More info at: @[]

(Necessarily incomplete, but still quite pertinent, list of core developers and companies)
- James Arthur Gosling:  Founder and lead-designer of the Java Programming Language

- Joshua J. Bloch:
  - Author of the book "Effective Java" (a must read)
    and co-author of two other books:
    - Java Puzzlers (2005)
    - Java Concurrency in Practice (2006)
  - Led the design and implementation of numerous
    Java platform features, including the
    Java Collections Framework, the java.math package,
    and the assert mechanism.

- Julien Viet:
  Core developer of Vert.x, CRaSH,
  and many other interesting Java projects.

- Ben Evans:
  - jClarity co-founder.
  - Java Champion, author, speaker, consultant.
  - Voting member on Java's governing bodies.
  - Track lead for Java / JVM at InfoQ.
  - Author of 5 books, including:
    - "The Well-Grounded Java Developer",
    - new editions of "Java in a Nutshell",
    - "Java: The Legend" and "Optimizing Java".
  - From one of his talks: "... I will explain how we might start to implement
    a JVM from scratch ... then we will show how the Rust programming language
    provides a good alternative implementation language for our simple JVM.
    We will showcase some basic Rust language features and show how they can
    be used to provide a version of our JVM that is much cleaner and easier
    to understand, even if you've never seen Rust code before!"

- Emmanuel Bernard: Distinguished Engineer and Chief Architect Data at
  Red Hat (middleware). His work is Open Source. He is most well known for his
  contributions and lead of the Hibernate projects as well as his contribution
  to Java standards. His most recent endeavour is Quarkus (A Kubernetes Native
  Java stack tailored for GraalVM and OpenJDK HotSpot, crafted from the best of
  breed Java libraries and standards).

- Chad Arimura, vice president of Java Developer Relations at Oracle.

What's new
v.17 2021-09-14
- 1st long-term support (LTS) release after JDK 11 (2018).
- 14 JEPs includes:
  - 409:ºSealed Classesº
         package com.example.geometry;

         public abstract sealed class Shape           ← Syntax 1: permitted classes in
             permits com.example.polar.Circle,                    different files.
                     com.example.quad.simple.Square { ... }

         abstract sealed class Root { ...             ← Syntax 2: permitted classes inside
             final class A extends Root { ... }                   the parent class.
             final class B extends Root { ... }
             final class C extends Root { ... }
         }
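A runnable sketch (Java 17+, made-up names SealedDemo/Shape/area): a sealed interface closes the set of implementations, so exhaustive handling can be checked.

```java
public class SealedDemo {
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side)  implements Shape {}

    static double area(Shape s) {
        // instanceof pattern matching (Java 16+); Shape is sealed, so the
        // compiler knows Circle and Square are the only possibilities.
        if (s instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalStateException("unreachable: Shape is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3)));   // 9.0
    }
}
```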

  - 412: Foreign Function & Memory API (Incubator)
         API by which Java programs can interoperate with code and
         data outside of the Java runtime. By efficiently invoking foreign
         functions (i.e., code outside the JVM), and by safely accessing
         foreign memory (i.e., memory not managed by the JVM),ºthe API enables º
       º Java programs to call native libraries and process native data º
       º without the brittleness and danger of JNI.º
         - Goals:
           - Ease of use, replacing JNI with better, pure-Java development model.
           - Performance: comparable or better than JNI and sun.misc.Unsafe.
           - Generality : operate on different kinds of foreign memory
                          (e.g., native memory, persistent memory, and managed
                          heap memory) and, over time, to accommodate other platforms
                          (e.g., 32-bit x86) and foreign functions written in languages
                          other than C (e.g., C++, Fortran).
           - Safety     :

  - 306: Restore Always-Strict Floating-Point Semantics
         - Ease development of numerically-sensitive libraries, including
           java.lang.Math and java.lang.StrictMath.

         - Provide more regularity in a tricky aspect of the platform.

  - 356: Enhanced Pseudo-Random Number Generators            [cryptography]
  - 403: Strongly Encapsulate JDK Internals
         it will no longer be possible to bypass strong encapsulation
         via --illegal-access flag.
  - 406: Pattern Matching for switch (Preview)
  - 407: Remove RMI Activation
  - 410: Remove the Experimental AOT and JIT Compiler
  - 411: Deprecate the Security Manager for Removal
  - 414: Vector API (Second Incubator)
  - 415: Context-Specific Deserialization Filters

  - 382: New macOS Rendering Pipeline
  - 391: macOS/AArch64 Port
  - 398: Deprecate the Applet API for Removal

v.15 2020-09-15 @[]
- BºRECORDS!!!: non-verbose immutable classes.º Ex.:
  Alt 1: short form
   ºrecord Personº(String name, int age) { }
  Alt 2: validating constructor
   ºrecord Personº(String name, int age) {
      Person {        ← (optional) compact constructor can NOT compute state,
                        only validate|throw
        if (age < 0) throw new IllegalArgumentException("Too young");
      }
    }
    var john = new ºPersonº("john", 76);
  Records:
  - can NOT have any additional (internally computed) private or public
    instance fields.
  - can NOT extend classes.
  - are ALWAYS FINAL (cannot be extended).
- Production-ready ZGC low-latency garbage collector.
  "...Oracle expects ZGC to be quite impactful for a multitude of workloads,
  providing a strong garbage collection option for developers..."
- Text blocks (JEP 378): make it easy to express strings spanning several
  lines ("templates", ...)
- JEP 360: Sealed Classes (Preview)
  Avoid extending classes not designed to be extended
  (control how a class is used by third parties).
- JEP 383: Foreign Memory Access API (Preview)
  - access "foreign" (outside the Java heap) memory.
    Part of Project Panama, aiming at a better connection with
    native (C/assembler) code.
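The record feature can be exercised end to end. A minimal sketch (RecordDemo is a made-up wrapper class): a record gets constructor, accessors, equals/hashCode and toString for free, while the compact constructor only validates.

```java
public class RecordDemo {
    record Person(String name, int age) {
        Person {   // compact constructor: validate, never compute state
            if (age < 0) throw new IllegalArgumentException("Too young");
        }
    }

    public static void main(String[] args) {
        var john = new Person("john", 76);
        System.out.println(;                       // john
        System.out.println(john.equals(new Person("john", 76))); // true (value semantics)
    }
}
```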
v.14 2020-03-??
└ More container awareness:
  - NUMA container support added to HotSpot (JDK-8198715)
  - Add Container MBean to JMX (JDK-8199944)
└ BºRecord types in Java 14:º Records aim to enhance the language's ability
  to model Bº"plain data" aggregates with less ceremony.º
└ Shenandoah GC:
  "Shenandoah GC in JDK 14, Part 1: Self-fixing barriers", Roman Kennke,
  March 4, 2020. The development of the Shenandoah Garbage Collector (GC)
  in JDK 14 has seen significant improvements:
  - Self-fixing barriers aim to reduce local latencies spent in barrier
    mid- and slow paths.
  - Concurrent root processing and concurrent class unloading aim to
    reduce GC pause time by moving GC work from the pause to a
    concurrent phase.
v 12,13 @[] @[] @[] @[]
└ More container awareness:
  - Add container support to the jhsdb command (JDK-8205992)
  - Flight Recorder improvements for containers (JDK-8203359)
  - Improve container support when Join Controllers option is used (JDK-8217766)
  - Improve systemd slice memory limit support (JDK-8217338)
  - JFR jdk.CPUInformation event reports incorrect info. when running in a
    Docker container (JDK-8219999)
v.11(LTS) 2018/09 @[] @[]
- More container awareness:
  - Remove -XX:+UnlockExperimentalVMOptions,
    -XX:+UseCGroupMemoryLimitForHeap (JDK-8194086)
  - jcmd -l and jps commands do not list JVMs in Docker containers (JDK-8193710)
  - Container Metrics (-XshowSettings:system) (JDK-8204107)
  - Update CPU count algorithm when both cpu shares and quotas are used
    (JDK-8197867): -XX:+PreferContainerQuotaForCPUCount
- New major features:
  - Launch single-file source-code programs (JEP 330). Next code will execute:
    $ java
  - New String methods:
    - isBlank()     : true for empty or whitespace-only strings.
    - lines()       : returns a Stream of substrings split by line terminators.
      System.out.println(
        "JD\nJD\nJD".lines().collect(Collectors.toList()) );
    - strip()       : similar to trim() but Unicode-aware.
      stripLeading(), stripTrailing()
    - repeat(int n) : repeats the string n times.
  - Local-Variable Syntax for Lambda Parameters (JEP 323)
    (var s1, var s2) -> s1 + s2
    - While it's possible to just skip the type in the lambda, it becomes
      necessary for annotations like @Nullable.
  - Nest-Based Access Control (fixes some issues when using
    (discouraged) reflection).
  - Dynamic Class-File Constants (JEP 309)
    - The class-file format now supports a new constant pool form,
     ºCONSTANT_Dynamicº, reducing the cost and disruption of developing
      new forms of materializable class-file constants.
  - Epsilon: A No-Op Garbage Collector (JEP 318):
    - Experimental.
    - Unlike the JVM GCs, which are responsible for allocating memory and
      releasing it, Epsilon only allocates memory. Useful for:
      -ºExtremely short-lived jobsº
      - Performance testing
      - Memory pressure testing
      - VM interface testing
      - Last-drop latency improvements
      - Last-drop throughput improvements
  - Remove the JavaEE and CORBA Modules (JEP 320): java.xml.ws,
    java.xml.bind, java.activation, java.xml.ws.annotation, java.corba,
    java.transaction,,, jdk.xml.bind
    RºWARNº: EE modules contain the support for JAXB and SOAP, still in
    relatively widespread use.
    - Check carefully whether build scripts need to be modified.
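The Java 11 String additions listed above can be exercised directly (class name Java11Strings is made up):

```java
import java.util.stream.Collectors;

public class Java11Strings {
    public static void main(String[] args) {
        System.out.println("  ".isBlank());                        // true
        System.out.println("JD\nJD".lines()
                                   .collect(Collectors.toList())); // [JD, JD]
        System.out.println(" hi ".strip());                        // hi
        System.out.println("ab".repeat(3));                        // ababab
    }
}
```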
- Flight Recorder (JEP 328)
  - Profiling tool gathering diagnostics and profiling data.
  - Negligible performance overhead (<1%):ºcan be used in productionº.
- HTTP Client (JEP 321)
  - HTTP/1.1,ºHTTP/2 and WebSocketsº
  - Designed to improve the overall performance of sending requests by a
    client and receiving responses from the server.
- TLS 1.3
- Convenient reading/writing of Strings to/from files:
  readString(), writeString():
    Path path = Filesº.writeStringº(
      Files.createTempFile("test", ".txt"),
      "This was posted on JD");
    System.out.println(path);
    String s = Filesº.readStringº(path);
    System.out.println(s);  // This was posted on JD
- ChaCha20, Poly1305 crypto (JEP 329)
  - Implemented in the SunJCE provider.
- Improve (String and array) AArch64 processor intrinsics (JEP 315)
  - Also implements new intrinsics for the (java.lang.Math) sin, cos
    and log functions.
- ZGC (JEP 333): Scalable Low-Latency Garbage Collector
  - Experimental.
  - Sub-10ms pause times, less than 15% performance penalty.
- Deprecate Nashorn JS Engine (JEP 335)
v.10 (2018/03)
- More container awareness:
  - Improve heap memory allocations (JDK-8196595):
    - -XX:InitialRAMPercentage, -XX:MaxRAMPercentage and -XX:MinRAMPercentage
      (-XX:InitialRAMFraction, -XX:MaxRAMFraction and -XX:MinRAMFraction
      are Rºdeprecatedº)
  - Total number of CPUs available to the Java process calculated from
    --cpus, --cpu-shares, --cpu-quota (JDK-8146115)
    - Use -XX:-UseContainerSupport to return to the old behaviour.
    - -XX:ActiveProcessorCount sets the number of processors the JVM
      will use internally.
  - Attach on Linux became relative to /proc/pid/root and namespace-aware
    (jcmd, jstack, ...)
  - Read also: JVMs before 10 had been implemented before cgroups, hence
    were not optimized for executing inside a container.
- Application Class-Data Sharing (JEP ???)
  - Extends the existing Class-Data Sharing ("CDS") to allow application
    classes to be placed in the shared archive in order to improve startup
    and footprint.
- Parallel Full GC for G1: improves G1 worst-case latencies.
- Garbage Collector Interface: improves source code isolation of different GCs.
- Consolidate JDK Forest into a Single Repository.
- Local-Variable Type Inference:
  - declarations of local variables with initializers
  - introducesºvarº
- Remove Native-Header Generator Tool (javah), superseded by superior
  functionality in javac.
- Thread-Local Handshakes:
  - Allows executing a callback on threads without performing a global VM
    safepoint. Makes it both possible and cheap to stop individual threads,
    and not just all threads or none.
- Time-Based Release Versioning.
- Root Certificates: provides a default set of root CAs in the JDK.
- Heap Allocation on Alternative Memory Devices:
  - Enables the HotSpot VM to allocate the Java object heap on an
    alternative memory device, such as an NV-DIMM, specified by the user.
- Experimental Java-Based JIT Compiler (Graal):
  - Linux/x64 platform only.
- Additional Unicode Language-Tag Extensions.
- Removed Features and Options.
v.9 (2017/09)
└ -XX:ParallelGCThreads and -XX:CICompilerCount are set based on container
  CPU limits (can be overridden). Calculated from --cpuset-cpus.
└ Memory configuration for containers:
  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  - Set -XX:MaxRAMFraction to 2 (default is 4).
- Java Platform Module System:
  - Based on Project Jigsaw.
  - Divides the JDK into a set of modules for combining at run, compile,
    or build time.
  - Enables understanding of dependencies across modules.
  - Allows developers to more easily assemble and maintain sophisticated
    applications.
  - Allows scaling down to smaller devices.
  - Improves security and performance.
  - Aspects include:
    - application packaging
    - JDK modularization
    - reorganizing source code into modules.
  - The build system is enhanced to compile modules and enforce module
    boundaries at build time.
    (Java 9 allows illegal reflective access to help migration.)
- Reactive Streams (reactive-streams.org):
  - A small spec, also adopted in Java 9, that defines the interaction
    between asynchronous components with back pressure.
  - For example, a data repository, acting as Publisher, can produce data
    that an HTTP server, acting as Subscriber, can then write to the
    response. The main purpose of Reactive Streams is to allow the
    subscriber to control how fast or how slow the publisher produces data.
- Ahead-of-time (AoT) compilation (experimental):
  - Improves startup time, with limited impact on peak performance.
-ºREPL (read-eval-print loop)º
  - jShell: interactively evaluates statements "a la script".
    - tab completion
    - automatic addition of needed terminal semicolons.
  - jShell API for IDE integration.
- Streams API enhancements:
  - The Java 8 Stream API allows processing data declaratively while
    leveraging multicore architectures.
  - Java 9 adds methods to conditionally take and drop items from a Stream,
    iterate over Stream elements, and create a stream from a nullable value,
    while expanding the set of Java SE APIs that can serve as Stream sources.
- Code cache can be divided in Java 9:
  - The code cache can now be divided into segments to improve performance
    and allow extensions such as fine-grained locking, resulting in
    improved sweep times.
- DTLS (Datagram Transport Layer Security) security API:
  - Prevents eavesdropping, tampering and message forgery in client/server
    communications.
- Java 9 deprecates and removes:
  - Applet API and appletviewer (alternative: Java Web Start).
  - Concurrent Mark Sweep (CMS) GC.
  - JVM TI (Tool Interface) hprof (Heap Profiling) agent, superseded in the JVM.
  - jhat tool, obsoleted by superior heap visualizers and analyzers.
JAVA 8
└ 8u131: first version to support containers.
  RºWARNº: do not use any version below that. (TODO)
  @[] @[]
- TLS enhancements backported to 1.8 (HTTP/2) @[]
JVM internals
JVM Troubleshooting and Monitoring
- Flight Recorder:

- Built-in tools in JDK:           - Docker commands:
  - jstat                            - stats
  - jcmd                             - inspect
  - jmap (Not recommended)           - top
  - jhat ...
  - jstack

                                   - Container aware tools
- Expose JMX port                    - ctop
  - VisualVM                         - dstat
  - jConsole
                                   - CAdvisor
- Micrometer@[#micrometer_summary] - Prometheus
- Others: New Relic, Stackify,     - Docker EE, Datadog, Sysdig,...
  AppDynamics, Dynatrace, ...

JVM Safepoints
- @[]    [TODO]
   - Definition: Mutator threads: threads which manipulate the JVM heap.
     All Java threads are mutators; non-Java (native) threads may also be
     regarded as mutators when they call into JVM APIs that interact with the heap.
   - Safepoint:
     - range of execution where the state of the executing thread
       is well described since thread is NOT interacting with the heap:
     - used to put mutator threads on hold while the JVM 'fixes stuff up'
     - particularly useful to let JVM examine|change the heap (GC, ...)
       (no objects still alive and referenced from the stack)
   - thread is at safepoints when:
     - thread de-scheduling events: thread blocked on lock/synch.lock, waiting on a monitor, parked,
       or blocked on blocking-IO.
     - thread is executing JNI code.
   - thread is NOT at safepoints when:
     - executing bytecode (maybe, but not for sure).
     - thread interrupted (by the OS) while not at a safepoint.
   - JVM cannot force any thread into a safepoint state but ...
     JVM can stop threads from leaving a safepoint state.
     Q: How then to bring ALL threads to a safepoint?
     A: Java threads poll a 'safepoint flag' (global or thread-level) at 'reasonable' intervals
        and transition into a safepoint state (thread is blocked at a safepoint) when the flag is set.
        Q: How to avoid wasting time checking if C1/C2 (client/server) JIT compilers need to stop?
           How to keep safepoint polls to a minimum?
        A: These considerations combined lead to the following locations for safepoint polls:
           - Between any 2 bytecodes while running in the interpreter (effectively)
           - On 'non-counted' loop back edges in C1/C2 compiled code
           - Method entry (Zing,...) or exit (OpenJDK,...) in C1/C2 compiled code.
    public class WhenWillItExit {
      public static void main(String[] argc)
        throws InterruptedException {
        final int UP = Integer.MAX_VALUE;
        final Thread t = new Thread(() -> {
          long l = 0;
          for (int i = 0; i < UP ; i++) {     ┐ ºResult:º
            for (int j = 0; j < UP ; j++) {   │
              if ((j & 1) == 1) l++;          ├ long-type loops: 'uncounted' code.  ºSafepoints injected at each loop.º
            }                                 │  int-type loops:   'counted' code.ºNo safepoints injected.º
          }                                   ┘                  - Gº(Much) better performanceº  BUT ...
          System.out.println("How Odd:" + l);                      Rºother threads forced to suspend at theirº
        });                                                        Rºnext safepoint operation.º
        t.setDaemon(true);                    ┐
        t.start();                            ├ ºExpected:ºexit in ~5 seconds.
        Thread.sleep(5000);                   ┘ ºResult  :ºno safepoints means threads, JMX connections, ...
      }                                                    will have to wait for the daemon thread to exit
    }                                                      before reaching a global safepoint. Use -Xint to
                                                           disable C1/C2 compilation, or replace int → long
                                                           in the loop index, to restore the 5-second behaviour.

    (See original source for many interesting details on safepoint tuning)

     - Safepoint polls are dispersed at fairly arbitrary points and depending
       on execution mode, mostly at uncounted loop back edge or method return/entry.
     - Bringing the JVM to aºGLOBAL safepointºis high cost
   ☞ - For real-time applications,Bºit's critical to know about safepointsº
       avoiding 'counted' code.
     -XX:+PrintApplicationStoppedTime will log contained safepoint pauses.

BºProblems with (most) Sampling Profilersº:
- @[]
  - A large number of samples needed to get statistically significant results.
  - profiler should sample all points in a program run with equal probability
  - Generic profilers rely on the JVMTI spec:
    - JVMTI offers Rºonly safepoint-sampling stack trace collection optionsº:
      - Rºonly the safepoint polls in the running code are visible, skippingº
        Rºoptimized (counted-code) for-loops!!!º
      - Rºsamples are biased towards the next available safepoint poll locationº
      - A sample profiler can blame a "cheap method" 9 levels down the stack when the
        real culprit is the topmost method loop.
Micrometer
- Simple facade over the instrumentation clients of many monitoring systems.
- Instrument JVM-based apps without vendor lock-in.
  (Think SLF4J, but for application metrics! Supporting AppOptics,
   Atlas, Datadog, Dynatrace, Elastic, Ganglia, Graphite,
   Influx, Instana, JMX (hierarchical mapping), KairosDB, New Relic,
   Prometheus, SignalFx, Stackdriver, StatsD, Wavefront, ...)

- Recorded metrics are intended to be used to
  observe/alert/react to current/recent operational state.

Bºout-of-the-box instrumentation provided by Micrometerº
  - JVM Metrics on classloaders, memory, garbage collection,
    threads, etc.
  - Spring Boot 2.0.0.M5+: Micrometer used as instrumentation library powering
    the delivery of application metrics from Spring.
    2 simple steps setup: [low_code]
    - Declare maven dependency.
    - Add config. to application.yml

  - @[] (legacy support): drop-down support for Spring Boot 1.5.x.
  - Cache instrumentation for most popular caching frameworks.
  - OkHttpClient Instrumentation

Guides: [TODO]
Inside the JVM
- @[]

- JVM anatomy Park:

      │ JVM StartUp thread │
       v        v           v
       GC      Compiler   JAVA
     Threads   Thread    Threads
     ┌┐┌┐┌┐┌┐    ┌┐      ┌┐┌┐┌┐┌┐┌┐┌┐┌┐...
     ││││││││    ││      ││││││││││││││
     ││││││││    ││      ││││││││││││││
     ││││││││    ││      ││││││││││││││
     ││││││││    ││      ││││││││││││││
     ││││││││    ││      ││││││││││││││
     ········    ··      ··············

BºJIT compiler optimization levels:º
  - cold
  - warm
  - hot
  - very hot (with profiling)
  - scorching.
  The hotter the optimization level, the better the
  expected performance, but the higher the cost in terms of
  CPU and memory.  See also @[#jvm_app_checkpoint]
JVM Implementations

In practical terms, there is only one set of source code for the JDK.

- Anyone can take that source code, build it and publish it.
- The certification process ensures that the build is valid.
- Certification is run by the Java Community Process, which provides a
  Technology Compatibility Kit (TCK, sometimes referred to as the JCK).
  If a build passes the TCK then it is described as "Java SE compatible".
  Note: a build can NOT be referred to as "Java SE" without paying for a
        commercial license from Oracle.
        Ex: AdoptOpenJDK builds passing the TCK are "Java SE compatible"
            (vs "Java SE").
  -RºWARNº: certification is currently on a trust basis: results are
            not submitted to the JCP/Oracle for checking, nor can they
            be made public.

- Existing builds include:
  - Oracle Java
  - OpenJ9      (Eclipse, "IBM")
    └ Pre-built binaries available at AdoptOpenJDK.
    └ Compared to Oracle's HotSpot VM, it touts higher
      start-up performance and lower memory consumption
      at a similar overall throughput.
    └ JIT with all optimization levels.
  - OpenJDK
  - GraalVM
  - Bellsoft Liberica:
    - $free TCK verified OpenJDK distribution for x86, ARM32 and ARM64.
  - Azul Systems
  - Sap Machine
    JDK for Java 10 and later under the GPL+CE license.
    They also have a commercial closed-source JVM
  - Amazon Corretto:
    zero-cost build of OpenJDK with long-term support that passes the
    TCK. It is under the standard GPL+CE license of all OpenJDK builds.
    Amazon will be adding their own patches and running Corretto on AWS
• Reference (non-mandatory) Linux OS setup for JVM server tasks extracted from:

  ...  If you're running on Linux, you must ensure that:

  $º$ sysctl vm.max_map_count   º ← Ensure it's greater than or equal to 524288
  $º$ sysctl fs.file-max        º ← Ensure it's greater than or equal to 131072
  $º$ ulimit -n                 º ← Ensure at least 131072 file descriptors
  $º$ ulimit -u                 º ← Ensure at least 8192 threads
     Tune current values like:
                                            Modify Kernel limits (permanently)
  + sysctl -w vm.max_map_count=524288     ← by adding those lines to º/etc/sysctl.confº
  + sysctl -w fs.file-max=131072            (or /etc/sysctl.d/99-sonarqube.conf )

  $º$ ulimit -n 131072                  º ← Modify user limits (temporarily)
  $º$ ulimit -u 8192                    º   (changes lost at system restart)

                                            Modify user limits (permanently):
                                            Add next lines to:
  + sonarqube   -   nofile   131072       ← Alt 1: º/etc/security/limits.confº (non SystemD)
  + sonarqube   -   nproc    8192

    [Service]                             ← Alt 2: SystemD unit definition     (SystemD)
  + LimitNOFILE=131072                      .
  + LimitNPROC=8192

         │    STACK ("SMALL")          │ HEAP  ("HUGE")
         │ private to each Thread      │ Shared by Threads
Contain  │ - references to heap objects│ - objects
         │ - value types               │ - instance fields
         │ - formal method params      │ - static fields
         │ - exception handler params  │ - array elements

* 1: Ref(erence) types on the stack point to real object in HEAP memory.

Reference Types regarding how the object on the heap is eligible for garbage collection
│ STRONG  │ - Most popular.
│         │ - The object on the heap it is not garbage collected
│         │   while there is a strong reference pointing to it, or if it is
│         │   strongly reachable through a chain of strong references.
│ WEAK    │ - most likely to not survive after the next garbage collection process.
│         │ - Is created like
│         │    WeakReference˂StringBuilder˃ reference =
│         │     = new WeakReference˂˃(new StringBuilder());
│         │ - ºEx.use case: caching:º
│         │   We let the GC remove the object pointed to by the weak reference,
│         │   after which a null will be returned
│         │   See JDK implementation at
│         │   @[]
│ SOFT    │ - used for more memory-sensitive scenarios
│         │ - Will be garbage collected only when the application is running low on memory.
│         │ - ºJava guarantees that all soft referenced objectsº
│         │  ºare cleaned up before throwing OutOfMemoryErrorº
│         │ - is created as follows:
│         │   SoftReference˂StringBuilder˃ reference = new SoftReference˂˃(new StringBuilder());
│ PHANTOM │ - Used to schedule post-mortem cleanup actions, since we know for
│         │   sure that objects are no longer alive.
│         │ - Used only with a reference queue, since the .get() method of
│         │   such references will always return null.
│         │ - ºThese types of references are considered preferable to finalizersº
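The WEAK case above can be made runnable. A minimal sketch (class name WeakRefDemo is made up): a WeakReference does not keep its referent alive; while a strong reference exists, get() returns the object, and once the strong reference is cleared the next GC is free to collect it (collection is not guaranteed to happen immediately).

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("cached");
        WeakReference<StringBuilder> ref = new WeakReference<>(sb);
        System.out.println(ref.get());   // cached (strong reference still held)

        sb = null;                       // drop the strong reference
        System.gc();                     // hint only: collection is not guaranteed
        System.out.println(ref.get());   // usually null after a GC cycle
    }
}
```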
Force string pool reuse
- Strings are immutable.
- Stored on the heap
- Java manages a string pool in memory,
  reusing strings whenever possible.

String string01 = "297",                                string01 == string02 : true
       string02 = "297",                                string01 == string03 : Rºfalseº (*1)
       string03 = new Integer(297).toString(),          string01 == string04 : true    (*2)
       string04 = new Integer(297).toString().intern(), string05 == string01 : Rºfalseº (*1)
       string05 = new String("297");

*1: RºPool reuse does not work for dynamically created stringsº
*2: If we consider that a computed String will be used quite often,
    we can force the JVM to add it to the string pool by calling the
    .intern() method on the computed string.
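The pooling rules can be demonstrated with a runnable sketch (class name InternDemo is made up; String.valueOf stands in for any runtime-computed string):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "297";
        String b = "297";                          // same pooled literal as a
        String c = String.valueOf(297);            // computed at runtime: NOT pooled
        String d = String.valueOf(297).intern();   // forced into the pool

        System.out.println(a == b);   // true
        System.out.println(a == c);   // false
        System.out.println(a == d);   // true
    }
}
```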
The JVM analyzes the variables on the stack and "marks" all the objects that need to be kept alive.
Then, all the unused objects are cleaned up.

The more garbage there is, and the fewer objects are marked alive, the faster the process is.

To optimize even more, heap memory actually consists of multiple parts (Java 8+):

  │ HEAP     │
  │ SPACES   │
  │ Eden     │ * Objects are placed here upon creation.
  │          │ * "small" ─→ gets full quite fast.
  │          │ * GC runs on the Eden space and marks objects as alive
  │ S0       │ * Eden Objects surviving 1st GC are moved here
  │          │
  │ S1       │ * Eden Objects surviving 2nd GC are moved here
  │          │ * S0   Objects surviving     GC are moved here
  │ Old      │ * Objects surviving "N" rounds of GC (N depends on the
  │          │   implementation) will most likely survive forever,
  │          │   and get moved here.
  │          │ * Bigger than Eden and S0,S1. GC doesn't run so often
  │ Metaspace│ * metadata about loaded classes
  │          │   (PermGen Before Java 8)
  │ String   │
  │   pool   │
GC Types
- default GC type is based on the underlying hardware
- programmer can choose which one should be used

   GC TYPE     | Description  / Use-Cases
|Serial GC     | - Single thread collector.
|              | - ºHalt all app threads while executingº
|              | - Mostly applies to ºsmall apps with small data usageº
|              | - Can be enabled through : Oº-XX:+UseSerialGCº
|Parallel GC   | - Multiple threads used for GC
|              | - ºHalt all app threads while executingº
|              | - Also known as throughput collector
|              | - Can be enabled through : Oº-XX:+UseParallelGCº
|Mostly        | - works concurrent to the application, "mostly" not halting threads
|Concurrent GC | - "mostly": There is a period of time for which the threads are paused.
|              |    Still, the pause is kept as short as possible to achieve the best GC performance.
|              | - 2 types of mostly concurrent GCs:
|              |   * Garbage First - high throughput with a reasonable application pause time.
|              |                   - Enabled with the option: Oº-XX:+UseG1GCº
|              |   * Concurrent Mark Sweep - app pause is kept to a minimum. RºDeprecated since Java 9º
|              |                   - Enabled with the option: Oº-XX:+UseConcMarkSweepGCº

See also:
Optimization Tips
- To minimize the memory footprint, limit the scope of the variables as much as possible.

- Explicitly set obsolete references to null, making them eligible for GC.

- Avoid finalizers. They slow down the process and they do not guarantee anything.
  Prefer phantom references for cleanup work.
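Since Java 9, java.lang.ref.Cleaner packages this phantom-reference cleanup pattern behind a simple API. A minimal sketch (class names illustrative); note the cleanup state must not hold a reference back to the resource itself, or it never becomes unreachable:

```java
import java.lang.ref.Cleaner;

public class CleanerDemo {
    private static final Cleaner CLEANER = Cleaner.create();

    static class Resource implements AutoCloseable {
        // Cleanup action: must NOT capture the Resource instance itself.
        private static class State implements Runnable {
            @Override public void run() { System.out.println("native handle released"); }
        }
        private final Cleaner.Cleanable cleanable = CLEANER.register(this, new State());

        @Override public void close() { cleanable.clean(); } // deterministic path, runs once
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            // ... use r ...
        } // close() runs the cleanup action; if close() were forgotten,
          // the Cleaner would run it after GC (best effort)
    }
}
```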

- Do not use strong references where weak or soft references apply.
 ºThe most common memory pitfalls are caching scenarios,when dataº
 ºis held in memory even if it might not be needed.º

- Explicitly specify heap size for the JVM when running the application:
  -  allocate a reasonable initial and maximum amount of memory for the heap.
   OºInitial heap size -Xms512m º – set initial heap     size to  512 megabytes
   OºMaximum heap size -Xmx1024mº – set maximum heap     size to 1024 megabytes
   OºThread stack size -Xss128m º – set thread stack     size to  128 megabytes
   OºYoung genera.size -Xmn256m º – set young generation size to  256 megabytes

REF: @[]
    - Initial Heap Size: -Xms: ˃= 1/64th of physical memory || reasonable minimum.
    - Maximum Heap Size: -Xmx: ˂= 1/4 th of physical memory || 1GB.
                  - Set -Xms equal to -Xmx to prevent pauses caused by heap expansion
                  ☞BºSetting Xms/Xmx increase GC predictabilityº.

    JVM settings are recommended for:
    -server               -server                   -server
    -Xms24G -Xmx24G        -Xms4G -Xmx4G            -Xms32G -Xmx32G

                      -XX:MaxGCPauseMillis=200     ← soft goal (JVM) best effort
                      -XX:ParallelGCThreads=20     ← value depends on hosting hardware
                      -XX:ConcGCThreads=5          ← value depends on hosting hardware
                      -XX:InitiatingHeapOccupancyPercent=70 ← Use 0 to force constant
                                                              GC cycles

    Rº There are 600+ arguments that you can pass to JVM to fine-tune GC and memory º
    Rº If you include other aspects, the number of JVM arguments will easily cross  º
    Rº 1000+. º
       (Or why Data Scientists end up using Python)

- If app OutOfMemoryError-crashes, extra info about the memory leak can be obtained through
  Oº-XX:+HeapDumpOnOutOfMemoryErrorº, creating a heap dump file

- Use Oº-verbose:gcº to get the garbage collection output.

- Eclipse Memory Analyzer Manual:

BºCommon Memory Leaks pitfallsº: @[]
- Logging frameworks (Apache Commons Logging/log4j/java.util.logging/...)
  trigger classloader leaks if the Rºlogging framework is supplied outside ofº
  Rºthe web application, such as within the Application Server.º
 -BºAdd next cleanup code to ServletContextListenerº:
    org.apache.commons.logging.LogFactory.                       // Alt.1
        release(Thread.currentThread().getContextClassLoader());
    org.apache.commons.logging.LogFactory.release(               // Alt.2
        this.getClass().getClassLoader() );
˂˂AutoCloseable˃˃ (1.7+)
- The Java garbage collector cannot automatically clean any
  resource apart from memory. All resources related to
  I/O (virtual/physical devices) must be closed programmatically,
  for example sockets, HTTP connections, database connections, ...
  since neither the compiler nor the runtime can take control of
  external (non-controlled) devices/resources.

  Java 1.7+ includes the interface java.lang.AutoCloseable to simplify
  resource cleaning.

  When a class representing an external resource implements this
  interface and isºused inside a try-with-resourcesº, its close
  method will be invoked automatically (the compiler will add
  the required code).

  Most core Java I/O classes already implement this interface.

  public class MyClassWithExternalResources
  implements ºjava.lang.AutoCloseableº, ... {
        private final MyExternalEventListener listener;
        private final MyIODevice device;
        private final MyHTTPConnection connection;
       ºpublic void close()º{
            listener  .close();
            device    .close();
            connection.close();
        }
  }

  public class SomeLongRunningClass {
    void useManyResourcesManyTimes(String path)  {

      for (int repeat=0; repeat˂100; repeat++) {
       ºtry (MyClassWithExternalResources i = º
            º new MyClassWithExternalResources(...))º {
            // ... use i ...
       º} catch( ... ) {º
       º}º
       º// At this point all resources have been closed. º
       º// If a runtime exception exits the function the º
       º// resource is also closed.                      º
      }
    }
  }
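When several resources are opened in one try-with-resources, they are closed in reverse declaration order. A minimal runnable sketch (class names illustrative):

```java
public class CloseOrderDemo {
    static class Res implements AutoCloseable {
        private final String name;
        Res(String name) { this.name = name; System.out.println("open  " + name); }
        @Override public void close() { System.out.println("close " + name); }
    }

    public static void main(String[] args) {
        try (Res a = new Res("A"); Res b = new Res("B")) {
            System.out.println("work");
        }
        // prints: open A, open B, work, close B, close A
    }
}
```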
Java lang. 101
Jabba JDK Vers. Mng
• pain-free JDK installing on Linux x86/x86_64/ARMv7+, macOS, Windows x86_64.
• Support for:
  · Oracle JDK (latest-version only)
  · Oracle Server JRE (latest-version only),
  · Adopt OpenJDK (jabba >=0.8.0 is required)
        Eclipse OpenJ9
  · Zulu OpenJDK (jabba >=0.3.0 is required)
  · IBM SDK, Java Technology Edition (jabba >=0.6.0 is required)
  · GraalVM CE
  · OpenJDK
  · OpenJDK Reference Implementation
  · OpenJDK with Shenandoah GC (jabba >=0.10.0 is required)
  · Liberica JDK
  · Amazon Corretto

$º$ curl -sL | bash º
$º$. ~/.jabba/                                                    º 
   Use $º... | bash --skip-rc º to avoid modifying common rc files.
   In that case add next lines to .bashrc / ...
 + export JABBA_VERSION=...
 + [ -s "$JABBA_HOME/" ] && source "$JABBA_HOME/"

$º$ jabba ls-remote                 º ←  list available JDK's
$º$ jabba ls-remote zulu@~1.8.60    º ← Narrow results
$º$ jabba ls-remote --latest=minor\ º ← semver allowed 
$º       "*@>=1.6.45 <1.9"          º

$º$ jabba ls                        º ← list all installed JDK's
$º$ jabba use adopt@1.8             º
$º$ jabba use zulu@~1.6.97          º
$º$ echo "1.8" > .jabbarc           º ← switch to JDK in .jabbarc
                                        It must be a valid YAML file.
                                        'jdk: 1.8' or simply '1.8' are valid
$º$ jabba alias default 1.8         º ← set ver. on shell (since 0.2.0)
                                        automatically used on new terminals

$º$ jabba install 1.15.0                    º ← install Oracle JDK
$º$ jabba install sjre@1.8                  º ← install Oracle Server JRE
$º$ jabba install adopt@1.8-0               º ← install Adopt OpenJDK (Hotspot)
$º$ jabba install adopt-openj9@1.9-0        º ← install Adopt OpenJDK (Eclipse OpenJ9)
$º$ jabba install zulu@1.8                  º ← install Zulu OpenJDK
$º$ jabba install ibm@1.8                   º ← install IBM SDK, Java Technology Edition
$º$ jabba install graalvm@1.0-0             º ← install GraalVM CE
$º$ jabba install openjdk@1.10-0            º ← install OpenJDK
$º$ jabba install openjdk-shenandoah@1.10-0 º ← install OpenJDK with Shenandoah GC
   everything is installed under ~/.jabba. Removing this directory cleans the install.
$º$ jabba uninstall zulu@1.6.77             º ← uninstall JDK
$º$ jabba link system@1.8.72 \              º ← link system JDK
$º  /usr/lib/jvm/jdk1.8.0_72.jdk            º

• To modify JDK system-wide:
$º$ sudo update-alternatives --install /usr/bin/java java ${JAVA_HOME%*/}/bin/java 20000   º
$º$ sudo update-alternatives --install /usr/bin/javac javac ${JAVA_HOME%*/}/bin/javac 20000º
• To switch among GLOBAL JDK system-wide:
$º$ sudo update-alternatives --config java º

  final String
     output1 = Stringº.formatº("%s = %d", "joe", 35), ← Format string
     output2 = Stringº.formatº("%4d",100);            ← Right-align to width 4: " 100"
  See also: [[Format String Checker?]]

  final String[] args = ...
  final String s1 = String.join(",", List.of(args)); // ← alt 1: String array to CSV
  final String s2 = String.join(",", args);          // ← alt 2: String array to CSV

Bºjava.util.StringJoinerº (1.8+) Concatenate Strings
- Ex:
  "[George:Sally:Fred]" may be constructed as follows:
  final StringJoiner sj = new StringJoiner(
                             ":" /* delimiter */,
                             "[" /* prefix */,
                             "]" /* suffix */);
  sj.add("George").add("Sally").add("Fred");
  String desiredString = sj.toString();


  List˂Integer˃ numbers = Arrays.asList(1, 2, 3, 4);
  String commaSeparatedNumbers = numbers.stream()
      .map(i -˃ i.toString())
      .collect(ºCollectors.joining(", ")º);

• Concatenating strings is very slow when compared to StringBuffer/StringBuilder.
  - StringBuffer   is thread-safe.
  - StringBuilder  is faster. (when thread-safety is not needed)
    for (int i = 0; i ˂ 10_000_000; i++) { sbuffer.append("x"); } // ← 2241 millisec
    for (int i = 0; i ˂ 10_000_000; i++) { sbuildr.append("x"); } // ←  753 millisec º~3.0x faster!!!º
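A rough way to reproduce the comparison yourself (loop count is illustrative; absolute timings are machine- and JIT-dependent, so treat them only as relative hints):

```java
public class AppendBench {
    public static void main(String[] args) {
        final int N = 1_000_000;

        long t0 = System.nanoTime();
        StringBuffer sbuffer = new StringBuffer();
        for (int i = 0; i < N; i++) { sbuffer.append('x'); }  // synchronized append
        long tBuffer = System.nanoTime() - t0;

        t0 = System.nanoTime();
        StringBuilder sbuilder = new StringBuilder();
        for (int i = 0; i < N; i++) { sbuilder.append('x'); } // unsynchronized append
        long tBuilder = System.nanoTime() - t0;

        System.out.printf("StringBuffer : %d ms%n", tBuffer  / 1_000_000);
        System.out.printf("StringBuilder: %d ms%n", tBuilder / 1_000_000);
    }
}
```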
Reading file
BºReading as lines of textº
  final File input = new File("input.txt");
  final String result =
        Files.toString(input, Charsets.UTF_8);   // ← Alt 1.(Guava) Read to String
                                                     RºWARN:ºOnly for small sizesº

  final File input = new File("input.txt");
  final List˂String˃ result =
       Files.readLines(input, Charsets.UTF_8);   // ← Alt 2.(Guava) Read to List
             ^^^^^^^^^                              RºWARN:ºOnly for small sizesº
             readFirstLine() can be useful sometimes

  final File input = new File("input.txt");
  final CharSource source =
      Files.asCharSource(input, Charsets.UTF_8); // ← Alt 3.(Guava) Use CharSource
  final String result = source.read();         // ← RºWARN:ºOnly for small sizesº

  final File input1 = new File("input1.txt"),
             input2 = new File("input1.txt");
  final CharSource
      source1 = Files.asCharSource(input1, Charsets.UTF_8),
      source2 = Files.asCharSource(input2, Charsets.UTF_8),
      source  =
         CharSource.concat(source1, source2);   // ← Alt 3.2(Guava) Concat CharSources
  final String result = source.read();

  final FileReader reader = new FileReader("input.txt");
  final String result =
        CharStreams.toString(reader);          // ← Alt 4. (Big Files) CharStreams
  reader.close();                              // ← RºWARN:º Don't forget to close

BºRead file as bytesº
  final File file = new File("input.raw");
  final ByteSource source                      // ← Alt 1: (Guava) Use ByteSource
        = Files.asByteSource(file)
          .slice(20 /* initial offset */, 100 /* len */);
  final byte[] result = source.read();

  FileInputStream reader =
     new FileInputStream("input.raw");        // ←   Using FileInputStream
  byte[] result =
     ByteStreams.toByteArray(reader);         // ← + ByteStreams

  final URL url =
       Resources.getResource("test.txt");     // ← Read Resource in classpath
  final String resource =
       Resources.toString(url, Charsets.UTF_8);

Reading big files
 ºtry (º
    final FileInputStream inputStream = new FileInputStream(path);
    final Scanner sc =                         ← Use Scanner to read line-by-line
        new Scanner(inputStream, "UTF-8");
 º) {º
    while (sc.hasNextLine()) {
        final String line = sc.nextLine();
        // ... do any process ...
        if (sc.ioException() != null) {        ← scanner captures ioExceptions:
            // handle error                      it's good to have a look
        }
    }
 º} finally {º ... º}º

  final LineIterator it =                      ← Alt 2. From Apache Commons IO
      FileUtils.lineIterator(theFile, "UTF-8");
  try {
      while (it.hasNext()) {
          String line = it.nextLine();         ← Read line-by-line
          // ...
      }
  } finally {
      LineIterator.closeQuietly(it);           ← Close resources
  }
Unix4j @[]
"""...Working in the finance industry in Melbourne, the authors of Unix4j
spent a lot of time writing complex text processing applications in bash.
Frustrated with the inherent limitations of the bash language; lack of
language support, IDEs, test frameworks etc, the authors decided to try
and bring the convenience of some of the Unix commands into the Java
language. You may notice that the implemented commands are more bent
towards text processing rather than OS and file manipulation. This is
intended as we see text processing to be the real benefit of Unix4j.
Not to say that this will always be the case. """
Allows for things like:
- "test.txt").grep("Tuesday").sed("s/kilogram/kg/g").sort();

- java.nio.file.Files.writeString/readString (Java 11+ "utility" classes)
  java.nio.file.Path fileName = Path.of("demo.txt");
  String content = "hello world !!";
 ºjava.nio.file.Files.writeString(fileName, content);º       ← String to text-file
  String actual =ºjava.nio.file.Files.readString(fileName);º ← text-file to String
  System.out.println(actual);

- java.nio.file.Files.lines: (Java 8+)
  private static String readLineByLine(String filePath) {
    final StringBuilder contentBuilder = new StringBuilder();
    try ( final java.util.stream.Stream˂String˃ stream =  ← stream resource must be closed
              java.nio.file.Files.lines(                    (with a try-with in this example)
                  java.nio.file.Paths.get(filePath),
                  java.nio.charset.StandardCharsets.UTF_8)
    ) {
      stream.forEach( s -˃ contentBuilder.append(s).append("\n") );
    } catch( java.io.IOException e) { ... }
    return contentBuilder.toString();
  }
- JDK 1.8+
- "deprecates" java.util.(Date|Calendar|TimeZome)
- All the classes are IMMUTABLE and THREAD-SAFE
Oºimport java.time.Duration;º
Oºimport java.time.Instant;º
Oºimport java.time.ZonedDateTime;º
Oºimport java.time.ZoneId;º
Oºimport java.util.concurrent.TimeUnit;º
OºInstantºBºtimestampº = OºInstantº.now();              // Create from system clock
Bºtimestampº = Bºtimestampº.plus(Duration.ofSeconds(10)); // Add 10 seconds. Instant is
                                                          // immutable: plus() returns a new one

  │OºInstantº to String                 │ OºInstantº from String
  │(format with time-zone)              │ (parse string)
  │OºZonedDateTimeº zdt1 =              │
  │     OºZonedDateTimeº.of             │ String sExpiresAt="2013-05-30T23:38:23.085Z";
  │       (                             │ OºZonedDateTimeºzdt2 = OºZonedDateTimeº.parse(sExpiresAt);
  │         2017, 6, 30           ,     │
  │         1, 2, 3               ,     │ OºInstantºi1 = OºInstantº.from(zdt1),
  │         (int) TimeUnit.             │           i2 = OºInstantº.from(zdt2);
  │               MILLISECONDS.         │
  │               toNanos(100),         │
  │         ZoneId.of("Europe/Paris")   │
  │       );          ^^^               │
  │     Ex: "Z","-02:00","Asia/Tokyo",..│
  │String s1 = zdt1.toString();         │
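Formatting an Instant for display always needs a zone, since an Instant by itself has none. A minimal runnable sketch with DateTimeFormatter (pattern and zone are illustrative):

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class InstantFormatDemo {
    public static void main(String[] args) {
        Instant i = Instant.parse("2013-05-30T23:38:23.085Z"); // ISO-8601, UTC

        DateTimeFormatter fmt = DateTimeFormatter
                .ofPattern("yyyy-MM-dd HH:mm:ss")
                .withZone(ZoneId.of("Europe/Paris")); // required to format an Instant

        System.out.println(fmt.format(i)); // → 2013-05-31 01:38:23  (UTC+2 in May)
    }
}
```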

date  (none)           DateFormat.getDateInstance(DateFormat.DEFAULT, getLocale())
      short            DateFormat.getDateInstance(DateFormat.SHORT, getLocale())
      medium           DateFormat.getDateInstance(DateFormat.DEFAULT, getLocale())
      long             DateFormat.getDateInstance(DateFormat.LONG, getLocale())
      full             DateFormat.getDateInstance(DateFormat.FULL, getLocale())
      SubformatPattern new SimpleDateFormat(subformatPattern, getLocale())

time  (none)           DateFormat.getTimeInstance(DateFormat.DEFAULT, getLocale())
      short            DateFormat.getTimeInstance(DateFormat.SHORT, getLocale())
      medium           DateFormat.getTimeInstance(DateFormat.DEFAULT, getLocale())
      long             DateFormat.getTimeInstance(DateFormat.LONG, getLocale())
      full             DateFormat.getTimeInstance(DateFormat.FULL, getLocale())
      SubformatPattern new SimpleDateFormat(subformatPattern, getLocale())

ºCompatibility with Java ˂=1.7º
- (java.util.) Date, Calendar and TimeZone
  "buggy" classes/subclasses were used.
  - Calendar class was NOT type safe
  - Mutable non-threadsafe classes
  - Favored programming errors
    (unusual numbering of months,..)

- Next compatibility conversion methods were added in 1.8:
  - Calendar.toInstant()
  - GregorianCalendar.toZonedDateTime()
  - GregorianCalendar.from(ZonedDateTime) (using the default locale)
  - Date.from(Instant)
  - Date.toInstant()
  - TimeZone.toZoneId()
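The conversion methods above in action. A minimal runnable sketch bridging legacy java.util types and java.time:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.TimeZone;

public class LegacyBridgeDemo {
    public static void main(String[] args) {
        Date legacy = new Date();
        Instant instant = legacy.toInstant();                 // Date → Instant
        Date back = Date.from(instant);                       // Instant → Date (same millis)

        GregorianCalendar cal = new GregorianCalendar();
        ZonedDateTime zdt = cal.toZonedDateTime();            // Calendar → ZonedDateTime
        GregorianCalendar cal2 = GregorianCalendar.from(zdt); // ... and back

        ZoneId zone = TimeZone.getDefault().toZoneId();       // TimeZone → ZoneId
        System.out.println(legacy.equals(back) + " " + zone); // round-trip preserves the instant
    }
}
```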

ºjava.time. Package summaryº
Clock              A clock providing access to the current instant, date and
                   time using a time-zone.
Duration           A time-based amount of time, such as '34.5 seconds'.
Instant            An instantaneous point on the time-line.
LocalDate          A date without a time-zone in the ISO-8601 calendar system,
                   such as 2007-12-03.
LocalDateTime      A date-time without a time-zone in the ISO-8601 calendar
                   system, such as 2007-12-03T10:15:30.
LocalTime          A time without a time-zone in the ISO-8601 calendar system,
                   such as 10:15:30.
MonthDay           A month-day in the ISO-8601 calendar system, such as --12-03.
OffsetDateTime     A date-time with an offset from UTC/Greenwich in the ISO-8601
                   calendar system, such as 2007-12-03T10:15:30+01:00.
OffsetTime         A time with an offset from UTC/Greenwich in the ISO-8601
                   calendar system, such as 10:15:30+01:00.
Period             A date-based amount of time in the ISO-8601 calendar system,
                    such as '2 years, 3 months and 4 days'.
Year               A year in the ISO-8601 calendar system, such as 2007.
YearMonth          A year-month in the ISO-8601 calendar system, such as 2007-12
ZonedDateTime      A date-time with a time-zone in the ISO-8601 calendar system,
                   such as 2007-12-03T10:15:30+01:00 Europe/Paris.
ZoneId             A time-zone ID, such as Europe/Paris.
ZoneOffset         A time-zone offset from Greenwich/UTC, such as +02:00.

Enum               Description
DayOfWeek          A day-of-week, such as 'Tuesday'.
Month              A month-of-year, such as 'July'.

Exception          Description
DateTimeException  Exception used to indicate a problem while calculating a date-time.

Java 9
- A number of parsing and formatting changes have been incorporated in Java 9 to
bring the functionality closer to Unicode Locale Data Markup Language (LDML).
These changes have been supervised by Stephen Colebourne, creator of the popular
 date-time library JodaTime, precursor of the new java.time component in Java 8.
Abiding by the Unicode standard will provide better interoperability with other
non-Java systems.

- LDML is the language used by the Unicode Common Locale Data Repository (CLDR),
  a project of the Unicode Consortium to gather and store locale data from
  different parts of the world, enabling application developers to better adapt
  their programs to different cultures. Among other things, LDML deals with dates,
  times, and timezones, and more particularly with date formatting and parsing.
  The following is an extract of new features coming in Java 9 that bring java.time
  closer to the LDML specification:

  - JDK-8148947, DateTimeFormatter pattern letter ‘g’: the letter ‘g’, as
    specified in LDML, indicates a “Modified Julian day”; this is different from a
    normal Julian day in the sense that a) it depends on local time, rather than GMT,
    and b) it demarcates days at midnight, as opposed to noon.
  - JDK-8155823, Add date-time patterns 'v' and 'vvvv’: ‘v’ and ‘vvvv’ are LDML
    formats to indicate “generic non-location format”, e.g. “Pacific Time”, as
    opposed to the “generic location format” with specifies a city, like
    “Los Angeles Time”.
  - JDK-8148949, DateTimeFormatter pattern letters ‘A’, ’n’, ’N’: although LDML
    doesn’t specify formats ’n’ and ’N’, it does specify ‘A’, but the current
    behaviour in Java doesn’t match that of the spec. ‘A’ is meant to represent the
    total number of milliseconds elapsed in the day, with variable width, but
    currently Java treats this as fixed width: if ‘AA’ is specified as a pattern, it
    will fail to parse any value that is further than 99 milliseconds in the day.
    ’n’ and ’N’ are just Java extensions to the standard to represent nanoseconds
    within the second, and nanoseconds within the day, respectively.
  - JDK-8079628, java.time DateTimeFormatter containing "DD" fails on three-digit
    day-of-year value: similar to the previous problem, but with ‘D’ representing
    days within a year. If one specifies “DD” as a pattern, it will fail to parse
    “123” as the 123th day of the year.
- As previously mentioned, a better alignment with the LDML will ease
  interoperability across systems, since there are multiple technologies that
  have adopted the LDML to some degree. Microsoft .NET uses LDML for general
  interexchange of locale data, and there are packages available for Node.js
  and Ruby, just to mention a few.
- JDK 1.5+
- Represents time durations at a given unit of granularity and
  provides utility methods to convert across units, and to perform
  timing and delay operations in these units.

  void      sleep(long timeout)
  void  timedJoin(Thread thread, long timeout)
  void   timedWait(Object obj, long timeout)
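A short example of the conversion and sleep helpers (values illustrative):

```java
import java.util.concurrent.TimeUnit;

public class TimeUnitDemo {
    public static void main(String[] args) throws InterruptedException {
        long millis = TimeUnit.SECONDS.toMillis(2);           // unit conversion → 2000
        long secs   = TimeUnit.MILLISECONDS.toSeconds(5_500); // truncates, no rounding → 5

        TimeUnit.MILLISECONDS.sleep(10);                      // readable Thread.sleep

        System.out.println(millis + " " + secs);              // → 2000 5
    }
}
```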
ºpredefined annotation types in java.lang:º
- @Deprecated
- @Override
- @SuppressWarnings
- @SafeVarargs (1.7+)      applied to a method/constructor,
                           asserts that the code does not perform
                           potentially unsafe operations
                           on its varargs parameter,
                           removing related warnings

ºAnnotation types are a form of interfaceº
DECLARATION(interface is preceded by the @ sign) │ USAGE
  @Documented                                    │
  @interface ClassPreamble {                     │   @ClassPreamble (
     String   author        ()              ;    │      author         = "John Doe"      ,
     String   date          ()              ;    │      date           = "3/17/2002"     ,
     int      currentRev    () default 1    ;    │      currentRev     = 6               ,
     String   lastModified  () default "N/A";    │      lastModified   = "4/12/2004"     ,
     String   lastModifiedBy() default "N/A";    │      lastModifiedBy = "Jane Doe"      ,
     String[] reviewers     ()              ;    │      reviewers      = {"Alice", "Bob"}
  }                                              │   )
                                                 │ public class Generation3List extends Generation2List {
                                                 │     // ...
                                                 │ }
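RUNTIME-retained annotations can be read back via reflection. A minimal sketch based on the @ClassPreamble example above; note the explicit @Retention(RUNTIME) I add here, since without it the default CLASS retention hides the annotation from the JVM:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class PreambleDemo {
    @Retention(RetentionPolicy.RUNTIME)   // required for reflective access at runtime
    @interface ClassPreamble {
        String author();
        int currentRev() default 1;
    }

    @ClassPreamble(author = "John Doe", currentRev = 6)
    static class Generation3List { }

    public static void main(String[] args) {
        ClassPreamble p =
            Generation3List.class.getAnnotation(ClassPreamble.class);
        System.out.println(p.author() + " rev " + p.currentRev()); // → John Doe rev 6
    }
}
```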
new @Interned MyObject();              ← Class instance creation expression

myString = (@NonNull String) str;      ← Type cast (1.8+)

class UnmodifiableList˂T˃ implements   ← implements clause
      @Readonly List˂@Readonly T˃
      { ... }

void monitorTemperature() throws       ← throws exception declaration
@Critical TemperatureException { ... }

@SuppressWarnings(value = "unchecked") ← Predefined standard annotations
void myMethod() { ... }
@SuppressWarnings({"unchecked", "deprecation"})
void myMethod() { ... }
(Annotations applying to other annotations)

º@Retentionº: how long the annotated annotation is kept:
RetentionPolicy.SOURCE : retained only in source (ignored by the compiler)
RetentionPolicy.CLASS  : retained by compiler    (ignored by the JVM)
RetentionPolicy.RUNTIME: retained by JVM, can be queried at Runtime

º@Documentedº                     º@Repeatableº
- indicates that whenever the     - (1.8+)
  specified annotation is used    - targeted annotation can be applied
  those elements should be          more than once to the same
  documented using the Javadoc      declaration or type use.
  tool. (By default, annotations    Ex:
  are not included in Javadoc.)     @Author(name = "Jane Doe")
                                    @Author(name = "John Smith")
                                    class MyClass { ... }

º@Targetº                          º@Inheritedº
º(field,type,class..)º             - targeted annotation type can be inherited
- restrict targeted java-language    from the super class. (false by default.)
  elements where the annotation      When the user queries the annotation type
  can be applied:                    and the class has no annotation for this
  - ElementType.ANNOTATION_TYPE      type, the class'superclass is queried for
  - ElementType.CONSTRUCTOR          the annotation type.
  - ElementType.FIELD
  - ElementType.LOCAL_VARIABLE
  - ElementType.METHOD
  - ElementType.PACKAGE
  - ElementType.PARAMETER
  - ElementType.TYPE (1.8+)
SLF4j Logging
Simple Log Facade or abstraction for various logging frameworks
(e.g. java.util.logging, logback, log4j) allowing the end user
to plug in the desired logging framework at deployment time.

  ˂?xml version="1.0" encoding="UTF-8"?˃
  ˂configuration˃                                       ← logback.xml root element
      ˂!-- ˂jmxConfigurator /˃ --˃
      ˂appender Bºname="APPENDER_FILE"ºclass="ch.qos.logback.core.rolling.RollingFileAppender"˃
          ˂rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"˃
              ˂!-- daily rollover --˃
              ˂!-- keep 2 days' worth of history capped at 1MB total size --˃
            ..(see encoder for APPENDER_STDOUT ..)

      ˂appender Gºname="APPENDER_STDOUT"ºclass="ch.qos.logback.core.ConsoleAppender"˃
          ˂pattern˃%d{HH:mm:ss.SSS} | %-5level | %thread | %logger{1} |
    %m%n%rEx{full,                                      ←☞ filter "Noise" in stack trace. ºREF 1º
              java.lang.reflect.Method,                 ← remove Java reflection
              org.apache.catalina,                      ← remove catalina engine
              org.springframework.aop,                  ← remove "almost" whole Spring framework
    ,             ←
              org.springframework.transaction,          ←
              org.springframework.web,                  ←
              net.sf.cglib,                             ← remove CGLIB classes.
              ByCGLIB                                   ←

    ˂root level="WARN"˃                                 ← Apply to all packages/WARN+ logs
        ˂appender-ref Bºref="APPENDER_FILE"  º/˃
        ˂appender-ref Gºref="APPENDER_STDOUT"º/˃

    ˂logger name=""          level="INFO" /˃ ← Detail level for packages
    ˂logger name="" level="DEBUG"/˃
    ˂logger name="org.eclipse.jetty"    level="WARN" /˃

  ºREF 1º: @[]


    ˂artifactId˃logback-classic˂/artifactId˃            ← add Bºlogbackº facade
  Rº˂exclusions˃º                                       ← (on the conflicting dependency)
      ˂exclusion˃                                          Avoid error next start-up:
        ˂groupId˃org.slf4j˂/groupId˃                       "SLF4J: Class path contains multiple SLF4J bindings."
        ˂artifactId˃slf4j-jdk14˂/artifactId˃               "   slf4j-jdk14-1.7.21.jar!...StaticLoggerBinder.class"
      ˂/exclusion˃                                         "logback-classic-1.1.7.jar!...StaticLoggerBinder.class"
  Rº˂/exclusions˃º

BºExample Usageº
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
    class MyClass {
      private static final Logger log =
          LoggerFactory.getLogger(MyClass.class);
      if (log.isDebugEnabled()) {  ←······················· the arguments to log.debug are evaluated
          log.debug("Lorem ipsum... @{} {}",  ←············ before the call, even if DEBUG is off.
             "param1", "param2");                           The "wrapper" if(log.isDebugEnabled())
      }                                                     avoids unnecessary processing (savings can
      ...                                                   be "huge" if log.debug is inside a loop).
MDC: "Better Way of Logging"
RºPROBLEMº: How do we relate logs together originating
            from a single user or single data-flow
            that are processed by different threads, HTTP
            requests (think of Single Page Apps),
            or components?
GºSOLUTIONº: Use mapped diagnostic context (MDC).

BºMapped Diagnostic Contextº:
  - Built into the logging framework,
  - supported by log4j, log4j2, and SL4J/logback.
  - MDC allows to capture custom ºkey/valueº diagnostic data,
    accessible to the appender when the log message is actually written.
  - MDC structure isºinternally attachedºto the executing thread
    in the same way a ThreadLocal variable would be.

BºMDC How To:º
  - At the start of the thread, fill MDC with custom information
    (MDC API also allows to remove info later on if it doesn't apply)
  - Log the message
  - MDCºSummarized APIº:
    public class MDC {
      publicºstaticºvoid   put   (String key, String val); // ← Add to ºcurrent Threadº
      publicºstaticºString get   (String key);                         ºContext Mapº
      publicºstaticºvoid   remove(String key);
      publicºstaticºvoid clear(); // ← Clear all entries
    NOTE: child threads do not automatically inherit a copy of
           the current diagnostic context.
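Since child threads do not inherit the context, it must be copied by hand when handing work to another thread. A minimal sketch using the SLF4J MDC API (getCopyOfContextMap/setContextMap); the wrapper name is illustrative:

```java
import java.util.Map;
import org.slf4j.MDC;

public class MdcPropagation {
    /** Wraps a task so it runs with the submitting thread's MDC context. */
    public static Runnable wrap(Runnable task) {
        // Capture the parent thread's context at submission time ...
        final Map<String, String> context = MDC.getCopyOfContextMap();
        return () -> {
            // ... and restore it inside the child/pool thread.
            if (context != null) MDC.setContextMap(context);
            try {
                task.run();
            } finally {
                MDC.clear();   // don't leak context into reused pool threads
            }
        };
    }
}
```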

  - Best pattern for microservices:
    - Ex:
      //ºSTEP 1:ºOverride Qºinterceptor layerº
      //                """single place where call
      //                   execution passes through""".
      public class ServiceInterceptor
             implements HandlerInterceptor {

          private static finalºLogger LOGGERº=
              LoggerFactory.getLogger(ServiceInterceptor.class);

          @Override
          public boolean preHandle(
                   HttpServletRequest request,
                   HttpServletResponse response,
                   Object object) throws Exception {
             MDC.put("userId"    , request.getHeader("UserId"   ));
             MDC.put("sessionId" , request.getHeader("SessionId"));
             MDC.put("requestId" , request.getHeader("RequestId"));
             return true;
          }
      }

      //ºSTEP 2:ºChange log appender pattern to retrieve variables
      //         stored in the MDC.
      ˂appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender"˃
      ˂Pattern˃%X{userId} %X{sessionId} %X{requestId} - %m%n˂/Pattern˃

      Log output will look some like:
      17:53:25,496 http─8080─20 INFO Service1.execute(77)  U1001 ┌ sessId01    ┌ reqId_1_1   req service 1
      17:53:25,497 http─8080─26 INFO Service1.execute(77)  U1002 │ sessId02 ┐  │ reqId_2_1┐  req service 1
      17:53:25,550 http─8080─26 INFO Service1.execute(112) U1002 │ sessId02 ┤  │ reqId_2_1┤  Req data
      17:53:25,555 http─8080─20 INFO Service1.execute(112) U1001 ├ sessId01 │  ├ reqId_1_1│  Req data
      17:53:25,617 http─8080─27 INFO Service2.execute(50)  U1001 ├ sessId01 │ ┌│ reqId_1_2│  req service 2
      17:53:25,615 http─8080─27 INFO Service2.execute(89)  U1001 ├ sessId01 │ │├ reqId_1_1│  req data
      17:53:25,637 http─8080─29 INFO Service2.execute(50)  U1002 │ sessId02 ┤ ││ reqId_2_2│┐ req service 2
      17:53:25,665 http─8080─29 INFO Service2.execute(89)  U1002 │ sessId02 ┤ ││ reqId_2_1┤│ req data
      17:53:25,568 http─8080─20 INFO Service1.execute(120) U1001 ├ sessId01 │ └│ reqId_1_2││ req OK
      17:53:25,584 http─8080─26 INFO Service1.execute(120) U1002 │ sessId02 ┤  │ reqId_2_1┘│ req OK
      17:53:25,701 http─8080─27 INFO Service2.execute(113) U1001 ├ sessId01 │  └ reqId_1_1 │ req OK
      ...          ...          ...  ...                   ...   : ...      :    ...       : ...
      17:53:25,710 http─8080─29 INFO Service2.execute(113) U1002   sessId02 ┘    reqId_2_2 ┘ req OK
Security 101
1. Make code immutable:
   · Tag variables as 'final' by default.
     (a final "variable" is no longer variable but constant)
   · Initialize all fields in constructors and centralize
     all input checks there. Raise a runtime exception
     if the constructor is not happy with the input data
     (vs relying on getters and setters).
      Doing so guarantees that an instance is properly initialized and
     safe upon constructor exit.
      This also guarantees fail-fast applications: if something is
     undefined at startup time constructors will not be able to
     initialize and the application will fail to start. This is
     life-saving for DevOps and normal deployments.
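A minimal sketch of that fail-fast constructor pattern (class name and invariants are illustrative, not from any concrete project):

```java
// Illustrative: every invariant is checked once, in the constructor.
// After the constructor returns, the instance is guaranteed valid.
public final class ServerConfig {
    private final String host;   // final: safe to publish across threads
    private final int port;

    public ServerConfig(String host, int port) {
        if (host == null || host.isEmpty()) {
            throw new IllegalArgumentException("host must be non-empty");
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        this.host = host;
        this.port = port;
    }

    public String host() { return host; }
    public int port()    { return port; }
}
```

If the configuration is broken at startup, the constructor throws and the application refuses to start, instead of failing weeks later with a stale null.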

2. Parameterize SQL:
   query = "SELECT ... WHERE lastname = "Rº+ parameterº;  // ← Incorrect: SQL injection
   Statement         stm01 = con.createStatement();
   ...  stm01.executeQuery(query);

   query = "SELECT ... WHERE lastname = ?";               // ← Correct
   PreparedStatement stm01 = con.prepareStatement(query);
   stm01Bº.setString(1, parameter);º
   ...  stm01.executeQuery();                             // ← no args: the query was precompiled

3. Use OpenID Connect with 2FA:
   OpenID summary: OAuth 2.0 extension providing user information.
   · It adds an ID token in addition to an access token plus
     a /userinfo endpoint to retrieve additional information
     plus endpoint discovery and dynamic client registration.

   low-code OpenID in Spring:
   STEP 1: Add the next dependencies:

   STEP 2: Spring Configuration
   │ spring:
   │  ...
   │  security:
   │    oauth2:
   │      client:
   │        registration:
   │         ºgithub:º
   │            client─id: ...
   │            client─secret: ....
   │         ºokta:º
   │            client─id: ...
   │            client─secret: ...
   │            client─name: ...
   │         ºgoogle:º
   │            ...
   │        provider:
   │          okta:
   │            issuer─uri:

4. Scan dependencies for known vulnerabilities
   E.g. use a service like @[] !!!

5. Handle sensitive data with care
   - Use secrets on-demand and remove from memory as soon as possible.
     Ideally manage secrets through HSM. Secrets (signature keys, password,
     ...) must never leave the HSM.
   - If using Lombok generated toString(), mark with @ToString.Exclude
     sensitive fields.
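A plain-Java sketch of the same idea without Lombok (class is illustrative): the secret never reaches toString(), and can be wiped from memory after use:

```java
// Hand-written equivalent of Lombok's @ToString.Exclude:
// the password is deliberately left out of toString().
public final class Credentials {
    private final String user;
    private final char[] password;   // char[] so it can be zeroed after use

    public Credentials(String user, char[] password) {
        this.user = user;
        this.password = password;
    }

    public void wipe() {             // remove the secret from memory ASAP
        java.util.Arrays.fill(password, '\0');
    }

    @Override public String toString() {   // password excluded on purpose
        return "Credentials[user=" + user + ", password=***]";
    }
}
```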

6. Sanitize all input.
   Example 1, sanitizing HTML with the OWASP Java Encoder:
   · Add the next dependency:
   · final String
      untrustedInput = "˂script˃ alert(1); ˂/script˃",
        trustedInput = Encode.forHtml(untrustedInput);

7.  Configure your XML-parsers to disable XXE (eXternal Entity)
    ˂?xml version="1.0" encoding="UTF-8" standalone="yes"?˃
      ˂!DOCTYPE bar [
           ˂!ENTITY xxe SYSTEM Rº"file:///etc/passwd"º˃] ˃      ← attack

    e.g. In xerces 1/2 disable doctypes (and with them external entities) like:
         factory   = SAXParserFactory.newInstance();
         factory.setFeature(
           "http://apache.org/xml/features/disallow-doctype-decl",ºtrueº);
         saxParser = factory.newSAXParser();
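Putting item 7 together as a runnable sketch (the helper class is mine; the feature URLs are the standard SAX/Xerces ones):

```java
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;
import java.io.StringReader;

public final class SafeXml {
    // Returns true if the document parses, false if it is rejected.
    public static boolean parse(String xml) {
        try {
            SAXParserFactory factory = SAXParserFactory.newInstance();
            // Forbid DOCTYPE declarations entirely: kills XXE at the root.
            factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            // Belt and braces: also disable external entity resolution.
            factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
            factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
            SAXParser parser = factory.newSAXParser();
            parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler());
            return true;
        } catch (Exception e) {   // parse error OR rejected DOCTYPE
            return false;
        }
    }
}
```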

8. Avoid Java serialization as much as possible
   • Java serialization is also called Rº“the gift that keeps on giving”º.
     Oracle is planning to eventually remove Java serialization as part 
     of Project Amber.

     If you really need to implement Serializable on your domain entities,
     implement your own readObject().

     private void
     readObject(ObjectInputStream in) throws IOException {
       // check 1
       // check 2
       throw new InvalidObjectException("Deserialization not allowed");
     }

     If you need to deserialize an InputStream yourself, you should use an
     ObjectInputStream with restrictions.
     e.g. Apache Commons IO ValidatingObjectInputStream, that checks whether
          the object being deserialized is allowed or not.

         final FileInputStream fileInput = new FileInputStream(fileName);
         ValidatingObjectInputStream in = new ValidatingObjectInputStream(fileInput);
         in.accept(Foo.class);          // whitelist: only Foo may be deserialized
         Foo foo_ = (Foo) in.readObject();

     Object deserialization can also apply to JSON, XML, ...

9. Use strong encryption and hashing algorithms.
   TIP: Prefer Google Tink  (vs Low Level Java crypto libraries)
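Where Tink cannot be pulled in, the JDK's own JCA at least allows salted, iterated password hashing instead of raw MD5/SHA-1. A minimal PBKDF2 sketch (class name and iteration count are illustrative; tune iterations upward for your hardware):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;

public final class PasswordHash {
    private static final int ITERATIONS = 100_000;  // illustrative; tune upward
    private static final int KEY_BITS   = 256;

    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);          // per-password random salt
        return salt;
    }

    public static String hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            return Base64.getEncoder().encodeToString(f.generateSecret(spec).getEncoded());
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```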

10. Enable the Java Security Manager                                       [security.jvm]
   · By default, the JVM imposes no restrictions on running apps.
     (file system, network, ..)
     · Ex. By default the Attach API is active, allowing to easily
       change the bytecode of running apps (from inside the machine).
   RºWARN:º The Security Manager is deprecated for removal since Java 17 (JEP 411).

   · Activate it like:
   $º$ java -Djava.security.manager ...                     º ← Use default policy
   $º$ java -Djava.security.manager \                       º ← Use custom  policy
   $º       -Djava.security.policy==/path/to/my.policy ...  º   '==' : replace default.
                                                                '='  : expand  default.
   More info at
   java.awt.AWTPermission                 java.sql.SQLPermission
   java.util.logging.LoggingPermission    java.util.PropertyPermission
   java.nio.file.LinkPermission           javax.sound.sampled.AudioPermission

11. Centralize logging and monitoring
Lambdas (Java 8+)
(parameters) -˃ expression
(parameters) -˃  { statements; }

// takes a Long, returns a String
Function˂Long, String˃ f = (l) -˃ l.toString();

// takes nothing, gives you Thread
Supplier˂Thread˃ s = Thread::currentThread;

//  takes a string as the parameter
Consumer˂String˃ c = System.out::println;

// use lambdas in streams
new ArrayList˂String˃().stream()....

// peek: Debug streams without changes
peek ( e -˃ System.out.println(e)). ...

// map: Convert every element into something
map ( e -˃ e.hashCode())...

// filter: keep only elements matching a predicate
filter ( hc -˃ (hc % 2) == 0) ...

// collect all values from the stream
collect ( Collectors.toList()) ...
• jOOλ (jOOL): functional extensions to Java 8.
  • Context: pains with checked exceptions and lambdas.
  • Solution: org.jooq.lambda.Unchecked

    Standard Java 8:                 | Using jOOλ lambda wrapper:     | Even simpler:
    ================                 | ==========================     | ===================
    Arrays                           | Arrays                         | Arrays
    .stream(dir.listFiles())         | .stream(dir.listFiles())       | .stream(dir.listFiles())
    .forEach(file -˃ {               | .forEach(                      | .map(Unchecked.function(
          try {                      |   Unchecked.consumer(file -˃ { |      File::getCanonicalPath))
      System.out.println(            |     System.out.println(        | .forEach(System.out::println);
           file.getCanonicalPath()); |       file.getCanonicalPath());
          } catch (IOException e) {  |   })
      throw new RuntimeException(e); | );
          }});                       |
java.util.function package
- JDK 1.8+
- Incomplete but good enough to cover the "shape" of many lambda expressions and
 method references representing abstract concepts like functions, actions, or predicates
- The @FunctionalInterface is used to capture design intent (not needed by compiler).
- In documenting functional interfaces, or referring to variables typed as
  functional interfaces, it is common to refer directly to those abstract concepts,
  for example using "this function" instead of "the function represented by this object".
- Each functional interface has a single abstract method, called the functional method for that
  functional interface, to which the lambda expression's parameter and return types are matched or adapted.
- Functional interfaces can provide a target type in multiple contexts, such as assignment context, method invocation,
  or cast context:
  |Predicate˂String˃ p = String::isEmpty;           // Assignment context
  |stream.filter(e -˃ e.getSize() ˃ 10)...          // Method invocation context
  |((Predicate˂String˃) String::isEmpty).test("") // Cast context

Functional interfaces defined in 1.8
           Interface Summary                │           Interface Description
                  BiConsumer‹T,U›           │op. accepting two input arguments and returning no result
  (|Double|Int|Long)Consumer‹T›             │op. accepting a single (Object|double|int|long) input argument and returning no result
Obj(Double|Int|Long)Consumer‹T›             │op. accepting an object-valued and a (double|int|long)-valued argument, and returning no result
        (|Double|Long|Int)Function‹(T,)R›   │func. that accepts a (T|double|long|int) argument and produces a result
       (|Double|Long)ToIntFunction          │func. that accepts a (T|double|long) argument and produces an int-valued result
(ToDouble|ToLong|ToInt|)BiFunction‹(T,)U,R› │func. that accepts two arguments and produces a (T|double|long|int) result
           To(Double|Long)Function‹T›       │func. that produces a (double|long)-valued result
(Int|Long|Double)To(Int|Long|Double)Function│func. that accepts an (int|long|double) argument and produces an (int|long|double) result
 (|Int|Long|Double)UnaryOperator‹T›         │op. on a single (T|int|long|double) operand producing a result of the same type
(Double|Long|Int|)BinaryOperator‹T›         │op. upon two (T|int|long|double) operands producing a result of the same type
                BiPredicate‹T,U›            │predicate (boolean-valued function) of two arguments
(|Int|Long|Double)Predicate‹T›              │predicate (boolean-valued function) of one (T|int|long|double) argument
(|Boolean|Int|Long|Double)Supplier(‹T›)     │supplier of (T|boolean|int|long|double) results
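A few of the table's shapes in action (names are illustrative):

```java
import java.util.function.BiFunction;
import java.util.function.IntUnaryOperator;
import java.util.function.Predicate;
import java.util.function.Supplier;

public final class FuncDemo {
    public static final Predicate<String> EMPTY  = String::isEmpty;  // T → boolean
    public static final Supplier<String>  GREET  = () -> "hi";       // () → T
    public static final IntUnaryOperator  SQUARE = x -> x * x;       // int → int (same type)
    public static final BiFunction<Integer, Integer, Integer>
                                          SUM    = (a, b) -> a + b;  // (T,U) → R
}
```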
Collection Decision Tree
                                  │  Allows  │
                    ┌─── YES ─────┤Duplicates├──  NO  ───────┐
                    │   List to   └──────────┘  Set to       │
                    │  be selected              be selected  │
                    │                                        v
                    v                                    ┌───────────┐☜ order established at
        ┌─────────────────────┐                          │ Maintains │  write time
        │  Unknown number     │                          │ºINSERTIONº│
   ┌─NO─┤of elements will be  ├YES─┐           ┌───YES───┤  ºORDERº? ├──NO──┐  order requested
   │    │added and/or index   │    │           │         └───────────┘      │  at read time
   │    │based search will not│    │           v                            ↓  ☟
   │    │be frequent?         │    │     QºLinkedHashSetº           ┌────────────┐
   │    └─────────────────────┘    │                                │ Maintains  │
   v                               v                           ┌─NO─┤ºREAD ORDERº├YES┐
BºArrayListº           BºLinkedListº                           │    │(alpha,...)?│   │
                                                               │    └────────────┘   │
                                                               │                     │
                                                               v                     v
                                                          QºHashSetº           QºTreeSetº

Standard Rºnon-concurrentº SDK:
       │                                IMPLEMENTATIONS
       │ Hash Table        │ Resizable Array   │Balanced Tree │ Linked List │ HashTable+LinkedList
       │                   │                   │              │             │
│˂Set˃ │ HashSet           │                   │  TreeSet     │             │ LinkedHashSet
│      │                   │                   │              │             │
│˂List˃│                   │ ArrayList         │              │ LinkedList  │
│      │                   │ Vector            │              │             │
│˂Map˃ │ HashMap,Hashtable │                   │  TreeMap     │             │ LinkedHashMap

RºWARNº: There is a huge performance difference between LinkedList and ArrayList.
         - With many add/remove operations at the ends or through an iterator, LinkedList is much faster.
         - With many random (index-based) access operations, ArrayList is much faster.


│Collection       │ Thread-safe                ┃          YOUR DATA              ┃           OPERATIONS    ALLOWED       │
│                 │ alternative                ┃─────────────────────────────────┃───────────────────────────────────────┤
│class            │                            ┃Individu│Key-val.│Duplica│Primite┃ Iteration Order │Fast │ Random Access │
│                 │                            ┃elements│  pairs │element│support┃FIFO │Sorted│LIFO│'has'│By  │By   │By  │
│                 │                            ┃        │        │support│       ┃     │      │    │check│Key │Val  │Idx │
│HashMap          │ ConcurrentHashMap          ┃        │YES     │       │       ┃     │      │    │YES  │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│SortedMap        │ ?                          ┃        │YES     │       │       ┃     │      │    │?    │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│NavigableMap @1  │ ?                          ┃        │YES     │       │       ┃     │      │    │?    │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│HashBiMap(Guava) │ Maps.synchronizedBiMap     ┃        │YES     │       │       ┃     │      │    │YES  │ YES│YES  │    │
│                 │ (HashBiMap.create())       ┃        │        │       │       ┃     │      │    │     │    │     │    │
│ArrayListMultimap│ Multimaps.synchronized─    ┃        │YES     │YES    │       ┃     │      │    │YES  │ YES│     │    │
│   (Guava)       │ Multimap(ArrayListMulti─   ┃        │        │       │       ┃     │      │    │     │    │     │    │
│                 │ map.create())              ┃        │        │       │       ┃     │      │    │     │    │     │    │
│LinkedHashMap    │ Collections.synchronizedMap┃        │YES     │       │       ┃YES  │      │    │YES  │ YES│     │    │
│                 │ (new LinkedHashMap())      ┃        │        │       │       ┃     │      │    │     │    │     │    │
│TreeMap          │ ConcurrentSkipListMap      ┃        │YES     │       │       ┃     │YES   │    │YES  │ YES│     │    │
│Int2IntMap       │                            ┃        │YES     │       │YES    ┃     │      │    │YES  │ YES│     │YES │
│(Fastutil)       │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│ArrayList        │ CopyOnWriteArrayList       ┃YES     │        │YES    │       ┃YES  │      │YES │     │    │     │YES │
│HashSet          │ Collections.newSetFromMap  ┃YES     │        │       │       ┃     │      │    │YES  │    │YES  │    │
│                 │ (new ConcurrentHashMap())  ┃        │        │       │       ┃     │      │    │     │    │     │    │
│IntArrayList     │                            ┃YES     │        │YES    │YES    ┃YES  │      │YES │     │    │     │YES │
│(Fastutil)       │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│PriorityQueue    │ PriorityBlockingQueue      ┃YES     │        │YES    │       ┃     │YES   │    │     │    │     │    │
│ArrayDeque       │ ArrayBlockingQueue         ┃YES     │        │YES    │       ┃YES  │      │YES │     │    │     │    │

 Collection class │  Random access by idx/key │ Search/Contains │ Insert
 ArrayList        │  O(1)                     │ O(n)            │ O(n) (amortized O(1) at end)
 HashSet          │  O(1)                     │ O(1)            │ O(1)
 HashMap          │  O(1)                     │ O(1)            │ O(1)
 TreeMap          │  O(log(n))                │ O(log(n))       │ O(log(n))

@1 NavigableMap: SortedMap with additional methods for finding entries
                 by their ordered position in the key set.
                 So potentially this can remove the need for iterating
                 in the first place - you might be able to find the
                 specific entry you are after using the higherEntry,
                 lowerEntry, ceilingEntry, or floorEntry methods. The
                 descendingMap method even gives you an explicit method
                 of reversing the traversal order.
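A small TreeMap sketch of those positional lookups (data is illustrative):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public final class NavDemo {
    public static NavigableMap<Integer, String> sample() {
        NavigableMap<Integer, String> m = new TreeMap<>();
        m.put(10, "ten");
        m.put(20, "twenty");
        m.put(30, "thirty");
        return m;
    }

    // Greatest key <= the query, found without iterating the map.
    public static Integer floorKey(NavigableMap<Integer, String> m, int q) {
        return m.floorKey(q);
    }
}
```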

Graph Structures (Guava common.graph)

Interface               Description
Graph˂N˃                An interface for graph-structured data, whose edges are
                        anonymous entities with no identity or information of their own.
MutableGraph˂N˃         A subinterface of Graph which adds mutation methods.
MutableNetwork˂N,E˃     A subinterface of Network which adds mutation methods.
MutableValueGraph˂N,V˃  A subinterface of ValueGraph which adds mutation methods.
Network˂N,E˃            An interface for graph-structured data, whose edges are unique objects.
ValueGraph˂N,V˃         An interface for graph- structured data, whose edges have associated
                        non-unique values.
final List˂String˃ myList =                  ← final forbids re-assigning the list,
      Arrays.asList("one", "two", "three");     but its content is still mutable

final List˂String˃ myImmutableList =         ← immutable view (thread safe)
      Collections.unmodifiableList(myList);

String[] array01 = ...;
final      List˂String˃ fixedSizeList ← Fixed-size ArrayList (static class inside Arrays)
       = Arrays.asList(array01);        backed by the array: supports set(), get(),
                                        contains() but lacks add()/remove() methods.

final ArrayList˂String˃ mutableList   ← Mutable (and growable) copy.
       = new ArrayList˂˃(Arrays.asList(array01));

Collection (Lists/Set/...) "Walk-over"
 for ( int idx = 0; idx ˂ºcollectionºOº.size()º; idx++) { ← Alt 1: Java 1.0 for-walk over collection index.
   type array_element =ºcollectionºOº.get(idx);º          ← RºWARN:º Very slow for LinkedLists
 }                                                                   (Faster for other List implementations)

 for ( Iterator iterator =ºcollectionºOº.iterator()º;     ← Alt 2: for-walk over iterator. Preferred:
      Oºiterator.hasNext();º) {                                    safer when removing/modifying the collection
   type type = (type) Oºiterator.next()º;                          while iterating over it.
 }

 for ( iterable_type iterable_element Oº:collectionº) {   ← Alt 3: Best option when NOT remov./modify. elements
 }

 collectionº.forEachº((element) -˃ {                      ← Alt 4: STREAMS (Java 8+): UNIX pipe like iteration.
   System.out.println(element);                             Functional approach. In this case we can "shortcut" to:
 });                                                        collection.forEach(System.out::println);
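Alt 2's safety claim, demonstrated (helper class is illustrative): structural removal during iteration is only well-defined through Iterator.remove():

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public final class IterDemo {
    // Remove all even numbers while walking the list:
    // only legal via Iterator.remove(), never via list.remove() in a for-each.
    public static List<Integer> dropEvens(List<Integer> src) {
        List<Integer> list = new ArrayList<>(src);
        for (Iterator<Integer> it = list.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();           // safe structural modification
            }
        }
        return list;
    }
}
```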

Maps ("Dictionaries")
Bº######################º
Bº# INITIALIZING A MAP #º
Bº######################º
final Map˂String,String˃ myMap =              ← Mutable map (WARN: even if 'final' used)
      new HashMap˂String,String˃();
myMap.put("k1", "v1"); ...

final Map˂String,String˃ myImmutableMap =     ← Immutable version of the map
      Collections.unmodifiableMap(myMap);

final HashMap˂String,String˃ data =
      HashMapBuilder                          ← Java 7+, must have even number of elements
      .build( "k1","v1", "k2","v2", ...);

final Map˂String, String˃ immutableMap01 =    ← Java 9+, must have even number of elements
      Map.of("k1", "v1", "k2", "v2");

final Map˂String, String˃ immutableMap02 =    ← Java 9+ (safer syntax)
      Map.ofEntries( entry("k1", "v1"),...);

final Map˂String, String˃ test =              ← Guava ImmutableMap:
      ImmutableMap.of("k1","v1", ...);          works only with up to 5 key/value pairs

final Map˂String, String˃ test =              ← Guava ImmutableMap alternative
      ImmutableMap.˂String, String˃builder()
      .put("k1", "v1").put("k2", "v2") ...
      .build();

Bº###########################º
Bº# WALK-OVER/ITERATE A MAP #º
Bº###########################º
Map˂String, String˃ map = ...

for ( Map.Entry˂String, String˃ Oºentryº :            ← Alt 1: entrySet (Java 5+)
      mapº.entrySet()º) {
  ... entry.getKey() ... entry.getValue() ...
}

Iterator˂Map.Entry˂String, String˃˃ it =              ← Alt 2: using iterators
    map.entrySet()º.iterator()º;
while (it.hasNext()) {
  final Map.Entry˂String, String˃ entry = it.next();
  ... entry.getKey() ... entry.getValue() ...
}

map.forEach((k, v) -˃ {                               ← Alt 3: forEach (Java 8+)
  ... k ... v ...
});

map.entrySet().stream().forEach( (entry) -˃ {         ← Alt 4: Stream API (1.8+)
  ... entry.getKey() ... entry.getValue() ...           "Functional" Unix pipe like style
} );

map.entrySet().stream()                               ← Alt 5: Stream API, parallel (1.8+)
   .parallel().forEach( entry -˃ {                      "Functional" Unix pipe like style
  ... entry.getKey() ... entry.getValue() ...
});

Bº##############º
Bº# UPDATE MAP #º
Bº##############º
Map˂String, Integer˃ map01 = new HashMap˂˃();
map01.put("key1", 1);
map01.put("key2", 2);

map01.put("key3", 3);                 ← Update (before Java 8). Alt 1: do not check for key existence.

if (map01.containsKey("key3")) {      ← Update (before Java 8). Alt 2: check for key existence.
  map01.put("key3", 3);
}

Integer oldValue = map01              ← Java 8+. Returns null if key1 didn't exist.
  .replace("key1", 10);

boolean isOK = map01.replace("key1",  ← Java 8+. Safer variant: returns false (no update)
    1  /* old value */,                 if the current value does not match the expected
    10 /* new value */ );               old value.

map01.getOrDefault("key4", 0);        ← Return default for non-existing key (vs null)  [qa]
map01.putIfAbsent ("key4", 4);        ← Update only if absent or value null.           [qa]

final BiFunction˂Integer, Integer, Integer˃ sumFun = (x1, x2) -˃ x1 + x2;

map01.compute("key1",                 ← Use BiFunction to update. NullPointerException here
  (key, currentValue)                   if key doesn't exist (currentValue is null).
    -˃ sumFun.apply(currentValue, 2)); (alternatively computeIfAbsent / computeIfPresent )

map01.merge("key1", defaultValue,     ← Update with BiFunction if key exists,
  (currentValue, v)                     or put defaultValue under key otherwise.
    -˃ sumFun.apply(currentValue, 2));
java.util.Collections (@[] )
- Utility class with static methods that operate on or return collections

Collections.EMPTY_SET  ( == Collections.emptySet()  )   See also Collections.singleton(T o)
Collections.EMPTY_LIST ( == Collections.emptyList() )   See also Collections.singletonList(T o)
Collections.EMPTY_MAP  ( == Collections.emptyMap()  )   See also Collections.singletonMap(K key, V value)
Collections.emptyEnumeration()
Collections.emptyIterator()
Collections.emptyListIterator()

boolean     Collections.addAll(Collection c, T... elements)          Adds all elements to collection 'c'
Queue       Collections.asLifoQueue(Deque deque)                     deque to Last-In/First-Out "LIFO" Queue view
int         Collections.binarySearch(List list, T key)               Searches key in list using binary search.
int         Collections.binarySearch(List list, T key, Comparator c) Searches key in list using binary search + comparator.
Collection  Collections.checkedCollection(Collection c, Class type)  Returns a dynamically typesafe view of input collection/list/...
List        Collections.checkedList(List list, Class type)
Map         Collections.checkedMap (Map m, Class keyType, Class valueType)
Set         Collections.checkedSet(Set s, Class type)
SortedMap   Collections.checkedSortedMap (SortedMap m, Class keyType, Class valueType)
SortedSet   Collections.checkedSortedSet(SortedSet s, Class type)
void        Collections.copy(List dest, List src)                    Copies src list elements to dest list
boolean     Collections.disjoint(Collection c1, Collection c2)       true if c1/c2 have no elements in common.
Enumeration Collections.enumeration(Collection c)                    Returns an enumeration over the specified collection.
void        Collections.fill(List list, T obj)                       Replaces all elements of the list with the given element.
int         Collections.frequency(Collection c, Object o)            Number of elements in the collection equal to the given object.
int         Collections.indexOfSubList(List list, List sublist)      -1 if not found.
int         Collections.lastIndexOfSubList(List list, List sublist)  -1 if not found.
ArrayList   Collections.list(Enumeration e)                          Enum to array list in the order returned by input enum.
T           Collections.max/min(Collection coll (, Comparator comp)) max/min element according to comparator order (def. natural ordering).
List        Collections.nCopies(int nCopies, T inputObject)
Set         Collections.newSetFromMap(Map map)
boolean     Collections.replaceAll(List list, T oldVal, T newVal)    ← NOTE: replaces in list
void        Collections.reverse(List list)
Comparator  Collections.reverseOrder()                               Returns ˂˂Comparable˃˃ Comparator imposing reverse natural ordering.
Comparator  Collections.reverseOrder(Comparator cmp)                 Returns comparator imposing reverse ordering of input comparator.
void        Collections.rotate(List list, int distance)              Rotates elements in list.
void        Collections.shuffle(List list (, Random rnd))            Randomly permutes elements using (given or default) source of randomness.
void        Collections.sort(List list (, Comparator c))             Sorts in comparator (def. natural) order.
void        Collections.swap(List list, int pos1, int pos2)          Swaps elements at pos 1 and 2.
Collection  Collections.synchronizedCollection(Collection c)         Returns thread-safe collection   [qa]
List        Collections.synchronizedList(List list)                  Returns thread-safe list         [qa]
Map         Collections.synchronizedMap(Map m)                       Returns thread-safe map          [qa]
Set         Collections.synchronizedSet(Set s)                       Returns thread-safe set          [qa]
SortedMap   Collections.synchronizedSortedMap(SortedMap m)           Returns thread-safe sorted map   [qa]
SortedSet   Collections.synchronizedSortedSet(SortedSet s)           Returns thread-safe sorted set   [qa]
Collection  Collections.unmodifiableCollection(Collection c)         Returns immutable view           [qa]
List        Collections.unmodifiableList(List list)                  Returns immutable view           [qa]
Map         Collections.unmodifiableMap(Map m)                       Returns immutable view           [qa]
Set         Collections.unmodifiableSet(Set s)                       Returns immutable view           [qa]
SortedMap   Collections.unmodifiableSortedMap(SortedMap m)           Returns immutable view           [qa]
SortedSet   Collections.unmodifiableSortedSet(SortedSet s)           Returns immutable view           [qa]
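A couple of the utilities above in a runnable sketch (helper class is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public final class CollUtilDemo {
    // binarySearch requires the list to be sorted beforehand.
    public static int findIndex(List<Integer> sorted, int key) {
        return Collections.binarySearch(sorted, key);
    }

    public static List<Integer> sortedCopy(List<Integer> in) {
        List<Integer> copy = new ArrayList<>(in);
        Collections.sort(copy);                    // natural ordering
        return copy;
    }
}
```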
fastutil
- Fast and compact type-specific collections for Java.
  Great default choice for collections of primitive types,
  like int or long. Also handles big collections with more
  than 2^31 elements well.

Eclipse Collections
(Originated from Goldman Sachs gs-collections)
- Features you want with the collections you need
  Previously known as gs-collections, this library
  includes almost any collection you might
  need: primitive type collections, multimaps,
  bidirectional maps and so on.
˂˂Enumeration˃˃(1.0) vs ˂˂Iterator˃˃(1.2)

- both interfaces will give successive elements

- Iterators allow the caller to remove elements from
  the underlying collection during the iteration with
  well-defined semantics.
  (additional remove method)
- Iterator method names have been improved.

- Iterators are fail-fast:
  - If thread A changes the collection, while
       thread B is traversing it, the iterator implementation
       will try to throw a ConcurrentModificationException
       (best effort since it can not always be guaranteed)
  - The fail-fast behavior of iterators can only be used to
    detect bugs, since the best-effort check doesn't guarantee it triggers.
  - newer 'concurrent' collections will never throw it.
    Reading thread B will traverse the collection "snapshot" at
    the start of read.

-ºIterator should be preferred over Enumerationº:
  it takes the place of Enumeration in the collections framework.

  Enumeration     │ Iterator
  hasMoreElement()│ hasNext()
  nextElement()   │ next()
                  │ remove() ← optional: not implemented in many classes
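The fail-fast behavior described above, triggered on purpose (helper class is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;

public final class FailFastDemo {
    // Returns true if the iterator detected a concurrent structural change.
    public static boolean triggersFailFast() {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        try {
            for (String s : list) {    // for-each iterates via an Iterator underneath
                list.remove(s);        // structural change bypassing that iterator
            }
        } catch (ConcurrentModificationException e) {
            return true;               // best-effort detection fired
        }
        return false;
    }
}
```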
NIO (1.4+)
- Alternative to the OLD blocking IO based on [ byte/char, read-or-write streams ]
┌──────────┐     ┌──────────────┐
├──────────┴───┐ ├──────────────┴────────────────────────────────┐
│ ─BºCHANNELS º│ │· a thread requests a channel the intention    │
│  ─ read/write│ │  to read/write data into a buffer:            │
│ ─BºBUFFERS  º│ │  · While the channel moves data into/from     │
│ ─BºSELECTORSº│ │   the buffer, the thread continues another job│
└──────────────┘ │  · When data is ready, the thread is notified │
Channel  : File,Datagram/UDP,Socket/TCP,ServerSocket,...
Buffer of: Byte|Char|Double|Float|Int|Long|Short|MappedByte)Buffer

│ ─ components like Pipe and FileLock can be considered               │
│   "utility classes" supporting the first three ones.                │
│                                                                     │
│ ─ "SELECTORS" objects monitor one+ channels for events              │
│   (connection opened, data arrived, ..):                            │
│   ─ Thus, a single thread can monitor multiple channels for data.   │
│     (Very handy if app has many connections/Channels/clients open   │
│     but with low traffic on each connection.                        │
│   ─ To use selectors:                                               │
│     ─ Instantiate the selector                                      │
│     ─ Register one+ channels with it                                │

│ºBUFFERº                                                                                          │
│ ºATTRIBUTESº                                            ºMETHODSº                                │
│          ┌─────────────────┬───────────────────────────┐ ┌─────────────┬───────────────────────┐ │
│          │ºwriteºmode      │ ºreadºmode                │ │rewind()     │                       │ │
│ ┌────────┼─────────────────┴───────────────────────────┤ │             │                       │ │
│ │capacity│ fixed size of memory block implementing     │ ├─────────────┼───────────────────────│ │
│ │        │ the buffer                                  │ │clear()      │                       │ │
│ ├────────┼─────────────────┬───────────────────────────┤ │compact()    │                       │ │
│ │position│ starts at 0,    │ starts at 0 (after "flip")│ ├─────────────┼───────────────────────│ │
│ │        │ increase at each│ increase at each          │ │mark()       │"bookmark position"    │ │
│ ├────────┼─────────────────┼───────────────────────────┤ │reset()      │ and return "bookmark" │ │
│ │   limit│ element written │ element read              │ ├─────────────┼───────────────────────│ │
│ │        │ == capacity     │ == last written position  │ │equals()     │using only the         │ │
│ └────────┴─────────────────┴───────────────────────────┘ │compareTo()  │remaining-to-read bytes│ │
│                                                          │             │for the computation    │ │
│                                                          └─────────────┴───────────────────────┘ │

│ºSEQUENCE TO READ/WRITE DATAº                  ┌───────┐
│try (  /* try-with 1.7+ */                     │SUMMARY│
│  RandomAccessFile GºaFileº =                  ├───────┴──────────────────────────
│    new RandomAccessFile("nio-data.txt", "rw") │-1 ) Write data into the Buffer
│) {  // IOException: handle in caller          │-2 ) Call buffer.ºflip()º
│  FileChannel BºinChannelº =                   │     switch writing/reading mode
│    GºaFileº.getChannel();                     │-3 ) Read data out of the Buffer
│                                               │-4a) buffer.clear();  ← alt1: clear all buffer
│  ByteBuffer Oºbufº=                           │-4b) buffer.compact() ← alt2: clear only data read
│      ByteBuffer.allocate(48 /*capacity*/);    ├────────────────────────────────────
│                                               │ channelIn → (data) → buffer
│                                               │ buffer    → (data) → channelOut
│  int ºbytesReadº=                             └────────────────────────────────────
│       BºinChannelº.read(Oºbufº);  // ← Oºbufº now
│                                          in write mode
│Rºwhileº (ºbytesReadº != -1) {
│    Oºbufº.ºflipº();               // ← Oºbufº now
│    while(Oºbufº.hasRemaining()){         in read mode
│        System.out.print(
│           (char) Oºbufº.get()     // ← alt.1: read 1 byte
│        );                                     at a time
│        // channel2.write(Oºbufº)  // ← alt.2: read data
│    }                                         in channel2
│    Oºbufº.clear();                // ← make buffer
│                                        ready-for-writing
│    ºbytesReadº = BºinChannelº     // ← Oºbufº now
│                    .read(Oºbufº);        in write mode
│  }
│}
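The same write → flip() → read → clear() cycle on a standalone ByteBuffer, no file needed (helper class is mine):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public final class BufferDemo {
    // Write a string into a buffer, flip to read mode, drain it back out.
    public static String roundTrip(String msg) {
        ByteBuffer buf = ByteBuffer.allocate(48);        // capacity fixed at creation
        buf.put(msg.getBytes(StandardCharsets.UTF_8));   // write mode: position advances
        buf.flip();                                      // limit = position, position = 0
        StringBuilder out = new StringBuilder();
        while (buf.hasRemaining()) {                     // read mode: up to limit
            out.append((char) buf.get());                // OK for ASCII payloads
        }
        buf.clear();                                     // ready for the next write
        return out.toString();
    }
}
```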

│ ºscattering channel readº                        │ºscattering-write to channelº                     │
│ - channel → read to → buffer1, buffer2, ....     │ - buffer1, buffer2, ...→ write to → channel      │
│ - Ex:                                            │ - ex:                                            │
│   ByteBuffer header = ByteBuffer.allocate(128);  │   ByteBuffer header = ByteBuffer.allocate(128);  │
│   ByteBuffer body   = ByteBuffer.allocate(1024); │   ByteBuffer body   = ByteBuffer.allocate(1024); │
│   ByteBuffer[] OºbufferArrayº = { header, body };│   ByteBuffer[] OºbufferArrayº = { header, body };│
│ Bºchannelº.read(OºbufferArrayº);                 │ Bºchannelº.write(OºbufferArrayº);                │
│            ^^^^                                  │                                                  │
│ fill up one buffer before moving on to the next  │                                                  │
│ (not suited for undefined size messages)         │                                                  │

 -ºIf one of the channels is a FileChannelº:
   - FileChannel ºtransferTo()/transferFrom()º can be used to move data between channels
   RºWARN:ºSome SocketChannel implementations may transfer only the data the SocketChannel
     has ready in its internal buffer here and now
   │  FileChannelGºfromChannelº=                 │  FileChannelGºfromChannelº=                 │
   │     (new RandomAccessFile("from.txt", "rw"))│     (new RandomAccessFile("from.txt", "rw"))│
   │     .getChannel(),                          │     .getChannel(),                          │
   │  FileChannel BºtoChannelº=                  │  FileChannel BºtoChannelº=                  │
   │     (new RandomAccessFile(  "to.txt", "rw"))│     (new RandomAccessFile(  "to.txt", "rw"))│
   │     .getChannel();                          │     .getChannel();                          │
   │  long count    =GºfromChannelº.size();      │  long count    = ;                          │
   │BºtoChannelºº.transferFromº(                 │GºfromChannelºº.transferToº(                 │
   │     GºfromChannelº,                         │      0 /*position*/,                        │
   │       0       , // dest-file to    │    GºfromChannelº.size() /*count*/,         │
   │                 // start writing from       │    BºtoChannelº);                           │
   │       maxCount  /* max-bytes to transfer*/  │                                             │
   │  );                ^^^^^^^^^                │                                             │
   │                    constrained by data      │                                             │
   │                    in source                │                                             │
API tree
           Bits ByteOrder CharBufferSpliterator
           HeapByteBuffer Heap(Byte|Char|...)Buffer(R) HeapCharBuffer

                    Channel Channels CompletionHandler FileLock MembershipKey Pipe Selector SelectionKey

                   Charset(|Decoder|Encoder) StandardCharsets
                   CoderResult CodingErrorAction

                          FileAttribute FileTime

                AccessMode CopyMoveHelper CopyOption DirectoryStream Files
                LinkOption LinkPermission Path        PathMatcher        Paths
                OpenOption    Standard(Copy|Open)Option
                Watchable      Watch(Event|Key|Service)

- A Selector allows a single thread to manage multiple channels
  (network connections), by examining which ones are ready for
  I/O events (connect, accept, read, write).

- A channel that "fires an event" is also said to be "ready" for that event.

ºREGISTERING A SELECTORº                              │ºUSING SELECTORSº
ºAND ASSIGNING CHANNELSº                              │
    │  Selector BºselectoRº =;        │ ºSTEP 1º
    │  channel.configureBlocking(false);              │  call one of the select() methods
    │          ^^^^^^^^^^^^^^^^^^^^^^^^               │  (upon registering 1+ channels)
    │   //     non-blocking-mode required             │  int select(long mSecTimeout) ← block until channel/s ready
    │   // RºWARN:º FileChannel can NOT be switched   │             └────(optional)┘
    │   //   into NON-blocking mode and so            │  int selectNow()              ← Don't block even if none read
    │   //   they can NOT be used with selectors.     │  └┬┘
┌───→GºSelectionKey keyº = channel.register(          │  indicates how many channels became ready since last select() call.
│   │    Bºselectorº,                                 │
│   │    SelectionKey.OP_READ |                       │ ºSTEP 2º
│   │    SelectionKey.OP_WRITE);                      │  examine ready channels returned by select like:
│   │                 ^^^^^^^                         │  Set˂SelectionKey˃ selectedKeys =
│   │                 Or-set of interest:             │                    BºselectoRº.OºselectedKeys()º;
│   │                 OP_CONNECT / OP_ACCEPT          │  Iterator˂SelectionKey˃ keyIterator =
│   │                 OP_READ    / OP_WRITE           │                    selectedKeys.iterator();
│   │  ^^^^^^^^^^^^^^^^                               │  while(keyIterator.hasNext()) {
│   │                                                 │    GºSelectionKey keyº=;
│ ┌─→Gºkeyº.attach(extraInfoObject);                  │      //  "cast to proper channel"
│ │ │  Object attachedObj =                           │             if (Gºkeyº.isAcceptable ()) {
│ │ │     selectionKey.attachment();                  │        ... connection accepted by ServerSocketChannel
│ │ │                                                 │      } else if (Gºkeyº.isConnectable()) {
│ │ │                                                 │        ... connection established with remote server
│ │ │ // After selection                              │      } else if (Gºkeyº.isReadable   ()) {
│ │ │ // ^^^^^^^^^^^^^^^                              │        ... channel ready for reading
│ │ │ // explained later                              │      } else if (Gºkeyº.isWritable   ()) {
│ │ │                                                 │        ... channel ready for writing
│ │ │ // Alternative 1:                               │      }
│ │ │ int OºreadySetº= Gºkeyº.readyOps();             │      keyIterator.remove();
│ │ │ boolean isAcceptable  =                         │  }
│ │ │         OºreadySetº ⅋ SelectionKey.OP_ACCEPT;   │ ºSTEP 3º
│ │ │ ...                                             │Bºselectorº.close()
│ │ │ // Alternative 2:                               │            ^^^^^
│ │ │ Gºkeyº.isAcceptable();                          │   must be called after finishing usage,
│ │                                                   │   invalidating all SelectionKey instances
│ └─── (optional) user attached object,               │   registered with this Selector.
│      handy way to recognize a given                 │   The channels themselves are not closed.
│      channel, provide extra info
│      (buffer/s,...)
└─── Gºkeyº can be queried like:
       intOºinterestSetº = Gºkeyº.interestOps()*;
       boolean isInterestedInAccept
           = OºinterestSetº ⅋ SelectionKey.OP_ACCEPT;

  - A thread blocked by a call to select() can be forced to leave the select() method,
     even if no channels are yet ready by having a different thread call
     the BºselectoRº.ºwakeup()º method on the Selector which the first thread has
     called select() on:
     - The thread waiting inside select() will then return immediately.
     - If a different thread calls wakeup() and no thread is currently
       blocked inside select(), the next thread that calls select()
       will "wake up" immediately.
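The wakeup() behaviour described above can be sketched as a tiny runnable example (the class name WakeupDemo and the 200 ms delay are made up for illustration):

```java
import java.nio.channels.Selector;

public class WakeupDemo {
    // Blocks in select() with no registered channels; a second thread
    // calls wakeup() after ~200 ms, forcing select() to return 0.
    public static int selectWithWakeup() throws Exception {
        Selector selector =;
        Thread waker = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            selector.wakeup();          // forces the blocked select() to return
        });
        waker.start();
        int ready =;  // would block forever without the wakeup()
        waker.join();
        selector.close();
        return ready;
    }
    public static void main(String[] args) throws Exception {
        System.out.println("select() returned " + selectWithWakeup());
    }
}
```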
- Java NIO FileChannel: channel connected to a file allowing to
      read data from  and write data to a file.
- A FileChannel canNOT be set into non-blocking mode:
  It always runs in blocking mode

- Reading from FileChannel (Writing to buffer):
  |/* You cannot open a FileChannel directly:
  | * first obtain one via an (Input|Output)Stream or a RandomAccessFile
  | */
  |// Reading from channel
  |try (  /* try-with-resources 1.7+ */
  |  RandomAccessFile GºaFileº = new RandomAccessFile("data/nio-data.txt", "rw")
  |) {
  |  FileChannel BºinChannelº = GºaFileº.getChannel();
  |  ByteBuffer Oºbufº = ByteBuffer.allocate(48 /* capacity*/);
  |  int ºbytesReadº = BºinChannelº.read(Oºbufº); // Oºbufº now in write mode
  |  while (ºbytesReadº != -1) {
  |    Oºbufº.flip();                            // Oºbufº now in read mode
  |    while(Oºbufº.hasRemaining()){
  |        // alt. read data directly, 1 byte at a time
  |        System.out.print((char) Oºbufº.get());
  |        // alt. read data in channel
  |        // anotherChannel.write(Oºbufº)
  |    }
  |    Oºbufº.clear(); //make buffer ready for writing
  |    ºbytesReadº = BºinChannelº.read(Oºbufº); // Oºbufº now in write mode
  |  }
  |} // close try-with-resources

- Writing to a FileChannel (reading from buffer)
  | String newData = "......" + System.currentTimeMillis();
  | ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  | Oºbufº.clear();
  | Oºbufº.put(newData.getBytes());
  | Oºbufº.flip(); // change buffer from write to read
  | ºwhile(Oºbufº.hasRemaining()) {º channelº.writeº(Oºbufº); º}º
  | channel.close();

- FileChannel Position
  | long pos = fileChannel.position(); // obtain current position
  | fileChannel.position(pos +123); // change position

   - If you set the position after the end of the file,
     and try to read from the channel, you will get -1
   - If you set the position after the end of the file,
     and write to the channel, the file will be expanded
     to fit the position and written data. This may result
     in a "file hole", where the physical file on
     the disk has gaps in the written data.
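A minimal sketch of the position-past-EOF behaviour described above (class/method names FileHoleDemo/writePastEnd are hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileHoleDemo {
    // Positions the channel past EOF of an (empty) file: read() returns -1,
    // but write() extends the file, leaving a "hole" before the written byte.
    public static long writePastEnd(Path file) throws Exception {
        try (FileChannel ch =,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            ch.position(1000);                             // beyond EOF
            int r =;
            if (r != -1) throw new IllegalStateException("read past EOF must return -1");
            ch.write(ByteBuffer.wrap(new byte[]{'x'}));    // file grows to 1001 bytes
            return ch.size();
        }
    }
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("hole", ".bin");
        System.out.println("size after write past EOF = " + writePastEnd(tmp));
        Files.delete(tmp);
    }
}
```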

- FileChannel Size
  | long fileSize = fileChannel.size();
                            size of the file
                            connected to channel

- FileChannel (file) Truncate
  | fileChannel.truncate(1024 /*length*/);

- FileChannel Force:
  flushes all unwritten data from the channel and OS cache to the disk
  | channel.force(true /* flush also file meta-data like permissions....*/);
- Pipe: one-way data connection between two threads
  └"=="  sink channel     ← one thread writes to the sink
       + source channel   ← another thread reads from the source
    ByteBuffer Oºbufº = ByteBuffer.allocate(48);

    Pipe pipe =;
    Pipe.SinkChannel sinkChannel = pipe.sink();
    String newData = "..." + System.currentTimeMillis();
    Oºbufº.put(newData.getBytes());
    Oºbufº.flip();  // switch Oºbufº to read mode before writing it to the sink
    while(Oºbufº.hasRemaining()) { sinkChannel.write(Oºbufº); }

    To read from a Pipe you need to access the source channel:
    Pipe.SourceChannel sourceChannel = pipe.source();
    ByteBuffer buf2 = ByteBuffer.allocate(48);
    int ºbytesReadº =;
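The pipe write/read steps above can be sketched as a self-contained round trip (class name PipeDemo is made up for the example):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

public class PipeDemo {
    // Writes a message into the pipe's sink and reads it back from the source.
    public static String roundTrip(String msg) throws Exception {
        Pipe pipe =;
        ByteBuffer buf = ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8));
        while (buf.hasRemaining()) { pipe.sink().write(buf); }

        ByteBuffer in = ByteBuffer.allocate(1024);
        int n = pipe.source().read(in);  // blocks until data is available
        in.flip();
        byte[] data = new byte[n];
        in.get(data);
        return new String(data, StandardCharsets.UTF_8);
    }
    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("hello pipe"));
    }
}
```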
  There are two ways a SocketChannel can be created:
  - you open one and connect it to a server, or
  - one is handed to you by ServerSocketChannel.accept() (incoming connection).

  // Opening a SocketChannel
  SocketChannel socketChannel =;
  socketChannel.connect(new InetSocketAddress("", 80));

  // Reading (writing to buffer)
  ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  int ºbytesReadº =ºbufº); // If -1 is returned, end-of-stream is reached (connection is closed)

  // Writing to a SocketChannel
  String newData = "..." + System.currentTimeMillis();
  ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  Oºbufº.put(newData.getBytes());
  Oºbufº.flip();  // buffer must be in read mode before writing it to the channel
  while(Oºbufº.hasRemaining()) { socketChannel.write(Oºbufº); }

Non-blocking Mode
- socketChannelº.configureBlocking(false)º;
- Calls to connect(), read() and write() will not block
- In non-blocking mode connect() calls may return before
  the connection is established:
  - To determine whether the connection is established
    use finishConnect() like this:

  | socketChannel.configureBlocking(false);
  | socketChannel.connect(
  |   new InetSocketAddress("", 80));
  | while(! socketChannel.finishConnect() ){
  |     //wait, or do something else...
  | }

NOTE: non-blocking works much better with Selector's
ServerSocketChannel serverSocketChannel =;

serverSocketChannel.socket().bind(new InetSocketAddress(9999));

while (true) {
    SocketChannel socketChannel =
            serverSocketChannel.accept(); // in blocking mode waits until an incoming connection arrives
    if (socketChannel != null /* never null in blocking mode; can be null in non-blocking mode */) {
        //do something with socketChannel...
    }
}
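In non-blocking mode accept() returns immediately, with null when no connection is pending; a minimal sketch (class name NonBlockingAccept and the ephemeral port are made up for the example):

```java
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAccept {
    // In non-blocking mode accept() returns null when no connection is
    // pending, instead of blocking like it does in blocking mode.
    public static boolean acceptReturnsNull() throws Exception {
        try (ServerSocketChannel server = {
            server.bind(new InetSocketAddress(0)); // port 0 = any free port (example)
            server.configureBlocking(false);
            SocketChannel client = server.accept(); // nobody connected yet → null
            return client == null;
        }
    }
    public static void main(String[] args) throws Exception {
        System.out.println("accept() returned null: " + acceptReturnsNull());
    }
}
```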

Datagram Channel
- Since UDP is a connection-less network protocol, you cannot simply
  read from and write to a DatagramChannel as you do with other
  channels. Instead you send and receive packets of data.

  | DatagramChannel channel =;
  | channel.socket().bind(new InetSocketAddress(9999));
  | ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  | Oºbufº.clear();
  | // WARN: if read data is bigger than buffer size remaining data is discarded silently
  | channel.receive(Oºbufº);
  | // Write to channel
  | String newData = "..." + System.currentTimeMillis();
  | Oºbufº.clear();
  | Oºbufº.put(newData.getBytes());
  | Oºbufº.flip();
  | // WARN:  No notice is received about packet delivery (UDP does not make any guarantees)
  | int bytesSent = channel.send(Oºbufº, new InetSocketAddress("", 80));
  | // Alternatively you can "Connect" to a Specific Address. Since UDP is connection-less,
  | // connecting to a remote address just means that the DatagramChannel can only send/receive
  | // data packets from a given specific address.
  | channel.connect(new InetSocketAddress("", 80));
  | int ºbytesReadº =ºbufº);
  | int bytesSent = channel.write(Oºbufº);
NonBlocking Server
- @[]
- @[]
- Non-blocking IO Pipelines:
read-write pipeline: ºchannelInº → selector → component → ... → componentN → ºchannelOutº
read-only  pipeline: ºchannelInº → selector → component → ... → componentN
write-only pipeline:                        component → ... → componentN → ºchannelOutº
Note: It is the component that initiates reading of data from the Channel via the Selector
read-pipelines read from the stream/channelIn and split the data into messages like:

Data   → Message → Message
Stream   Reader    Stream

-ºA blocking Message Reader/Writer is simpler, since itº
 ºhas never to handle situations where no data was readº
 ºfrom the stream, or where only a partial message wasº
 ºread from the stream and message parsing needs to beº
 ºresumed later.º
-ºThe drawback of blocking is the requirement of separateº
 ºthreads for each parallel stream, which is a problem if theº
 ºserver has thousands of concurrent connectionsº
- Each thread will take between 320K (32 bit JVM) and
  1024K (64 bit JVM) memory for its stack
- Message queues can be used to reduce the thread count. However,
  this design requires that the inbound client streams send data
  reasonably often and that the input is processed fast.
  If the inbound client streams may be inactive for longer periods,
  a high number of inactive connections may end up blocking all the
  threads in the thread pool. That means that the server becomes
  slow to respond, or even unresponsive.
- A non-blocking IO pipeline can use a single thread to
  read messages from multiple non-blocking streams.
    When in non-blocking mode, a stream may return 0 or more
  bytes when you attempt to read data from it.
  When you call select() or selectNow() on the Selector it
  gives you only the SelectableChannel instances ("connected
  clients") that actually have data to read.

OºComponent ──→ STEP 1: select() ──→ Selector ←──┬─→ Channel1º
Oº    ↑                                │         ┼─→ Channel2º
Oº    └───← STEP 2: ready channels ←───┘         └─→ Channel3º

- Reading Partial Messages: Data sent by "ready" channels can
  contain fractions/incomplete messages:
  - The Message Reader needs to check whether the data block
    contains at least one full message, and store partial ones.
    (maybe using one Message Reader per Channel to avoid mixing messages)
  - To store Partial Messages two design goals should be balanced:
    - copy data as little as possible, for better performance
    - keep full messages in consecutive bytes, to make
      parsing messages easier
  - Some protocol message formats are encoded using a TLV format
    (Type, Length, Value).
    Memory management is much easier, since we know immediately
    how much memory to allocate for the message. No memory is
    wasted at the end of a buffer that is only partially used.
  - The fact that TLV encodings make memory management easier is
    one of the reasons why HTTP 1.1 (which is not length-prefixed)
    is such a terrible protocol to parse. That is one of the problems
    fixed in HTTP 2.0, where data is transported in LTV
    (Length, Type, Value) encoded frames.
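A minimal, hypothetical TLV decoder sketch, showing why length-prefixing simplifies partial-message handling (the 1-byte type / 2-byte length layout is an assumption for illustration, not any real protocol):

```java
import java.nio.ByteBuffer;

public class TlvReader {
    // Tries to extract one complete TLV message (1-byte type, 2-byte length,
    // value) from a buffer in read mode. Returns null if only a partial
    // message arrived; the buffer position is restored in that case so
    // parsing can resume when more bytes are read from the channel.
    public static byte[] tryReadValue(ByteBuffer buf) {
        if (buf.remaining() < 3) return null;           // header not complete yet
        buf.mark();
        byte type = buf.get();                          // type (unused in sketch)
        int length = buf.getShort() & 0xFFFF;           // how much memory to allocate
        if (buf.remaining() < length) {                 // partial value: wait for more
            buf.reset();
            return null;
        }
        byte[] value = new byte[length];
        buf.get(value);
        return value;
    }
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        buf.put((byte) 1).putShort((short) 5).put("hello".getBytes());
        buf.flip();
        System.out.println(new String(tryReadValue(buf)));
    }
}
```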
  - Writing Partial Messages: channelOut.write(ByteBuffer) in
    non-blocking mode gives no guarantee about how many of the
    bytes in the ByteBuffer are written. The method returns the
    number of bytes written, so it is possible to keep track
    of the written bytes.
  - Just like with the Message Reader, a Message Writer is used
    per channel to handle all the details.
   (partial writes, message queues, resizable buffers, protocol aware tricks,...)
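A per-channel Message Writer can be sketched roughly like this (the class and its API are made up for illustration; a real writer would also keep a queue of pending messages):

```java
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

public class MessageWriter {
    // Keeps the single in-flight message for one channel and retries until
    // the non-blocking channel has accepted every byte.
    private ByteBuffer inFlight;

    public void enqueue(byte[] message) {
        inFlight = ByteBuffer.wrap(message);
    }

    // Call when the Selector reports OP_WRITE; returns true when the
    // message is fully written (OP_WRITE interest can then be dropped).
    public boolean writeSome(WritableByteChannel ch) throws IOException {
        if (inFlight == null) return true;
        ch.write(inFlight);                        // may accept 0..remaining() bytes
        if (inFlight.hasRemaining()) return false; // retry on the next OP_WRITE
        inFlight = null;
        return true;
    }
}
```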

-ºAll in all a non-blocking server ends up with three "pipelines" itº
 ºneeds to execute regularly:º
  - The read pipeline which checks for new incoming data from
    the open connections.
  - The process pipeline which processes any full messages received.
  - The write pipeline which checks if it can write any outgoing
    messages to any of the open connections
Path (1.7+)
- Represents a file/directory path in the FS
- Similar to, but with some minor differences.
// Usage
import java.nio.file.Path;
import java.nio.file.Paths;

Path path = Paths.get("/var/lib/myAppData/myfile.txt");
System.out.println("Current dir:"+Paths.get(".").toAbsolutePath());
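A few more common Path operations, sketched with hypothetical paths (the printed results assume a Unix file system):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class PathDemo {
    public static void main(String[] args) {
        Path base = Paths.get("/var/lib/myAppData");
        Path file = Paths.get("/var/lib/myAppData/logs/../myfile.txt");

        System.out.println(file.normalize());        // removes "logs/.." → /var/lib/myAppData/myfile.txt
        System.out.println(base.relativize(file.normalize())); // myfile.txt
        System.out.println(file.getFileName());      // last element: myfile.txt
        System.out.println(base.resolve("conf/app.yml")); // /var/lib/myAppData/conf/app.yml
    }
}
```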
- java.nio.file.Files provides several methods for manipulating FS files/directories:
- uses Path instances:

boolean pathExists = ºFiles.existsº(pathInstance,
            new LinkOption[]{ LinkOption.NOFOLLOW_LINKS});

Path newDir = ºFiles.createDirectoryº(path);

ºFiles.copyº(sourcePath, destinationPath);
ºFiles.copyº(sourcePath, destinationPath, StandardCopyOption.REPLACE_EXISTING);

ºFiles.moveº(sourcePath, destinationPath, StandardCopyOption.REPLACE_EXISTING);


Files.walkFileTree(Paths.get("data"), new SimpleFileVisitor˂Path˃() {
  @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
    return FileVisitResult.CONTINUE; }

  @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    return FileVisitResult.CONTINUE; }

  @Override public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
    return FileVisitResult.CONTINUE; }

  @Override public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
    return FileVisitResult.CONTINUE; }
});
Asynchronous FileChannel 1.7+
read/write data from/to files asynchronously

Path path = Paths.get("data/test.xml");
AsynchronousFileChannel fileChannel =, StandardOpenOption.READ);

// Reading Data, Alt 1: Via a Future
  ByteBuffer buffer = ByteBuffer.allocate(1024);
  Future˂Integer˃ operation =, 0 /* start position to read from */);
  while(!operation.isDone());  // WARN: busy-waiting, not a very efficient use of the CPU
  buffer.flip();
  byte[] data = new byte[buffer.limit()];
  buffer.get(data);
  System.out.println(new String(data));

// Reading Data Alt 2: Via a CompletionHandler, position, buffer, new CompletionHandler˂Integer, ByteBuffer˃() {
    public void completed(Integer numBytesRead, ByteBuffer attachment) {
        // NOTE: attachment is a reference to the third parameter passed to read()
        System.out.println("numBytesRead = " + numBytesRead);
        attachment.flip();
        byte[] data = new byte[attachment.limit()];
        attachment.get(data);
        System.out.println(new String(data));
    }

    public void failed(Throwable exc, ByteBuffer attachment) { ...  }
});

// Writing data:
AsynchronousFileChannel fileChannel =, StandardOpenOption.WRITE);

// Writing Data: Alt 1: Via a Future
  Future operation = fileChannel.write(buffer, position);

// Writing Data: Alt 2: Via CompletionHandler
  fileChannel.write(buffer, position, buffer, new CompletionHandler˂Integer, ByteBuffer˃() {

      @Override public void completed(Integer result, ByteBuffer attachment) { /* ... */ }
      @Override public void failed   (Throwable exc , ByteBuffer attachment) { /* ... */ }
  });
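A complete, runnable write-via-Future example (class name AsyncWriteDemo and the file/data are placeholders):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncWriteDemo {
    // Writes data asynchronously at position 0 and waits on the Future
    // for completion; returns the number of bytes written.
    public static int writeAsync(Path file, String data) throws Exception {
        try (AsynchronousFileChannel ch =,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            Future<Integer> op = ch.write(ByteBuffer.wrap(data.getBytes()), 0);
            return op.get(); // blocks until the asynchronous write completes
        }
    }
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("async", ".txt");
        System.out.println("bytes written: " + writeAsync(tmp, "hello async"));
        Files.delete(tmp);
    }
}
```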
API REF: @[]
JEP: @[]

TODO: HTTPClient Quick intro:
- BºOkHTTP vsº
From @[]
 BºOkHTTP PROs over        º          RºOkHTTP CONs over        º
 Bºº          Rºº
 - built-in response cache.           - timeout-like configuration cannot be
 - web sockets.                         modified on a singleton client/connection.
 - Simpler API.                       - Requires (small) extra non-JDK dependencies
 - Better defaults                      (okIO and okHTTP itself) in non-Android
 - easier to use efficiently.           deployments
 - Better URL model.
 - Android support. (RºHTTPClientº
 Rºsupported in Android?º)
 - Better cookie model.
 - Better headers model.
 - Better call model.
 - canceling calls is easy.
 - Carefully managed TLS defaults
   secure and widely compatible.
 - Retrofit compatibility
   (Brilliant API for REST).
 - works with okIO, a great library
   for data streams.
 - less code to learn
 - 1+ billion Android devices
   using it internally
 - Standard in Android 5.0+ (API level 21+).



  import okhttp3.OkHttpClient;
  import okhttp3.Request;
  import okhttp3.Response;
  import okhttp3.MediaType;                         // ← For POST
  import okhttp3.RequestBody;                       // ← For POST
  OkHttpClient client = new OkHttpClient();
  Request request = new Request.Builder()
      .url("")               // ← hypothetical example URL
      .build();
  try (
    Response res =
       client.newCall(request).execute()            // ← Exec. (GET) Request
  ) {
    return res.body().string();
  }

  final MediaType JSON = MediaType.get("application/json; charset=utf-8");
  final String jsonBody = "{...}";
  RequestBody body = RequestBody.create(jsonBody, JSON); // ← POST: prepare request body
  Request request = new Request.Builder()
      .url("")               // ← hypothetical example URL
      .post(body)                                   // ← POST: attach request body
      .build();
  try (
    Response res = client.newCall(request).execute()
  ) {
    return res.body().string();
  }

- "efficient by default".

- HTTP/2 support allows all requests to the same host to share a socket.
- Connection pooling reduces request latency (if HTTP/2 isn't available).
- Transparent GZIP shrinks download sizes.
- Response caching avoids the network completely for repeat requests.

- OkHttp perseveres when the network is troublesome: it will silently recover
  from common connection problems.
BºIf target service has multiple IP addresses OkHttp will attempt alternate
  addresses if the first connect failsº.
BºThis is necessary for IPv4+IPv6 and for services hosted in redundant data centersº.

   OkHttp supports modern TLS features (TLS 1.3, ALPN, certificate pinning). It
  can be configured to fall back for broad connectivity.

- request/response API is designed with ºfluent builders and immutabilityº
- sync / async (callback) call API.

- Ex: Balancing connections with OKHttp:

- Recipes:
okIO
- complements and java.nio.
- makes it much easier to access, store, and process your data.
- It started as a component of OkHttp, the capable HTTP client
  included in Android. It's well-exercised and ready to solve new problems.
Debug remote (container) JVM
• STEP 1: inject next ENV.VAR into app JVM (standard JDWP agent syntax):
  JAVA_TOOL_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8001"
• STEP 2: Export debugging port (e.g.: 8001)

• STEP 3: In JAVA IDE configure debugger to attach to remote machine
  (e.g., in IntelliJ , add remote debugger in Run/Debug Configurations)
$º$ man 1 jcmd º

- Sends diagnostic command requests to a running JVM.

- It must be used on the same machine on which the JVM is running and
have the same effective user and group identifiers that were used to
launch the JVM.

- Usage Summary:
  $ jcmd [-l]  # ← print list of running Java PIDs.

  $ jcmd pid|main-class PerfCounter.print  ← Send diagnostic command PerfCounter.print to PID JVM
                                             $ jcmd ˂pid˃ help to see the list of available diagnostic commands

  $ jcmd pid|main-class -f filename        ←  file from which to read diagnostic commands to send to JVM

  $ jcmd pid|main-class command[ arguments]

 $º$ jcmd $PID º
  → The following commands are available:
  → Compiler.CodeHeap_Analytics  → GC.class_histogram
  → Compiler.codecache           → GC.class_stats
  → Compiler.codelist            → GC.finalizer_info
  → Compiler.directives_add      → GC.heap_dump
  → Compiler.directives_clear    → GC.heap_info
  → Compiler.directives_print    →
  → Compiler.directives_remove   → GC.run_finalization
  → Compiler.queue

  → VM.class_hierarchy          → ManagementAgent.start
  → VM.classloader_stats        → ManagementAgent.start_local
  → VM.classloaders             → ManagementAgent.status
  → VM.command_line             → ManagementAgent.stop
  → VM.dynlibs
  → VM.flags                    → Thread.print
  → VM.log
  → VM.metaspace                → JFR.check
  → VM.native_memory            → JFR.configure
  → VM.print_touched_methods    → JFR.dump
  → VM.set_flag                 → JFR.start
  → VM.stringtable              → JFR.stop
  → VM.symboltable
  → VM.system_properties        → JVMTI.agent_load
  → VM.systemdictionary         → JVMTI.data_dump
  → VM.uptime
  → VM.version
  → help
Flight Recorder
- Free to use starting with Java 11+ and backported to OpenJDK 8u272+
- (JEP 328)
- created originally in 1998 by students from the Royal
  Institute of Technology in Stockholm as part of the JRockit JVM
  distribution by Appeal Virtual Machines.
- built directly into the JDK, it Bºcan monitor performance accuratelyº.
  with about only Bº2% overhead(production friendly)º.
- accurate metrics that avoid misleading readers, sidestepping
  common safepoint/sampling problems (@[#JVM_safepoints]) of sampling profilers.      [comparative]
  │ JRE ┌────────┐ │
  │     │ JFR    ←--- Output profiling to 'myRecording.jfr'
  │     │ engine │ │  - compact log of OºJVM eventsº: ~100.000 events with many stack traces: ~2-4MB
  │     └────────┘ │  - UseºJava Mission Control (JMC)º to read events.
  $ java ... Oº-XX:StartFlightRecordingº ... ← Alternatively launch JFR from JMC Visual IDE

  - JFR default metrics focus on the JVM's raw operations:
    (vs high-level metrics like request/response times)
    -ºadvanced garbage collection analysisº:
      Include garbage collection statistics, details on what garbage
      was collected and who threw it away.
      allows developers to improve performance by:                   [performance]
       - identifying what to improve
       - realizing when tuning the GC is the wrong solution.
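Besides the -XX:StartFlightRecording flag, JDK 11+ also exposes a programmatic jdk.jfr API; a minimal sketch (class name JfrDemo is made up, and the jdk.jfr module must be present, as in standard JDK builds):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import jdk.jfr.Recording;

public class JfrDemo {
    // Starts an in-process recording, runs a tiny workload,
    // then dumps the collected events to a .jfr file.
    public static long recordTo(Path out) throws Exception {
        try (Recording recording = new Recording()) {
            recording.start();
            byte[][] garbage = new byte[100][];
            for (int i = 0; i < garbage.length; i++) {
                garbage[i] = new byte[1024];   // small allocation workload
            }
            recording.stop();
            recording.dump(out);               // write events collected so far
        }
        return Files.size(out);
    }
    public static void main(String[] args) throws Exception {
        Path out = Files.createTempFile("myRecording", ".jfr");
        System.out.println("wrote " + recordTo(out) + " bytes to " + out);
        Files.delete(out);
    }
}
```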

BºJava Mission Control (JMC):º
- @[]
- UI to analyze data collected by Flight Recorder and heap dumps, ºbuilt on top of Eclipse IDE.º
- overview of all locally running Java processes, statistics,
  heap dumps, flame view (ºshow the aggregate of stack traces for the selected eventsº),

- JMC 8+ RoadMap:
  - New allocation event (introduced in JDK 16).
  - Improve Flame Graph and Graph Views for memory usage (vs CPU usage).

- JMC 8: (2021-04)
  - new graphs and heap dump analysis by default.
  - Can also be used as a library for parsing/processing .jfr files.
  - JMX Console can be used to continuously monitor an environment,
    interact with MBeans, invoke jcmd diagnostics commands remotely,...

  - Flight Recording used to create a recording
  - Dump Heap        used to create a heap dump

  - low overhead in production environments:
  -ºJOverflow:ºplugin with advanced analysis of heap dumps, included by default.
               converted to the Standard Widget Toolkit (SWT).
               It also offers insights and optimizing hints to developers.
               like Hierarchical Treemaps, may be used to improve the heap usage of the application.
               - Usage:
                 create and open a heap dump in the JMC application.

  - New Graph View:ºdirected graphºwhere each node contains an individual method.
    NOTE: WebKit required ($º# apt install libwebkit2gtk-4.0-devº), doesn't work (yet) on Windows.

  - first release of ºJMC Agentº:
    - Allows JFR events to be added declaratively to any codebase.
    - Events can be used to capture (fields, parameter,...) values.

  - Rules API 2.0: Allow to use intermediate results from other rules.

  - JFR Writer is introduced as a new core module.

CRaSH shell
• Features:
  ✓ Connect to any JVM running CRaSH through SSH, telnet or web.
  ✓ Monitor and/or use virtual machine resources:
    JMX, database access, threads, memory usage, ...
  ✓ Embed CRaSH and expose services via a command line interface.
  ✓ Hot reload provides rapid development.
  ✓ Officially embedded as Spring Boot remote shell.

Eclipse Mem. Analyzer
"""The Eclipse Memory Analyzer is a fast and feature-rich Java heap
  analyzer that helps you find memory leaks and reduce memory consumption.
  analyzer that helps you find memory leaks and reduce memory consumption.

  Use the Memory Analyzer to analyze productive heap dumps with hundreds of
  millions of objects, quickly calculate the retained sizes of objects, see
  who is preventing the Garbage Collector from collecting objects, run a
  report to automatically extract leak suspects."""

It can provide reports and warnings similar to:
  (REF: @[])
  The classloader/component "sun.misc.Launcher$AppClassLoader@0x123412"
  occupies 607,654,123(38,27%) bytes.
RºThe memory is accumulated in one instanceº of
  java.util.LinkedList$Entry loaded by 'system class loader'
SystemTap
[root@spark ~]# yum install systemtap systemtap-runtime-java

JAVA                                              SystemTap Profiling script
package com.premiseo;                             #!/usr/bin/env stap

import java.lang.*;                               global counter,timespent,t
import;                 probe begin {
import;                         printf("Press Ctrl+C to stop profiling\n")
class Example {                                     timespent=0
   public static void                             }
     loop_and_wait(int n)
         throws InterruptedException{             probe java("com.premiseo.Example").class("Example").method("loop_and_wait")
         System.out.println(                      {
            "Waiting "+n+"ms... Tick");             counter++
         Thread.sleep(n);                           t=gettimeofday_ms()
     }                                            }

   public static void main(String[] args) {       probe java("com.premiseo.Example").class("Example").method("loop_and_wait").return
      System.out.println("PID = "+                { timespent+=gettimeofday_ms()-t
          ManagementFactory.getRuntimeMXBean()    }
              .getName().split("@")[0]);          probe end {
      System.out.println(                            printf("Number of calls for loop_and_wait method: %ld \n",    counter)
              "Press key when ready ...");           printf("Time Spent in method loop_and_wait: %ld msecs \n", timespent)
      try {                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        BufferedReader in =                       profiling loop_and_wait:
           new BufferedReader(                    counts number of times the
              new InputStreamReader(;  loop_and_wait method has been called,
        String next = in.readLine();              and the time spent in this method execution.
      } catch (IOException ioe) {

      try {
        for (int i=0;i˂10;i++) {
      } catch (InterruptedException ie) {
Fast Thread
• Features:
  ✓ Java Thread Dump Analyzer
  ✓ Troubleshoot JVM crashes, slowdowns, memory leaks, freezes, CPU Spikes
  ✓ Instant RCA (don't wait for Vendors)
  ✓ Machine Learning
  ✓ Trusted by 4000+ enterprises
  ✓ Free Service
GCeasy
• Features:
  ✓ machine learning guided Garbage collection log analysis tool.
    GCeasy has in-built intelligence to auto-detect problems in the JVM ⅋ Android
    GC logs and recommend solutions to it.
  ✓ Solve Memory ⅋ GC problems in seconds
  ✓ Get JVM Heap settings recommendations
  ✓ Machine Learning Algorithms
  ✓ Trusted by 4,000+ enterprises
  ✓ Free
  ✓ A perfect DevOps tool!
  ✓ Made by the developers, for the developers
libperfagent (perf agent)
Extracted from:
  "Apache Spark @Scale: A  production use case"
  ...  Tools we used to find performance bottleneck
  - Spark Linux Perf/Flame Graph support: Although the two tools
    above are very handy, they do not provide an aggregated view of CPU
    profiling for the job running across hundreds of machines at the same
    time. On a per-job basis, Bºwe added support for enabling Perf º
  Bºprofiling (via libperfagent for Java symbols) and can customize the º
  Bºduration/frequency of sampling. The profiling samples are aggregatedº
  Bºand displayed as a Flame Graph across the executors using our       º
  Bºinternal metrics collection framework.                              º
Uber JVM Profiler: Tracing at scale
Our JVM Profiler supports a variety of use cases, most notably making
it possible to instrument arbitrary Java code. Using a simple
configuration change, the JVM Profiler can attach to each executor in
a Spark application and collect Java method runtime metrics. Below,
we touch on some of these use cases:
- Right-size executor: We use memory metrics from the JVM Profiler
  to track actual memory usage for each executor so we can set the
  proper value for the Spark “executor-memory” argument.
- Monitor HDFS NameNode RPC latency: We profile methods on the
  class org.apache.hadoop.hdfs...ClientNamenodeProtocolTranslatorPB
  in a Spark application and identify long latencies on NameNode calls.
  We monitor more than 50 thousand Spark applications each day with
  several billions of such RPC calls.
- Monitor driver dropped events: We profile methods like
  org.apache.spark.scheduler.LiveListenerBus.onDropEvent to trace
  situations during which the Spark driver event queue becomes too long
  and drops events.
- Trace data lineage: We profile file path arguments on the method
org.apache.hadoop.hdfs...getBlockLocations and

Uber JVM Profiler provides a Java Agent to collect various metrics
and stacktraces for Hadoop/Spark JVM processes in a distributed way,
for example, CPU/Memory/IO metrics.

Uber JVM Profiler also provides advanced profiling capabilities to
trace arbitrary Java methods and arguments in user code without
requiring any user-code changes. This feature can be used to trace
HDFS NameNode call latency for each Spark application and identify
NameNode bottlenecks. It can also trace the HDFS file paths each
Spark application reads or writes and identify hot files for further
optimization.
This profiler was initially created to profile Spark applications,
which usually have dozens or hundreds of processes/machines for a
single application, so people can easily correlate the metrics of
these different processes/machines. It is also a generic Java Agent
and can be used with any JVM process.
Concurrent Programming
External Links
- Youtube Concurrency Classes:
@[] [RºES langº]
1uSec Thread sync
- If caches are so in-sync with one another, why do we need volatiles at all in
  languages like Java?

  That’s a very complicated question that’s better answered elsewhere, but
  let me just drop one partial hint. Data that’s read into CPU registers, is
  not kept in sync with data in cache/memory. The software compiler makes all
  sorts of optimizations when it comes to loading data into registers, writing it
  back to the cache, and even reordering instructions. This is all done
  assuming that the code will run single-threaded. Hence, any data that is
  at risk of race conditions needs to be manually protected through concurrency
  algorithms and language constructs such as atomics and volatiles.

 ☞In the case of Java volatiles, part of the solution is to force all
  reads/writes to bypass the local registers, and immediately trigger cache
  reads/writes instead. As soon as the data is read/written to the L1 cache, the
  hardware-coherency protocol takes over and provides guaranteed coherency across
  all global threads. Thus ensuring that if multiple threads are reading/writing
  to the same variable, they are all kept in sync with one another. And this is
  how you can achieve inter-thread coordination in as little as 1ns.
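A minimal sketch of this visibility guarantee (class and method names are illustrative): without `volatile`, the JIT may hoist the flag into a register and the worker could spin forever; with `volatile`, the writer's update reaches the reader through the cache-coherency protocol.

```java
// Sketch: a 'volatile' stop flag shared between two threads.
// Without 'volatile' on 'running', the worker might cache the flag
// in a register and never observe the writer's update.
public class VolatileFlagDemo {
    private static volatile boolean running = true;
    private static volatile long iterations = 0;

    public static long runFor(long millis) throws InterruptedException {
        running = true;
        iterations = 0;
        Thread worker = new Thread(() -> {
            long n = 0;
            while (running) { n++; }   // re-reads 'running' (never register-cached)
            iterations = n;
        });
        worker.start();
        Thread.sleep(millis);
        running = false;               // write becomes visible to the worker via L1 cache
        worker.join();                 // join() guarantees we see the final 'iterations'
        return iterations;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker looped " + runFor(50) + " times before the flag flipped");
    }
}
```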

See also: fast Inter-thread communication:
- The story begins with a simple idea: create a developer-friendly,
  simple and lightweight inter-thread communication framework without
  using any locks, synchronizers, semaphores, waits, notifies; and no
  queues, messages, events or any other concurrency-specific constructs.
  Just get POJOs communicating behind plain old Java interfaces.
Concurrency Basics
- Concurrency problems arise from the desire to use CPU resources more efficiently. Non-concurrent
  applications (single threaded/single process) are complete Turing machines that can potentially
  solve any problem given enough time and memory. In practice, keeping a CPU assigned to a single
  thread is very inefficient, since the CPU stands by while the thread waits on input/output
  operations. Also, many algorithms allow splitting the processed data into isolated regions that
  can be processed in parallel by different CPUs/CPU cores.
- Concurrency tries to solve the problem of multiple independent CPUs or threads accessing shared
  resources (memory).
- Locks are the simplest concurrency primitive to protect code or data from concurrent
  access in situations where there are many threads of execution. Locks can be classified as:
  | According to lock usage:
  |    Cooperative   A thread is encouraged (but not forced) to cooperate with other
  |                  threads by acquiring a lock before accessing the associated data
  |    Mandatory     A thread trying to access an already locked resource will throw
  |                  an exception
  | _________________________________________________
  | According to lock rescheduling strategy:
  |    Blocking      The OS blocks the thread requesting the lock and reschedules another thread
  |    Spinlock      The thread waits in a loop until the requested lock becomes available.
  |                  It's more efficient if threads are blocked for a very short time (smaller than
  |                  the time needed by the OS to reschedule another thread onto the current CPU).
  |                  It's inefficient if the lock is held for a long time, since a CPU core is
  |                  wasted on the spinlock loop
  | _________________________________________________
  | According to granularity: (measure of the amount of data the lock is protecting)
  |    Coarse        Protects large segments of data (few locks). Results in less lock overhead
  |                  for a single thread, but worse performance for many threads running concurrently
  |                  (most threads will be lock-contended, waiting for shared-resource access)
  |    Fine          Protects small amounts of data. Requires more lock instances, reducing lock
  |                  contention at the cost of extra lock overhead.

- Locks require CPU atomic instructions such as "test-and-set", "fetch-and-add", or
    "compare-and-swap" for efficient implementations, whether they are blocking
    (managed by the OS) or spinlocks (managed by the thread).
- Uniprocessors can just disable interrupts to implement locks, while multiprocessors
  using shared memory require complex hardware and/or software support.
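A sketch of a spinlock built on compare-and-swap, exposed in Java through `java.util.concurrent.atomic.AtomicBoolean` (class and method names are illustrative). It matches the "Spinlock" category above: only sensible for very short critical sections.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch: a spinlock built on the CPU's compare-and-swap, reachable in
// Java through AtomicBoolean.compareAndSet().
public class SpinLockDemo {
    private static final AtomicBoolean locked = new AtomicBoolean(false);
    static long counter = 0;

    static void lock()   { while (!locked.compareAndSet(false, true)) { /* spin */ } }
    static void unlock() { locked.set(false); }

    public static long countTo(int threads, int perThread) throws InterruptedException {
        counter = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    lock();
                    try { counter++; }        // protected critical section
                    finally { unlock(); }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter;                        // exact: no lost updates
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countTo(4, 10_000));  // prints 40000
    }
}
```
The volatile semantics of the `AtomicBoolean` give the happens-before edges that make the `counter++` updates visible across threads.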
-  ºMonitors wrap mutex-locks with condition variables (containers of threads waitingº
   ºfor a certain condition)º. They are implemented as thread-safe classes.
-ºAn object providing mutual exclusion of threads to shared resourcesº
- simplest form of synchronization:
  alternatives include:
  - reads and writes of volatile variables
    typically used when one thread makes changes to the variables
    and all other threads only read (consume) the data. If multiple
    threads make changes to the data, it is best to stick with
    synchronized blocks or use the
    java.util.concurrent library package.
    (volatile is actually simpler than monitors, but not universal)
    Important Points on Volatile Variables:
    - Volatile variables areºnot cached in registers or in cachesº:
     ºAll reads and writes are done in main memory, never thread-locallyº
    - Example usage: status flags used in spin loops
    - The volatile keywordºguarantees visibility and orderingº
  - use of classes in the java.util.concurrent package
- Monitors also have the ability to wait(block a thread) for a certain condition
  to become true, and signal other threads that their condition has been met
-ºMonitors provide a mechanism for threads to temporarily give up exclusive access inº
 ºorder to wait for some condition to be met, before regaining exclusive access and  º
 ºresuming their taskº
- each java object can be used as a monitor.
- Methods/blocks of code requiring mutual exclusion must be explicitly marked with the
Oºsynchronized keywordº:
  - The synchronized statement computes a reference to an object;
    it then attempts to perform a lock action on that object's monitor and does not
    proceed further until the lock action has successfully completed.
    After the lock action has been performed, the body of the synchronized statement
    is executed. If execution of the body is ever completed, either normally or abruptly,
    an unlock action is automatically performed on that same monitor.
  - RºWARNº: The Java programming language neither prevents nor requires detection
    of deadlock conditions.
- Instead of explicit condition variables, each monitor(/object) is equipped with
  a single wait queue in addition to its entrance queue.
- All waiting is done on this singleOºwait queueº and allOºnotify/notifyAllº
  operations apply to this queue.

ºmonitorº   enter
 ┌───┬─────── │ ──┐   - Wait sets are manipulated solely and atomically
 │  notified  v   │     through the methods
 │ ─────→         │    ºObject.waitº     : move     running thread    → wait-queue
 │   │        O   │    ºObject.notifyº   : move     thread  wait-queue → enter-queue
 │ O │        O   │    ºObject.notifyAllº: move all threads wait-queue → enter-queue
 │ O ├─────── │ ──┴─┐   Interrupt??      : put thread into to monitor enter-queue
 │ O │        v     │
 │  ←──wait   O     │  - In timed-waits  : internal action removes thread to enter-queue?
 │   │     (Running │                      after at least milliseconds plus nanoseconds
 └───┤      thread) │  - Implementations are permitted (but discouraged),
     │              │    to perform "spurious wake-ups"
     │    leave     │
     └────── │ ─────┘  O = Thread (Instruction Pointer + Stack Pointer + ...?)
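The wait/notify cycle in the diagram above can be exercised with a minimal single-slot hand-off (an illustrative sketch, not a production queue):

```java
// Sketch of the monitor diagram: the consumer wait()s on the monitor
// until the producer notifyAll()s it (and vice versa for a full slot).
public class HandOff {
    private String slot = null;

    public synchronized void put(String value) throws InterruptedException {
        while (slot != null) wait();   // producer joins the wait queue until slot is free
        slot = value;
        notifyAll();                   // move waiting threads back to the enter queue
    }

    public synchronized String take() throws InterruptedException {
        while (slot == null) wait();   // consumer waits until a value arrives
        String value = slot;
        slot = null;
        notifyAll();
        return value;
    }

    public static String demo() throws InterruptedException {
        HandOff h = new HandOff();
        Thread producer = new Thread(() -> {
            try { h.put("hello"); } catch (InterruptedException ignored) {}
        });
        producer.start();
        String got = h.take();         // blocks until the producer has put a value
        producer.join();
        return got;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());    // prints "hello"
    }
}
```
Note the `while` (not `if`) around each `wait()`: it re-checks the condition after a wake-up, which also guards against the "spurious wake-ups" mentioned above.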

CountDownLatch
• (java.util.concurrent) Since JDK 1.5
• Object allowing 1+ threads to wait until
  1+ operations are completed in other threads.

• Example usages:
  -ºon/off latch or gate:º
    When initialized to "one", parallel processing threads will
    invoke "await" and stand by, waiting for a "control" thread to
    open the gate with countDown().
  -ºparallel thread synchronization:º (barriers, ...)
    When initialized to 2+, it can be used to make a thread
    wait until "N" processing threads complete their task
    (or 1+ processing threads complete an action N times).

  - Threads calling countDown() can continue processing before the count reaches 0.
    Only threads invoking await() will wait.

- The first is a start signal that prevents any worker from
   proceeding until the driver is ready for them to proceed; the second
   is a completion signal that allows the driver to wait until all
   workers have completed.

  -  Ex. 1:
     class DriverThread { // ...

       class Worker implements Runnable {
         private final CountDownLatch OºstartSignalº;
         private final CountDownLatch BºdoneSignalº;
         Worker(CountDownLatch OºstartSignalº, CountDownLatch BºdoneSignalº) {
          Oºthis.startSignal = startSignalº;
          Bºthis.doneSignal  = doneSignalº;
         }
         public void run() {
            try {
            OºstartSignalº.await();      // ← Wait for driver-thread to be ready.
              doWork();
            BºdoneSignalº.countDown();   // ← Decrease count.
            } catch (InterruptedException ex) {} // return;
         }
         void doWork() { ... }
       }

       void main() throws InterruptedException {
         CountDownLatch OºstartSignalº=
                          newºCountDownLatch(1);º// ← 1: Gate: Avoid workers starting before
                                                 //       driver-thread is ready.

         CountDownLatch BºdoneSignalº =          // ← N: Make driver thread wait until workers
                          newºCountDownLatch(N);º//      have completed.
                                                 //   Consider alsoºCyclicBarrierº
                                                 //   (resets after count).

         for (int i = 0; i ˂ thread_number ; ++i) {
           new Thread(                           // ← Set up workers in this (driver) thread.
                new Worker(OºstartSignalº, BºdoneSignalº)).start();
         }
       OºstartSignalº.countDown();      // ← Decrease count. Count cannot be reset.
       BºdoneSignalº.await();           // ← Block until current count reaches zero.
                                        //   Then thread is released. Any subsequent
                                        //   invocations return immediately.
       }
     }

  -  Ex. 2:
     - divide problem into N parts
     - describe each part with a Runnable executing a portion,
     - queue all Runnables to an Executor.
     - When all sub-parts are complete, the coordinating thread will "pass" through await().

     class Driver2 { // ...
       class WorkerRunnable implements Runnable {
         private final CountDownLatch OºdoneSignalº;
         private final int i;
         WorkerRunnable(CountDownLatch OºdoneSignalº, int i) {
          Oºthis.doneSignal = doneSignalº;
           this.i = i;
         }

         public void run() {
            doWork();
          OºdoneSignalº.countDown();
         }
         void doWork() { ... }
       }

       void main() throws InterruptedException {
         CountDownLatch OºdoneSignalº = new CountDownLatch(N);
         Executor e = ...

         for (int i = 0; i ˂ N; ++i) // create and start threads
           e.execute(new WorkerRunnable(OºdoneSignalº, i));

       OºdoneSignalº.await();           // wait for all to finish
       }
     }
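A condensed, runnable sketch of the same start/done two-latch pattern (class and variable names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

// Runnable condensation of the start/done two-latch pattern:
// one gate latch holds all workers; one counting latch holds the driver.
public class LatchDemo {
    public static int runWorkers(int n) throws InterruptedException {
        CountDownLatch startSignal = new CountDownLatch(1);  // gate: workers wait on it
        CountDownLatch doneSignal  = new CountDownLatch(n);  // driver waits for n workers
        final int[] completed = {0};

        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                try {
                    startSignal.await();                 // blocked until the driver opens the gate
                    synchronized (completed) { completed[0]++; }
                    doneSignal.countDown();
                } catch (InterruptedException ignored) {}
            }).start();
        }
        startSignal.countDown();                         // open the gate for all workers at once
        doneSignal.await();                              // block until every worker counted down
        return completed[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(5) + " workers completed");
    }
}
```
The `countDown()` → `await()` pair gives the happens-before edge that makes the workers' increments visible to the driver.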

Scheduling: Runnables|Callables Executors
• Basic Thread objects

   Java 1.0+           Java 1.5+
  ┌────────────┐       (java.util.concurrent)
  │˂˂Runnable˃˃│
  │────────────│
  │+run()      │
  └────────────┘
  ┌────────────┐       ┌───────────────┐
  │Thread      │       │˂˂Callable˂V˃˃˃│
  │────────────│       │───────────────│
  │+run()      │       │+call()        │
  │+start()    │       └───────────────┘
  │+sleep()    │       Complements Thread, returning a result/exception to
  │....        │       the "parent" thread triggering the Callable
  └────────────┘

• Executor interface ( Java 1.5+):

  *0: Executors : Utility Factory + utility methods for Executor,(Scheduled)ExecutorService,
                  ThreadFactory, Callable) It can also create a "wrapped" ExecutorService
                  disabling reconfiguration.
  ┌╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶╶ ˂˂CompletionService˃˃                        │
  ┆                Executors *0                         △                                    │
  ┆                  ┆┆┆┆      ˂˂Executor˃˃  ←╶╶╶╶╶╶╶┐  ┆                                    │
  ┆   ˂˂Callable˃˃ ←─┘┆┆┆      void execute(Runnable)┆  ┆                                    │
  ┆                   ┆┆┆           △                ┆  ┆                                    │
  ┆ ˂˂ThreadFactory˃˃←┘┆┆           │       ExecutorCompletionService ───┐                   │
  ┆                    ┆┆           │                                    ↓                   │
  ┆                    ┆┆           │                                ˂˂BlockingQueue˃˃       │
  ┆                    ┆└╶╶→ ˂˂ExecutorService˃˃◁╴╴╴╴╴╴╴┐                    │               │
  └→ ˂˂Future˃˃ ←╶╶╶╶╶╶┆╶╶╶╶╶╶╶╶╶┘  △ *1                ┆                    │               │
       △   △           ┆            │             AbstractExecutorSevice     │               │
       ┆   ┆           └╶╶╶╶╶╶→ ˂˂Scheduled─ *2      △   △                   │               │
       ┆   ┆                  ExecutorService˃˃      │   │                   │               │
    ┌╴─┘   └─╴╴╴╴╴┐                 △                │   │                   │               │
  FutureTask     ForkJoin           ┆  1.7+     ┌────┘   └──────┐            │               │
                 Task    ←╶╶╶╶╶╶┐   ┆  ForkJoinPool       ThreadPoolExecutor ◇               │
                 △  △           ┆   ┆           │               △            ◇               │
                 │  │           ┆   ┆           │               │            │               │
           ┌─────┘  └────┐      ┆   ┆           ◇               │        Rejected            │
       Recursive    Recursive   └╶╶╶┆╶╶╶╶╶ ForkJoin             │      ExecutionHandler      │
       Action       Task            ┆      WorkerThread         │                            │
                                    ┆                           │                            │
                                     └╴╴╴╴╴╴╴╴ ScheduledThreadPoolExecutor                    │
                                     ┌────────┴────────────────────────┘                     │
                                       └ Preferred to "old" java.util.Timer:                   │
                                       • timer can be sensitive to system clock changes      │
                                       • timer has only one execution thread. Long-running   │
                                         task can delay other tasks. Sch.Thre.PoolEx.        │
                                         can be configured with "N" threads.                 │
                                       • runtime exceptions kill the Timer thread.           │
        CyclicBarrier Semaphore          Sch.Thre.Ex. catches them, allowing to handle       │
 CountDownLatch ┆ Phaser ┆   Exchanger   by overriding 'afterExecute' from ThreadPoolExecutor│
    └───────────┴──┬─┴───┴───────┘       Only Task throwing the exception will be canceled.  │
               TimeUnit enum                                                                 │

  *1: ExecutorService:    managed threads collection available to execute tasks
        ˂T˃ List˂Future˂T˃˃ invokeAll(Collection˂? extends Callable˂T˃˃ tasks[,long timeout, TimeUnit unit])
                            Executes given tasks. when ALL complete (or timeout expires), return result.
                      ˂T˃ T invokeAny(Collection˂? extends Callable˂T˃˃ tasks[,long timeout, TimeUnit unit])
                            Executes given tasks; returns the result of one that completed
                            successfully (without exception), if any did before the given timeout elapsed.
                    boolean isShutdown()  : true if this executor has been shut down.
                    boolean isTerminated(): true if all tasks have completed following shut down.
                       void shutdown()    : Init clean shutdown.
             List˂Runnable˃ shutdownNow() : Attempts non─clean shutdown
                    boolean awaitTermination(long timeout, TimeUnit unit)
                            · Blocks until ALL tasks have completed (after shutdown request)
                              or the timeout occurs, or the current thread is interrupted,
              ˂T˃ Future˂T˃ submit(Callable˂T˃ task): Submit Callable for execution
                  Future˂?˃ submit(Runnable task)   : Submit Runnable for execution
              ˂T˃ Future˂T˃ submit(Runnable task, T result)
    usage Alt 1: use an implementation of the interface (ThreadPoolExecutor, ScheduledThreadPoolExecutor),
                  then instance.execute(runnableInstance) to add a Runnable task to the thread pool.
    usage Alt 2: Use factory methods in the 'Executors' class:
                  Executors.newFixedThreadPool(int numThreads)
                  Executors.newCachedThreadPool() (← unbounded pool, automatic reclamation)
                  Executors.newScheduledThreadPool(int size)
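The `invokeAll()` entry from the table above can be sketched as follows (class name `InvokeAllDemo` and the sum-of-squares task are illustrative):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of ExecutorService.invokeAll(): submit a batch of Callables,
// block until ALL complete, then collect the results in submission order.
public class InvokeAllDemo {
    public static int sumOfSquares(int n) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Callable<Integer>> tasks = new java.util.ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int v = i;
                tasks.add(() -> v * v);          // each Callable returns one square
            }
            int sum = 0;
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                sum += f.get();                  // get() cannot block long: all are done
            }
            return sum;
        } finally {
            pool.shutdown();                     // always release the pool threads
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(4));     // 1+4+9+16 = 30
    }
}
```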

 *2: ScheduledExecutorService:  schedule tasks periodically/after (absolute/relative) delay/period
     ˂V˃ ScheduledFuture˂V˃ schedule(Callable˂V˃ callable, long delay, TimeUnit unit)
         ScheduledFuture˂?˃ schedule(Runnable command    , long delay, TimeUnit unit)
                            · Creates + exec ScheduledFuture, enabled after delay.
         ScheduledFuture˂?˃ scheduleAtFixedRate   (Runnable command, long initDelay,
                                                   long period, TimeUnit unit)
                            schedules after initDelay + n*period (n=0,..)
                            Executions running longer than period overlap.
         ScheduledFuture˂?˃ scheduleWithFixedDelay(Runnable command, long initDelay,
                                                   long delay , TimeUnit unit)
                            Wait until termination. Then wait for 'delay' until next execution.
                            Executions running longer than delay will just "shift/delay" next executions.
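A minimal runnable sketch of the one-shot `schedule(Callable, delay, unit)` overload above (a tiny delay is chosen so the example finishes quickly; names are illustrative):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch: schedule a Callable once, after a relative delay, and
// block on the returned ScheduledFuture for its result.
public class ScheduleDemo {
    public static String delayedHello() throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        try {
            ScheduledFuture<String> f =
                scheduler.schedule(() -> "hello", 50, TimeUnit.MILLISECONDS);
            return f.get();          // blocks ~50ms until the scheduled Callable has run
        } finally {
            scheduler.shutdown();    // release the scheduler thread
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(delayedHello());
    }
}
```
For the periodic variants (`scheduleAtFixedRate`/`scheduleWithFixedDelay`) the same pattern applies, but the returned future never completes normally; it is usually cancelled instead.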

  • Example: (import java.util.concurrent.*;)
    int numWorkers = 10;                                         EXAMPLE 1: Create Pool of Callables
    ExecutorService pool = Executors.newCachedThreadPool();    ← STEP 1) create new pool
    MyCallable workers[]  =  new MyCallable[numWorkers];       ← allocate arrays for Callables and Futures
    Future futures[] = new Future[numWorkers];
    for (int i = 0; i ˂ numWorkers; ++i) {
       workers[i] = new MyCallable(i+1);                       ← Initialize arrays
       futures[i] = pool.submit(workers[i]);
    }
    for (int i = 0; i ˂ numWorkers; ++i) {
          try {
      futures[i].get();                                        ← Wait (blocking) on each future.
          } catch (InterruptedException ex) { ...
          } catch (ExecutionException ex) {   ...  }
    }

                                                                EXAMPLE 2: Create Pool of Workers
    ExecutorService pool =  Executors.newFixedThreadPool(10);   =================================
    MyWorker[] workers =  new MyWorker[numWorkers];
    for (int i = 0; i ˂ numWorkers; ++i)
      pool.execute(new MyWorker(i+1));                        ← Schedule new worker
    pool.shutdown();                                          ← Don't Forget

  *3 ForkJoinPool (1.7+)
     REF: @[]
     • The ForkJoinPool scheduler is similar to "ExecutorService", but designed for
       tasks that split recursively into subtasks (scheduled with a work-stealing algorithm).
     •ºForkº: Task (=="thread") that "Splits itself" into smaller subtasks, executing concurrently.
     •ºJoinº: End children tasks (=="threads") and merge results (if any).

                                   ┌Task04··· CPU1 ·········┐end
                     ┌Task02(fork)º┤º                 ┌────º┴º────┐
                     │             └Task05··· CPU2 ···┘end º^º    │
        Task01(fork)º┤º                                    ºjoin ˃├º Task ...CPU1 ...
                ^    │             ┌Task06··· CPU3 ······┐end ºvº │
                │    └Task03(fork)º┤º                    └────º┬º─┘
                │             ^    └Task07··· CPU4 ···········─┘
                │             │
                │             │
               - There is an overhead in forking and maintaining new threads.
                 Forking makes sense only for long-running tasks with
                 intensive use of CPU.
              - Task01, 02, 03 ºwaitº for subtasks to finish execution.

     •ºCreating new ForkJoinPoolº
        ForkJoinPool BºforkJoinPoolº = new ForkJoinPool(4);  Desired level of parallelism
                                                        └─── (Desired number of threads/CPUs)
      •ºSubmitting tasks to the ForkJoinPool schedulerº is very similar
       to how it is done in the ExecutorService. We can submit:
        - QºRecursiveActionº: task not returning any result.
        - GºRecursiveTask  º: task     returning a   result.

     import java.util.ArrayList;                       │import java.util.ArrayList;
     import java.util.List;                            │import java.util.List;
     import java.util.concurrent.RecursiveAction;      │import java.util.concurrent.RecursiveTask;
     // QºCreating new RecursiveActionº                │// GºCreating new RecursiveTaskº
     public class QºMyRecursiveActionº                 │public class GºMyRecursiveTaskº
     extends QºRecursiveActionº {                      │extends GºRecursiveTaskº˂Long˃ {
       private long workLoad = 0;                      │  private long workLoad = 0;
       public MyRecursiveAction(long workLoad) {       │
           this.workLoad = workLoad;                   │  public MyRecursiveTask(long workLoad) {
       }                                               │      this.workLoad = workLoad;
                                                       │  }
       @Override                                       │
       protected void compute() {                      │  protected Long compute() {
         if(this.workLoad ˂ 16) {                      │    if(this.workLoad ˂ 16) {
            // 16 is a ºTunable Threshold parameterº    │      // 16 is a ºTunable Threshold parameterº
           // Do workload in current thread            │      // Process work in current thread
           return                                      │      return workLoad * 3;
         }                                             │    }
         List˂MyRecursiveAction˃ subtasks =            │    List˂MyRecursiveTask˃ subtasks =
            Arrays.asList                              │       Arrays.asList
            ( new MyRecursiveAction(this.workLoad / 2),│       ( new MyRecursiveTask(this.workLoad / 2),
              new MyRecursiveAction(this.workLoad / 2) │         new MyRecursiveTask(this.workLoad / 2)
            );                                         │       );
         for(RecursiveAction subtask : subtasks)       │    for(MyRecursiveTask subtask : subtasks)
           subtaskOº.fork()º;                          │        subtaskOº.forkº();
           //       ^^^^^^^                            │        //       ^^^^^^^
           //   Oºwork split into subtasks to beº      │        //   Oºwork split into subtasks to beº
           //   Oºscheduled for executionº             │        //   Oºscheduled for executionº
       }                                               │
                                                       │    long result = 0;
     }                                                 │    for(MyRecursiveTask subtask : subtasks) {
                                                       │        result += subtaskOº.joinº();
                                                       │    }
                                                       │    return result;
                                                       │  }
       ºUSAGE:º                                         │ºUSAGE:º
     QºMyRecursiveActionºmyRecursiveAction =           │GºMyRecursiveTaskºmyRecursiveTask =
          new MyRecursiveAction(24);                   │     new MyRecursiveTask(128);
                                                       │  long mergedResult =
     BºforkJoinPoolºOº.invoke(myRecursiveAction);º     │BºforkJoinPoolºOº.invoke(myRecursiveTask)º;
                                                       │  System.out.println("mergedResult = " + mergedResult);
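A runnable variant of the RecursiveTask column above, summing a range of longs instead of a synthetic workload (the threshold value 16 is the same tunable assumption):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Runnable variant of the RecursiveTask pattern: sum a range of
// integers, splitting in two while above a tunable threshold.
public class ForkJoinSum extends RecursiveTask<Long> {
    private final long from, to;
    ForkJoinSum(long from, long to) { this.from = from; this.to = to; }

    @Override
    protected Long compute() {
        if (to - from <= 16) {                       // tunable threshold: compute directly
            long s = 0;
            for (long i = from; i <= to; i++) s += i;
            return s;
        }
        long mid = (from + to) / 2;
        ForkJoinSum left  = new ForkJoinSum(from, mid);
        ForkJoinSum right = new ForkJoinSum(mid + 1, to);
        left.fork();                                 // schedule left half asynchronously
        return right.compute() + left.join();        // compute right half here, then merge
    }

    public static long sum(long from, long to) {
        return new ForkJoinPool(4).invoke(new ForkJoinSum(from, to));
    }

    public static void main(String[] args) {
        System.out.println(sum(1, 100));             // prints 5050
    }
}
```
Computing one half in the current thread (`right.compute()`) instead of forking both halves halves the task-creation overhead, a common ForkJoin idiom.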

     •RºForkJoinPool Detractorsº
Completable Future
• Future:            (Java 5+) Represents an asynchronous computation result.
  CompletableFuture: (Java 8+)
  · Extends Future with methods to combine and handle errors
  · Extends the CompletionStage interface
    · Contract for an asynchronous computation step that
      can be combined with other steps.
  · About 50 different methods for composing/combining/executing async computations.

•       ┌ LOCAL SYSTEM (Under Control) ─────────────────────────┐
        │               ┌·······(RPE Loop)··················┐   │
        │               ↓                                   │   │
        │   ºInput º  Parse    Create      Parse  Create  → ... │
  INPUT ···→ºThreadº:  Data → │Future│  →  Data →│Future│       │
   DATA │    RPEL Loop                                          │
        │                        │                  │           │
        │┌───────────────────────┘                  │           │
        ││ ┌────────────────────────────────────────┘           │
        ││ │ ºI/O   º   External  RºWait   º Handle    |Future| │
        ││ └→ºThreadº:  Request  →RºResponseº→Response→.complete│
        ││                     ↓                                │
        ││   ºI/O   º          ·               ↑                │
        │└──→ºThreadº:         ·               ·                │
                               ·               · Response|RemoteError
                               ·               · |Timeout
                           │REMOTE SYSTEM        │
                           │(Out of control)     │
• A Future that may be explicitly completed (setting its value and status),
  and may be used as a CompletionStage, supporting dependent functions and
  actions that trigger upon its completion.
• When two or more threads attempt to complete, completeExceptionally, or
  cancel a CompletableFuture, only one of them succeeds.

• ºBarriersº (OºallOfº):
  CompletableFuture˂Void˃[] future_list = new CompletableFuture[list.size()];
  int idx = 0;
  log.info("Connecting plugins ...");
  for (Object el : list) {
    final CompletableFuture˂Void˃
      connectFuture = new CompletableFuture˂˃();
    asyncMethod(connectFuture);  // ← async method must eventually call complete()
    future_list[idx++] = connectFuture;
  }
  return CompletableFuture.OºallOfº(future_list);

• Example: Using CompletableFuture as a simple Future (no-arg constructor):
  create a CompletableFuture instance,
  launch some computation in another thread,
  return the Future immediately.

  public Future˂String˃ calculateAsync() throws InterruptedException {
      CompletableFuture˂String˃ completableFuture = new CompletableFuture˂˃(); // *1

      Executors.newCachedThreadPool().submit(() -˃ {
          Thread.sleep(500);                   // ← simulate the computation
          completableFuture.complete("Hello"); // alt.: completableFuture.cancel(false);
          return null;
      });

      return completableFuture;
  }
  *1: when the result of the computation is already known:
      Future˂String˃ result = CompletableFuture.completedFuture("Hello");

  Future˂String˃ completableFuture = calculateAsync();
  String result = completableFuture.get(); // .get() blocks waiting for completion/error
  assertEquals("Hello", result);           //  until the second thread "completes" the future.

• Ex: CompletableFuture with Encapsulated Computation Logic
   (runAsync -˂˂Runnable˃˃-, supplyAsync -˂˂Supplier˃˃-)
   ˂˂Supplier˃˃: generic functional interface with a single method
                 (zero arguments, returns a value)

  CompletableFuture˂Void˃ future
    = CompletableFuture.supplyAsync(/* supplier lambda */ () -˃ "Hello")
  .thenApply(/* "processor" lambda */ s -˃ s + " World") // ← returns CompletableFuture˂String˃
  .thenAccept(/* consumer lambda */
     s -˃ System.out.println("Computation returned: " + s))
  .thenRun(/* Runnable lambda */ () -˃ System.out.println("Computation finished."));

• Combining Futures (monadic design pattern in functional languages)
  CompletableFuture˂String˃ completableFuture
    = CompletableFuture.supplyAsync(() -˃ "Hello")
        .thenCompose(
            s -˃ CompletableFuture.supplyAsync(() -˃ s + " World"));
  assertEquals("Hello World", completableFuture.get());

• Ex: Execute two independent Futures and do something with their results:
  CompletableFuture˂String˃ future = CompletableFuture.supplyAsync(() -˃ "Hello")
      .thenCombine(CompletableFuture.supplyAsync(
        () -˃ " World"), (s1, s2) -˃ s1 + s2);

  assertEquals("Hello World", future.get());

• Ex: Execute two independent Futures and do nothing with the result:
  CompletableFuture˂Void˃ future = CompletableFuture.supplyAsync(() -˃ "Hello")
      .thenAcceptBoth(CompletableFuture.supplyAsync(
       () -˃ " World"), (s1, s2) -˃ log(s1 + s2));

• Ex: Running Multiple Futures in Parallel:
   -  wait for all to execute and process combined results

  CompletableFuture˂String˃ future1
    = CompletableFuture.supplyAsync(() -˃ "Hello");
  CompletableFuture˂String˃ future2
    = CompletableFuture.supplyAsync(() -˃ "Beautiful");
  CompletableFuture˂String˃ future3
    = CompletableFuture.supplyAsync(() -˃ "World");

  CompletableFuture˂Void˃ combinedFuture
    = CompletableFuture.allOf(future1, future2, future3);

  // ...


  String combined = Stream.of(future1, future2, future3)
    .map(CompletableFuture::join)  // ← join(): similar to get(), but throws an unchecked
                                   //   exception if the future fails.
    .collect(Collectors.joining(" "));
  assertEquals("Hello Beautiful World", combined);

• Handling Errors:

  Exceptions thrown inside the async computation can NOT be caught with a
  surrounding try-catch; recover with handle() (or exceptionally()) instead.

  CompletableFuture˂String˃ completableFuture
    = CompletableFuture.˂String˃supplyAsync(() -˃ {
           throw new RuntimeException("Computation error!");
       })
      .handle( (s, t) -˃ s != null ? s : "Hello, Stranger!");
      //   result ┘  └ exception thrown (null if the stage succeeded)
  assertEquals("Hello, Stranger!", completableFuture.get());

  completableFuture.completeExceptionally(          ← Alternatively, complete with an error:
    new RuntimeException("Calculation failed!"));
  completableFuture.get();                          ← throws ExecutionException
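Besides handle(), exceptionally() recovers only on the failure path and is skipped entirely on success; a runnable sketch (class name ErrorRecovery is illustrative):

```java
import java.util.concurrent.CompletableFuture;

public class ErrorRecovery {
    public static void main(String[] args) throws Exception {
        // exceptionally(): only invoked when the stage completed with an exception
        CompletableFuture<String> recovered =
            CompletableFuture.<String>supplyAsync(() -> {
                throw new RuntimeException("Computation error!");
            })
            .exceptionally(t -> "Hello, Stranger!");
        System.out.println(recovered.get());

        // On the success path, exceptionally() is skipped entirely
        CompletableFuture<String> ok =
            CompletableFuture.supplyAsync(() -> "Hello")
                .exceptionally(t -> "never reached");
        System.out.println(ok.get());
    }
}
```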

• Async Methods
  - methods without the Async postfix run the next stage on the thread that
    completed the previous stage (or on the calling thread if it had already
    completed).
  - *Async variants without an Executor argument run the step using the common
    fork/join pool implementation.

  - Ex.: process result of computation with a Function instance
    CompletableFuture completableFuture
      = CompletableFuture.supplyAsync(() -˃ "Hello");

    CompletableFuture˂String˃ future = completableFuture
      .thenApplyAsync(s -˃ s + " World"); // lambda is wrapped into ForkJoinTask instance
    assertEquals("Hello World", future.get());
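To make the executor choice visible, a runnable sketch that names the pool thread via a ThreadFactory and passes that executor to thenApplyAsync (the names my-pool-thread and AsyncVariants are illustrative):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncVariants {
    public static void main(String[] args) throws Exception {
        // Single-thread executor whose thread we can recognize by name
        ExecutorService pool = Executors.newSingleThreadExecutor(
            r -> new Thread(r, "my-pool-thread"));

        CompletableFuture<String> f = CompletableFuture
            .supplyAsync(() -> "Hello")                       // common fork/join pool
            .thenApplyAsync(s -> s + " from "                 // explicit executor
                    + Thread.currentThread().getName(), pool);

        System.out.println(f.get());
        pool.shutdown();
    }
}
```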
Guava ListenableFuture
- Concurrency is a hard problem, but it is significantly simplified by
  working with powerful and simple abstractions. To simplify matters,
  Guava extends the Future interface of the JDK with ListenableFuture.

- """We strongly advise that you always use ListenableFuture instead
  of Future in all of your code, because:
  - Most Futures methods require it.
  - It's easier than changing to ListenableFuture later.
  - Providers of utility methods won't need to provide Future and ListenableFuture
      variants of their methods.

Listenable vs CompletableFutures
          ListenableFuture                           │               CompletableFuture
                                                     │ It is different from ListenableFuture in that it
                                                     │ can be completed from any thread
ListenableFuture listenable = service.submit(...);   │ CompletableFuture completableFuture =
  Futures.addCallback(listenable,                    │     new CompletableFuture();
                      new FutureCallback˂Object˃() { │ completableFuture.whenComplete(new BiConsumer() {
    @Override                                        │   @Override
    public void onSuccess(Object o) {                │   public void accept(Object o, Object o2) {
        //handle on success                          │       //handle complete
    }                                                │   }
                                                     │ }); // complete the task
    @Override                                        │ completableFuture.complete(new Object())
    public void onFailure(Throwable throwable) {     │
       //handle on failure                           │ When a thread calls complete on the task,
    }                                                │ the value received from a call to get() is
  })                                                 │ set with the parameter value if the task is
                                                     │ not already completed.

  ..."CompletableFuture is dangerous because it exposes ºcompleteº methods."
  ..."CompletableFuture would have been good if it extended Future
     and did not expose toCompletableFuture, ... and they could have named
     it something meaningful like ChainableFuture."
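The "can be completed from any thread" point (and why the exposed complete() method is considered dangerous) can be demonstrated with plain JDK code; the class name is illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class CompleteFromAnywhere {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> cf = new CompletableFuture<>();

        // Any thread holding a reference may complete it -- the "dangerous" part:
        new Thread(() -> cf.complete("done")).start();

        System.out.println(cf.get());      // blocks until some thread completes it
        // Later completions are ignored: the first complete() wins
        boolean accepted = cf.complete("too late");
        System.out.println(accepted);      // false
    }
}
```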
DragonWell JDK with Coroutine Support
REF: @[]
- AdoptOpenJDK and Alibaba announced that the Dragonwell JDK will be
  built, tested, and distributed using AdoptOpenJDK's infrastructure.
  ...  Another interesting feature is the Wisp2 coroutine support.
     BºWisp2 maps Java threads to coroutines instead of kernel-level threads:º
        many coroutines can be scheduled on a small number of kernel threads,
        reducing scheduling overhead.

     Wisp2 engine is similar in some respects to the aims of Project Loom
     but (unlike Loom) Bºit works out of the box on existing code by enablingº
   Bºit with these Java arguments:º
     $ java -XX:+UnlockExperimentalVMOptions -XX:+UseWisp2

     I/O-intensive applications, where tasks block on events and are
   then rescheduled, can benefit from the coroutine support. On the other
   hand, RºCPU-intensive applications will probably not benefit from it.º

• See also reddit thread "Loom vs DragonWell JVM":

Loom Project: Lightweight threads
@[] @[]
-ºMISSION:º make concurrency simple(r) again!
- Threads, provided by Java from its first day, are a convenient concurrency
  construct Rºputting aside the separate question of communication amongº
Rºthreads. They are being supplanted by less convenient abstractions becauseº
Rºtheir current implementation as OS kernel threads is insufficient forº
Rºmeeting modern demands, and wasteful in computing resources that areº
Rºparticularly valuable in the cloud.º
- Project Loom will introduce BºFIBERS: lightweight, JVM-managed, efficientº
Bºthreadsº.
- A fiber is composed of:
  -Gº1 schedulerº   : already in place for Java through the excellent
                      ºForkJoinPoolº scheduler.
  -Rº1 continuationº: to be implemented in Loom.
- The overhead of fibers is higher but still very low, even when compared to
  async and monadic APIs, which have the disadvantage of introducing a
  cumbersome, infectious programming style and don't interoperate with
  imperative control-flow constructs built into the language.
- So aren't fibers generators or async/awaits? No: as we have seen, fibers
  are real threads, namely a continuation plus a scheduler. Generators and
  async/awaits are implemented with continuations (often a more limited form
  of continuation called stackless, which can only capture a single stack
  frame), but those continuations don't have a scheduler and are therefore
  not threads.

RELATED: @[] @[]
- Ron Pressler discusses and compares the various techniques of dealing with
  concurrency and IO in both:
  - pure functional (monads, affine types)
  - imperative (threads, continuations, monads, async/await)
  and shows why delimited continuations are a great fit for the imperative
  style.
- Bio: Ron Pressler is the technical lead for Project Loom, which aims to add
  delimited continuations, fibers and tail-calls to the JVM.

Quasar(Fibers)
• fast threads for Java and Kotlin @[]
  NOTE: to be superseded by Project Loom?
Extracted from @[] My understanding is that Ron is currently busy working for/with Oracle on project Loom which should bring "native" Fiber/lightweight continuation support directly into JVM without the need of auxiliary library like Quasar.
External Links
Spring DONT's!!!
- If an interface has a single implementation and is going
  to be instantiated just once in a single line of code,
  do NOT use Spring dependency injection.
  - All static compiler safety measures are lost, translating
    into dangerous runtime checks.
  - Use injection only when you intend to allow
    complex interchangeable implementations or when Spring Boot
    genuinely simplifies the code, never when the code gets more complex.
  - Ex: a utility class with static methods is preferred to
    an injected Spring bean providing the same functionality.
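A sketch of the "plain static utility instead of an injected bean" advice; TextUtils and slug() are hypothetical names:

```java
// Hypothetical TextUtils: a plain static utility -- no container, no injection,
// compile-time checked at every call site.
public final class TextUtils {
    private TextUtils() {}                       // not instantiable

    public static String slug(String s) {
        return s.trim().toLowerCase().replace(' ', '-');
    }

    public static void main(String[] args) {
        System.out.println(slug("  Hello World ")); // hello-world
    }
}
```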

Annotations Quick Sheet
@[]  ← TODO: Testing annotations,


    ANNOTATION      DESCRIPTION                                  LEVEL
                  |                                            |C|F|C|M|P
                  |                                            |L|I|O|E|A
                  |                                            |A|E|N|T|R
                  |                                            |S|L|S|H|A
                  |                                            |S|D|T|O|M
                  |                                            | | |R|D|S
                  |                                            | | |U| |
                  |                                            | | |C| |
   º@Autowired    º| "autowired by type", used to inject an     |  x x x
                    | object dependency implicitly.              |
                    | - No need to be public.                    |
   º@Configurable º|inject properties of domain objects.        |x
                   |Types whose properties are injected without |
                   |being instantiated by Spring                |
   º@Qualifier    º| used when more than one bean of the same   |  x x x
                    | type exists, to wire only one of them      |
                    | with a property, providing greater control |
                    | over the dependency injection process.     |
                   | - can be used with @Autowired annotation.  |
   º@Required     º|mark mandatory class members.               |  x x x
   º@ComponentScanº|Trigger scanning of packages for            |x
                    |@Configuration classes.                     |
   º@Bean         º|tag a method as aºbean producerºwhose result|      x
                    |will be managed by the Spring container.    |
   º@Lazy         º| Init bean/component on demand              |x     x
   º@Value        º|used to inject values into a bean's         |  x x x
                   |attribute from a property file, indicating  |
                   |a default value expression.                 |
   º@Import       º|                                            |
   º@DependsOn    º|                                            |

BºSPRING FRAMEWORK ANNOTATIONSº
    ANNOTATION      DESCRIPTION                                  LEVEL
                  |                                            |C|F|C|M|P
                  |                                            |L|I|O|E|A
                  |                                            |A|E|N|T|R
                  |                                            |S|L|S|H|A
                  |                                            |S|D|T|O|M
                  |                                            | | |R|D|S
                  |                                            | | |U| |
                  |                                            | | |C| |
   ------------------------------------------------------------
   º@EnableAutoConfigurationº
                  |                                            |
   ------------------------------------------------------------
   º@Controllerº  |Allows detection of component classes in    |
                  |the class path automatically and registers  |
                  |bean definitions for those classes          |
                  |automatically.                              |
   ------------------------------------------------------------
   º@RestControllerº
                  |tag controller as RESTful (behaviour) that  |
                  |will behave as resources.                   |
   ------------------------------------------------------------
   º@ResponseBodyº|automatically convert returned object to a  |
                  |response body.                              |
   ------------------------------------------------------------
   º@RequestMappingº
                  |map request URIs to handler class/method    |
   ------------------------------------------------------------
   º@RequestParamº|bind req.param to method param in controller|
   ------------------------------------------------------------
   º@PathVariableº|bind placeholder from URI to method param   |
IoC Summary
  - org.springframework.beans
    - @[]
    - Objects managed by Spring IoC
    - created with the configuration metadata.
    - Represented as ºBeanDefinition objectsº containing:
      - essentially "a recipe for creating one or more objects".
      - package-qualified class name: typically the actual implementation class.
      - behavioral configuration elements: scope, lifecycle callbacks,...)
      - References to other dependencies (or "collaborators")
      - Custom settings (setters).

    BºBest Patternsº
    - Bean metadata need to be registered as early as possible.
    RºWARN:º registration of new beans at runtime (live access to
      factory) is not officially supported and may lead to concurrent
      access exceptions  and/or inconsistent state in the bean container.

  - org.springframework.beans.factory.BeanFactory (Interface)
    - provides an advanced config. mechanism for "any" type of object.
    └ org.springframework.context.ApplicationContext (Interface)
      - extends BeanFactory with "Enterprise Features"
      - represents the IoC container
      - easier integration with Spring's AOP features
      - message resource handling (for use in i18n)
      - event publication
      - application-layer specific contexts (such as the WebApplicationContext)
      └ ClassPathXmlApplicationContext
      · ºApplicationContextº context =
      ·      new ClassPathXmlApplicationContext(   // Alt 1:
      ·       "services.xml", "daos.xml");         // ← config metadata may be
      ·                                            //   split across several files
      └ FileSystemApplicationContext
      └ ...

  MyBeanClass myBean = contextº
         .getBeanº("idBeanDef", beanClass.class);

BºSpring History:º
  Spring 1.0+ → Spring 2.5+      → Spring 3.0+
  XML           Annotation-based   Java-based config

BºBean Metadataº
  -ºpackage-qualified class nameº
  -ºname º: (unique) "id" or (aliased) "name" in xml.
  -ºscopeº:
    -ºsingletonº: one shared instance per Spring IoC container (default)
    -ºprototypeº: one bean definition mapped to any number of object instances.
    - In web-aware ApplicationContext next scopes are available:
      - ºrequest    º: single bean for lifecycle of HTTP request
      - ºsession    º: single bean for lifecycle of HTTP Session.
      - ºapplicationº: Single bean for lifecycle of ServletContext.
      - ºwebsocketº  : single bean for lifecycle of WebSocket.

  -ºconstructor argsº: (Prefered to properties -setters-):
       ˂bean id="id01" class="x.y.Class01"/˃
       ˂bean id="id02" class="x.y.Class02"/˃

       ˂bean id="instance03" class="x.y.Class03"˃
         ˂constructor-arg ref="id01"/˃                   ← By class
         ˂constructor-arg type="int" value="3320"/˃      ← by type
         ˂constructor-arg name="year" value="2020"/˃     ← by param name
         ˂constructor-arg index="4" value="Hello World"/˃← by param index

         Note:- Bº˂idref˃ is preferred to property with value attribute (fails faster)º
              - bean ºdepends-onº attribute can force initialization (and destruction) order

    - Let Spring resolve dependencies("collaborators") of a bean
      by inspecting the contents of the ApplicationContext.
      ("ref") autowire values:
      - no     : ref used, not recommended for complex     configs
      - byName : IoC looks for bean with matching name
      - byType : (in setter or constructor args)
                 autowired if exactly one bean of the property type exists in container.
                 throws error if more than one found
               RºWARN:º set to null if zero found.

      ☞ Note: "default-autowire-candidates" attribute in beans tag can limit autowire
              candidate globally with a CSV list of candidates: (*Repository,*Security,*Logging)

  -ºlazy-initº : false: force resolution and instantiation at startup (default, recommended)
                 true : use for "big objects" to save memory.

- While weird and not recommended, external (to the container) objects can
  be registered like:
  BeanFactory bFactImpl = context.getBeanFactory();
                          // ← returns the DefaultListableBeanFactory impl.

BºMethod injectionº
  - Suppose singleton A needs to use ºnon-singletonº bean B
   ºon each method invocation on Aº

  - Alternative A: (RºDiscouraged, tied to Spring internalsº)
    beans A implements ˂˂ApplicationContextAware˃˃.
    getBean("B") to container requesting a
    (typically new) bean B instance.

  - Alternative B: Method Injection
    the container overrides a lookup method of managed bean A,
    returning a (typically fresh) B on each call.
    - class and method cannot be final
    - lookup methods won't work with factory methods,
      and in particular not with @Bean methods in configuration
      classes, since the container is not in charge of creating the instance
      in that case and therefore cannot create a runtime-generated subclass
      on the fly.
  - org.springframework.lang package:
    - º@NonNullº       ← forces param|return value|field to be NON-null
    - º@Nullableº      ← allows param|return value|field to be     null
    - º@NonNullApiº    ← forces param|return value       to be NON-null at package level
    - º@NonNullFieldsº ← forces                    field to be NON-null at package level

  - Null and ºempty stringº values rules
    - empty arguments for properties,... convert to "" empty String.
    - ˂null/˃ element handles null values. Ex
      ˂property name="email"˃ ˂null/˃ ˂/property˃ ← email = null
      ˂property name="email"˃         ˂/property˃ ← email = ""
Spring+JPA+JWT summary
BºSwagger (OpenAPI) º [config] {{
- file: com/myComp/openApi/  {{
    package com.myComp.swagger;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    import springfox.documentation.service.ApiInfo;
    import springfox.documentation.service.ApiKey;
    import springfox.documentation.service.Contact;
    import springfox.documentation.spi.DocumentationType;
    import springfox.documentation.spring.web.plugins.Docket;
    import springfox.documentation.swagger2.annotations.EnableSwagger2;

  Bº@Configurationº                              ← [configuration] spring core:
                                                   mark class as defining Spring bean
                                                   so that Spring container process
                                                   it generating app.Beans.
  Bº@EnableSwagger2º                             ← [openapi]
    public class SwaggerConfig {
      @Bean                                      // ← expose the Docket as a Spring bean
      public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)

      private ApiInfo apiEndPointsInfo() {
        return new ApiInfoBuilder()
          .contact(new Contact(SWAGGER_CONTACT_NAME, SWAGGER_CONTACT_URL, ""))

      private ApiKey apiKey() {
        return new ApiKey(AUTHKEY, AUTHORIZATION, HEADER);
BºJPA Configº


- file: com/myComp/jpa/    [persistence][jpa]
  package com.myComp.jpa;

  import java.sql.Timestamp;                         // ← SQL         friendly type
  import java.time.LocalDateTime;                    // ← Application friendly type

  import javax.persistence.AttributeConverter;
  import javax.persistence.Converter;

  @Converter(autoApply = true)
  public class LocalDateTimeAttributeConverter       // ← Fix impedance problems
  implements AttributeConverter                      //   DDBB types ←→ JAVA types
             ˂LocalDateTime, Timestamp˃ {

      public Timestamp convertToDatabaseColumn(      //   Java type → DDBB-column-type
        LocalDateTime locDateTime) {
          return Timestamp.valueOf(locDateTime);

      public LocalDateTime convertToEntityAttribute( //   Java type ← DDBB-column-type
        Timestamp sqlTimestamp) {
          return sqlTimestamp.toLocalDateTime();
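The conversion logic above is plain JDK and can be exercised without JPA; a round-trip sketch (class name is illustrative):

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class ConverterRoundTrip {
    public static void main(String[] args) {
        LocalDateTime appValue = LocalDateTime.of(2020, 1, 31, 12, 30, 0);

        Timestamp dbValue = Timestamp.valueOf(appValue);     // Java type -> DB column type
        LocalDateTime back = dbValue.toLocalDateTime();      // DB column type -> Java type

        if (!appValue.equals(back)) throw new AssertionError("lossy conversion");
        System.out.println(back); // 2020-01-31T12:30
    }
}
```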

- Spring Data schema summary:
  ┌─ SPRING BOOT APPLICATION ───────────────────────────────────────────────┐
  ·                            º┌─ SPRING DATA ───────────────────────────┐º·
  · ┌───────┐                  º│┌───────────┐                    ┌──────┐│º·
  · │Service ·· gets access to·º·→ Repository| ··· manages  ······→Entity|│º·
  · └───────┘    data using    º│└───────────┘    operations on   └──────┘│º·
  ·                            º└─────────────┬───────────────────────────┘º·
  ·                                           v                             ·
  ·                                  gets database                          ·
  ·                                connectivity  from                       ·
  ·                                           │                             ·
  ·                                           v                             ·
  ·                                    ┌────────────┐                       ·  ┌──────────┐
  ·                                    │ DDBB driver··· defines ··············→│ Database │
  ·                                    └────────────┘  integration          ·  └──────────┘
  ·                                                    With                 ·
  └──SPRING BOOT APPLICATION ───────────────────────────────────────────────┘

- file: com/myComp/jpa/                    [persistence][jpa]
  import org.springframework.stereotype.Repository;
                                            //   @Repository: Spring-Data abstraction for data storage+retrieval,
                                            //   independent of the source (sql/nosql ddbbs, kafka, redis, message-queues, ...)
                                            //   Used to isolate the Domain Layer (business logic) from persistence internals.
Bº@Repositoryº
  public interface Entity1Repository
  extends JpaRepository˂Entity1, Long˃{     //   ← JpaRepository extends Repository with SQL-like DB º*1º
                                            //     For standard CRUD operations CrudRepository could have been used.
                                            //     (JpaRepository offers a more specialized one with extra features) ← [TODO]
                     // └────────────┴─·········  Entity1 specifies the type-of-data, Long specifies the type-of-"@Id"
                                            //     When using R2DBC (reactive alternative to JDBC), ReactiveCrudRepository
                                            //     will be used (Note: R2DBC looks not to be compatible with JPA annotations?)

    @Query(value =                          //   ← Custom query.
          " SELECT en1.* "                  //
        + " FROM entity1 en1 "              //
        +   " JOIN entity2 en2 ON en2.id = en1.entity2_id "
        +   " JOIN entity3 en3 ON en3.id = en2.entity3_id "
        + " WHERE en2.column2 = :col2Value "
        + " AND en3.id = :entity3Id ", nativeQuery = true)
  OºEntity1 query1(Long entity3Id, String col2Value);º   // ← Developer's responsibility is to define the interface.

    @Query(value =                          //   ← Custom query.
          " SELECT en2.* "
        + " FROM entity2 en2 "
        + " JOIN entity3 en3 ON en2.entity3_id = en3.id "
        + " WHERE en3.col5 = :column5Value",
        nativeQuery = true)
  Oºpublic List˂Entity2˃ linkOrganization(String column5Value);º   // ←···┘

    void deleteById(Long id);               // ← standard nomenclature. Autogenerated º*2º.
                                                 standard CRUD. transactional by default.

                                            //  ºSpring-framework declarative transaction management:º
                                            //   for the non-CRUD-and-mutating method we must indicate whether
    @Transactional                          // ← it should be part of a transaction (single-unit-of-work). º*3º
    void deleteByColumn2(LocalDateTime id); // ← standard nomenclature. Autogenerated º*2º.


 *º1º: Maven / gradle will contain dependencies similar to
    GROUP_ID                  ARTIFACT_ID
    org.springframework.boot  spring-boot-starter-data-jpa    Compile/implementation dependency
    org.postgresql            postgresql                      runtime dependency (provided by server,...)

    Spring-data auto-generates the code for standard cases by justºusing standard nomenclatureºfor methods:

      BUILDING BLOCK      |[TODO]: Add detailed example/s
      =================== | ==================================
      · Action            | find, exists, delete, count
      · Limit             | One, All, First10
      · By                | -
      · Property expr.    | findByIsbnOrTitle,  findByIsbnAndTitle
      · Comparison        | findByTitleContaining, findByTitleEndingWith,
                          | findByDateLessThan
      · Ordering operator | orderByTitleAsc

 *º3º: @Transactional (optional) attributes can be:
  │ • rollbackFor,         : By default rollback on RuntimeException|Error.            │
  │   rollbackForClassName   To rollback also on CheckedExceptions do something like   │
  │                         @Transactional (                                           │
  │                            ...                                                     │
  │                            rollbackForClassName={"FileNotFoundException" ,"..."}   │
  │                                     rollbackFor={ FileNotFoundException.class,... }│
  │                          )                                                         │
  │                                                                                    │
  │ • noRollbackFor,       : Skip rollback / ignore exceptions indicated               │
  │   noRollbackForClassName                                                           │
  │                                                                                    │
  │ • readOnly             : true | *false . true triggers (Spring TX manager)         │
  │                          optimizations  [performance]                              │
  │ • timeout              : defaults to underlying ddbb timeout.                      │
  │ • ...                                                                              │
  │                                                                                    │
  │                                                                                    │
  │                                                                                    │
  │ • isolation            : Defaults to default of underlying database:               │
  │                                                                                    │
  │   • Common problems in (physically) distributed databases and/or central           │
  │     databases with parallel updates / reads by different clients where we          │
  │     can NOT warrant that lecture of state by a client will see the latest          │
  │     update by a parallel update TX.                                                │
  │                                                                                    │
  │               (time advances up → down)                                            │
  │                                                                                    │
  │         DIRTY        │  NON-REPEATABLE  │  PHANTOM                                 │
  │         READ         │    READ          │   READ                                   │
  │     ==============   │ ==============   │ ====================================     │
  │                      │                  │                                          │
  │     TX 1    TX 2     │   TX 1   TX 2    │ TX 1               TX 2                  │
  │     ·       ·        │   ·      ·       │ ·                  ·                     │
  │     read    ·        │  •read   ·       │ ·                RºSELECT * FROM T1º     │
  │     x=5     ·        │   x=5    ·       │ ·                Rº· WHERE num ˃ 10º     │
  │     write   ·        │   ·   Rº•read    │•INSERT INTO T1(num)·                     │
  │     x=7     ·        │   ·      x=5     │ · VALUES(20);      ·                     │
  │     ·    Rº•readº    │  •write  ·       │•COMMIT;            ·                     │
  │     ·    Rº x=7 º    │   x=7    ·       │                  RºSELECT * FROM T1º     │
  │     roll-   ·        │   ·   Rº•read    │                  Rº  WHERE num ˃ 10º     │
  │     back    ·        │          x=7     │                                          │
  │                      ·                  ·                                          │
  │                      ·                  ·                      Isolation levels    │
  │     Read Uncommitted │ Read Uncommitted │  Read Uncommitted  ┐ where corresponding │
  │                      │ Read   Committed │  Read   Committed  ├ read error may      │
  │                      │                  │  Repeatable read   ┘ (randomly) occur    │
  │     Only the (very slow!!!) Serializable isolation level is 100% safe.             │
  │     e.g: @Transactional (isolation=Isolation.READ_COMMITTED)                       │
  │                                                                                    │
  │                                                                                    │
  │ • propagation: define how (java) business methods should place in the TX.          │
  │   · REQUIRED      : Reuse existing TX or create new. (Used for "inner" methods)    │
  │   · REQUIRES_NEW  : Always create new.                                             │
  │                     rollbacks will not be propagated to any calling methods with   │
  │                     another TX in course.                                          │
  │   · SUPPORTS      : execute within existing TX if any, otherwise  without new TX.  │
  │   · MANDATORY     : Fail if no existing TX is in place. Method is always part of   │
  │                     any other existing TX flow.                                    │
  │   · NOT_SUPPORTED : "Pause" any existing transaction and execute inner code.       │
  │                     (maybe a method for which we can not deterministically know    │
  │                      the output).                                                  │
  │   · NEVER         : Fail if an active TX exists.                                   │
  │   · NESTED        : If current TX exists, executes within nested transaction       │
  │                      otherwise behave like REQUIRED.                               │

  application.yml will be similar to:
    server:
      port: 8001
      shutdown: graceful
    spring:
      datasource:
        username: admin
        password: admin
        url: jdbc:postgresql://localhost:5432/ddbb1
        hikari:                   ← Spring  Boot  uses  HikariCP  for connection pooling  [performance]
          connection-timeout: 5000  for details on connection-pool tuning visit:
          maximum-pool-size: 20   @[]
          pool-name: app01-pool
      jpa:
        hibernate.ddl-auto:Qºcreate-dropº ← Just forºdev.purposesº. Forºproduction environmentsº
                                           ºthe database schema is kept versionedº("a la git") with
                                            some tool like Flyway or Liquibase, and the Spring Boot
                                            setting is changed to: hibernate.ddl-auto:Qºvalidateº
                                            At start-up Spring Data will make sure that the internal
                                            JPA models match the externally defined SQL schema.

- file: com/myComp/jpa/                     [persistence][jpa]
  import javax.persistence.*;
  import com.fasterxml.jackson.annotation.JsonIgnore;

  ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─
  @Table(name = "entity1")             // ← By default JPA maps the class name
  public class Entity1                 //   to the table name; use @Table to override.
  @EntityListeners                     // ← Mark class to listen for: º*1º
    (AuditingEntityListener.class)     //   ← Specify a callback listener to process event
  implements Serializable {            //     data (timestamps, "who did it",
    private static final                            num. of modifications, ...).
       long serialVersionUID = 1L;
    public Entity1() { }               // ← Empty constructor required by JPA

    // ---- id ----
    @JsonIgnore                        // ← Class can also be used for Json
    @Id                                // ← @Id required by JPA Entities
    @GeneratedValue(strategy =
      GenerationType.IDENTITY)         // ← (e.g.) let the DB auto-increment the id
    @Column(name="id", length=20)
    private Long id;

    public void setId(Long id) { this.id = id; }
    public Long getId() { return id; }

   º@Column(name = "column2")º
    private LocalDateTime column2;

    @ManyToOne(fetch = FetchType.LAZY)   //º ---- entity2 ---- º
    @JoinColumn(
      name = "entity2_id",
      referencedColumnName = "id",
      insertable = false,
      updatable = false)
    private Entity2 entity2;

    @OneToMany                           //º ---- entity3 ---- º
    @JoinColumn(
      name = "entity3_id",
      referencedColumnName = "id")
    private List˂Entity3˃ entity3List;

    public List˂Entity3˃ getEntity3List() { return entity3List; }
    public void setEntity3List(List˂Entity3˃ _entity3List) {
      this.entity3List = _entity3List;

    public void
    addEntity3(Entity3 _entity3) {
      getEntity3List()                   // ← WARN: get force load from DBs
        .add(_entity3);                  //         (vs entity3List)

    public void
    removeEntity3(Entity3 _entity3) {
      getEntity3List()                   // ← WARN: get force load from DBs
        .remove(_entity3);               //  (vs entity3List)

    @OneToMany(                          //º ---- entity4 ---- º
      fetch = FetchType.LAZY,
      cascade = CascadeType.ALL)         // ← ALL | PERSIST | MERGE | REMOVE | REFRESH | DETACH
    private List˂Entity4˃ entity4List;

    @Formula(value =                     //º ---- entity5 ---- º
          "(SELECT COUNT(1) "            // ← Read-only value computed by the DB
        + "FROM entity5 en5 "            //   at fetch time.
        +   "JOIN entity1 en1 ON = en5.entity1_id "
        + "WHERE = id AND en5.bCondition = 1)")
    private Long entity5List;

    @Transient                           // ← Do NOT persist.
    public Long
    getEntity5List() {
      return entity5List;
    }

    @Transient                           // ← Do NOT persist.
    public void
    setEntity5List(Long entity5List) {
       this.entity5List = entity5List;
    }

    @Override                            // ← Override BOTH when needed to avoid  [troubleshooting]
    publicºint hashCode()º{              //   hash collisions (equals/hashCode contract:
      return Objects.hash( ...);         //   equal objects must share the same hash)
    }

    @Override                            // ← Override when needed
    publicºboolean equals(Object obj)º{
      if (this == obj) { return true; }
      if (!(obj instanceof Entity1)) { return false; }
      Entity1 that = (Entity1) obj;
      return Objects.equals(column1, that.column1)
          && Objects.equals(column2, that.column2)
          && ... ;
    }

    @CreatedDate private Long createdDate;// JPA Audit: º*1º
    @Version      private int version;    // JPA Audit: º*1º

  º*1º: Toºenable JPA Auditingº it must also be enabled in the Spring config like:

   ºpackage com.myComp.jpa;º
    import org.springframework.context.annotationº.Configurationº;
    @Configuration
   º@EnableJpaAuditingº         //  ← Dump create/update/delete JPA events  [debug][jpa]
    public class JpaConfig {}   //    for all persistent entities

    When database auditing is enabled, the next
    annotations can be used onºentity fieldsº to capture audit information.

      ANNOTATION          SET AT
      ==========          ================
    · @CreatedBy          entity creation
    · @CreatedDate        entity creation
    · @LastModifiedBy     persist operation
    · @LastModifiedDate   persist operation
    · @Version            updated at every op.,
                          starting at 0.
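  @CreatedBy/@LastModifiedBy additionally need anºAuditorAwareºbean telling
  Spring Data "who" the current user is. A hedged sketch extending the JpaConfig
  above, resolving the auditor from Spring Security (names illustrative):

```java
package com.myComp.jpa;

import java.util.Optional;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.domain.AuditorAware;
import org.springframework.data.jpa.repository.config.EnableJpaAuditing;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

@Configuration
@EnableJpaAuditing
public class JpaConfig {

  @Bean
  public AuditorAware<String> auditorProvider() {   // ← resolves the "who" stored in
    return () -> Optional.ofNullable(               //   @CreatedBy/@LastModifiedBy fields
        SecurityContextHolder.getContext().getAuthentication())
      .map(Authentication::getName);
  }
}
```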

BºJWT (OAuth2)Support:º {{{                                    [aaa], [oauth], [cryptography]
  public class OAuth2Const {
      static final String
          HEADER_AUTH_KEY     = "Authorization",
          TOKEN_BEARER_PREFIX = "Bearer ",
          AUTHKEY             = "authkey",
          AUTHORIZATION       = "Authorization",
          HEADER              = "header",
          BEARER              = "Bearer ",
          LOGIN_URL           = "/api/v1/user/login";

      static final long
          MILISECS_TOKEN_EXPIRATION = 60*60*4*1000;           // ← 4 hours
  }

- com/myComp/security/


  import io.jsonwebtoken.Jwts;

  public class JWTAuthorizationFilter
  extends BasicAuthenticationFilter {                                      ← Implemented as Filter
    public JWTAuthorizationFilter(AuthenticationManager authManager) {
      super(authManager);
    }

    protected void doFilterInternal(
        javax.servlet.http.HttpServletRequest req,
        javax.servlet.http.HttpServletResponse res,
        javax.servlet.FilterChain chain)
        throws IOException, javax.servlet.ServletException {
      String header = req.getHeader(HEADER_AUTH_KEY);
      if (header == null || !header.startsWith(TOKEN_BEARER_PREFIX)) {
        chain.doFilter(req, res);
        return;                                                           ← skip auth, let next Filters decide
      }
      final UsernamePasswordAuthenticationToken
          authentication = _getAuth(req);
      SecurityContextHolder.getContext()
          .setAuthentication(authentication);                             ← publish auth for this request
      chain.doFilter(req, res);
    }

    private UsernamePasswordAuthenticationToken
        _getAuth(javax.servlet.http.HttpServletRequest request) {
      final String token = request.getHeader(HEADER_AUTH_KEY);
      if (token == null) { return null; }                                 ← Do not throw to allow next Filters
      String user = Jwts.parser()
            .setSigningKey(DatatypeConverter
              .parseBase64Binary("32bytes/64hex dig.secret key"))
            .parseClaimsJws(token.replace(TOKEN_BEARER_PREFIX, ""))
            .getBody().getSubject();
      if (user == null) { return null; }                                  ← Do not throw to allow next Filters
      return new UsernamePasswordAuthenticationToken
           (user, null, new ArrayList˂˃());
    }
  }

- file: com/myComp/security/

  import org.springframework.beans.factory.annotation.Autowired;
  import org.springframework.context.annotation.Bean;
  import org.springframework.context.annotation.Configuration;
  import org.springframework.http.HttpMethod;
  import org.springframework.web.cors.CorsConfiguration;
  import org.springframework.web.cors.CorsConfigurationSource;
  import org.springframework.web.cors.UrlBasedCorsConfigurationSource;

  @Configuration                                 ← Mark as Spring Conf. class [configuration]
  public class WebSecurity
  extends WebSecurityConfigurerAdapter {         ← [AAA]

    @Autowired private
    UserDetailsService userDetailsService;

    @Bean
    public BCryptPasswordEncoder bCryptPasswordEncoder() {
      return new BCryptPasswordEncoder();
    }

    @Override protected void
    configure(HttpSecurity httpSecurity) throws Exception {
      if (true) {  // example conf. 1
        httpSecurity.cors().and().csrf().disable()
          .authorizeRequests()
          .antMatchers(HttpMethod.POST, LOGIN_URL).permitAll()
          .antMatchers(HttpMethod.POST, CREATE_URL).permitAll()
          .anyRequest().authenticated()
          .and().addFilter(
              new JWTAuthorizationFilter(
                  authenticationManager()
              ) );
      }
      if (false) { // example conf. 2
        httpSecurity.authorizeRequests()
              .antMatchers(HttpMethod.GET   ,"/api/v1/service1").permitAll()
              .antMatchers(HttpMethod.POST  ,"/api/v1/service1").hasRole("ADMIN")
              .antMatchers(HttpMethod.PUT   ,"/api/v1/service1").hasRole("ADMIN");
      }
    }

    @Override public void
    configure(AuthenticationManagerBuilder auth) throws Exception {
      auth.userDetailsService(userDetailsService)
          .passwordEncoder(bCryptPasswordEncoder());                      ← Algorithm used for passwords
    }

    @Bean
    CorsConfigurationSource corsConfigurationSource() {                   ← Cross-Origin Resource Sharing
      final UrlBasedCorsConfigurationSource                                 (CORS) SETUP
        source = new UrlBasedCorsConfigurationSource();
      source.registerCorsConfiguration(
            "/**",                                                        ← Any source
             new CorsConfiguration()
                  .applyPermitDefaultValues() );
      return source;
    }
  }


BºAuthentication, Authorization and Access (AAA)º
- file: com/myComp/security/                               [aaa][oauth]
  public interface AAAService {
    public String getUserByToken(String token);
    public String createTokenForUsername(String userName);

    Boolean checkLoginOrThrow(String username, String password);
    AAAUserEntity findByUsernameOrThrow(String username);
  }

- file: com/myComp/security/
  public class AAAServiceImpl implements AAAService {
    private Logger logger = LoggerFactory.getLogger(this.getClass());
    @Autowired private AAAUserRepository userRepository;

    @Value("${some.config.param}")                                // ← Some config param injected by Spring [configuration]
    private String someConfigParam;

    public String getUserByToken(String token) {                  // ← Used by different controllers to
      return Jwts.parser()                                        //   fetch user from Header token
            .setSigningKey(DatatypeConverter.parseBase64Binary(
                "32bytes/64hex dig.secret key"))
            .parseClaimsJws(
                token.replace(TOKEN_BEARER_PREFIX, ""))
            .getBody().getSubject();
    }
    public String createTokenForUsername(String userName) {       // ← Used to create User session JWT token
      final SignatureAlgorithm                                    //   upon successful login
         signatureAlgorithm = SignatureAlgorithm.HS256;
      final byte[] apiKeySecretBytes = DatatypeConverter
           .parseBase64Binary("32bytes/64hex dig.secret key");
      final Key signingKey = new SecretKeySpec(
             apiKeySecretBytes,
             signatureAlgorithm.getJcaName() );
      return BEARER +
          Jwts.builder().setIssuedAt(new Date())
              .setSubject(userName)
              .setExpiration(
                 new Date( System.currentTimeMillis()
                         + MILISECS_TOKEN_EXPIRATION) )
              .signWith( signingKey,                              // ← Setup priv.key for JWT signatures
                         signatureAlgorithm )
              .compact();
    }
    public Boolean checkLoginOrThrow(String userName, String password) {
      // TODO:(0) send hash of user+pass?
      if (userRepository.findByLogin(userName, password) != 1) {
         throw new CustomSecurityException(...);
      }
      return true;
    }
    public AAAUserEntity findByUsernameOrThrow(String username) {
      try {
        return userRepository.findByUsername(username);
      } catch (Exception e) {
        throw new CustomSecurityException(...);
      }
    }
  }
- file: com/myComp/security/ [aaa]
    @RestController
    @RequestMapping(value = "/api/v1/AAA")
    @Api(tags = "aaa,auditing,...")
    public class AAAController {
      final Logger logger = LoggerFactory.getLogger(this.getClass());

      @Autowired private AAAService aaaService;

      @ApiOperation(value = "Login with an user")
      @PostMapping( value = "/login", produces = "application/json" )
      public ResponseEntity˂String /*(Token)*/˃
        login(@RequestBody UserPassDTO login) {
        aaaService.checkLoginOrThrow(login.getUsername(), login.getPassword());"login success for {}", login.getUsername());
        final String token = aaaService.createTokenForUsername(login.getUsername());           // [oauth] Create token upon successful login
        return new ResponseEntity˂˃(token, HttpStatus.OK);
      }

      @ApiOperation(value = "Get User")
      @GetMapping(value = "/getDetail", produces = "application/json")
      public ResponseEntity˂AAAUserEntity˃
        getUserDetail(@RequestParam(value = "username") String username) {
        AAAUserEntity user = aaaService.findByUsernameOrThrow(username);
        return new ResponseEntity˂˃(user, HttpStatus.OK);
      }
    }
- file: com/myComp/                                      [configuration][devops]
BºMAIN (entry point to Spring Boot app)º

  package com.myComp;

  import org.springframework.boot.SpringApplication;
  import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
  import org.springframework.boot.autoconfigure.SpringBootApplication;
  import org.springframework.context.annotation.ComponentScan;
  import org.springframework.context.annotation.Configuration;

  @SpringBootApplication                     //  ← main class for SpringB. project.
                                             //    (must be present in base path)
                                             //    shortcut for
                                             //         @Configuration (mark class as a src of beans definitions)
                                             //       + @ComponentScan (enable component scanning to find+register
                                             //                         beans in the Spring context)
                                             //       + @EnableAutoConfiguration : auto-configuration
                                             //            is triggered by several conditions such as:
                                             //            - presence of certain classes in classpath
                                             //            - existence  of  specific  beans
                                             //            - value of  some properties.
                                             //            e.g: If project depends on spring-boot-starter-web
                                             //            Spring Boot will initialize an embedded Tomcat server
                                             //            with the minimal configuration required.
  @ComponentScan({ "com.myComp"})            //  ← [configuration] Package to scan for Spring components
  public class App {                         //    (use along with Spring @Configuration and/or
    public static void main(String[] args) { //    @SpringBootApplication), args);
    }
  }

- file: com/myComp/config/ [qa] [error_control]

    package com.myComp.config;


    import javax.validation.ConstraintViolation;
    import javax.validation.ConstraintViolationException;

    import org.hibernate.exception.JDBCConnectionException;
    import org.springframework.http.*;
    import org.springframework.validation.FieldError;
    import org.springframework.validation.ObjectError;
    import org.springframework.web.bind.MethodArgumentNotValidException;
    import org.springframework.web.bind.MissingServletRequestParameterException;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.context.request.WebRequest;
    import org.springframework.web.method.annotation.MethodArgumentTypeMismatchException;
    import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

    // TODO:(qa) Review.
    @ControllerAdvice                                              // ← (Spring 3.2+): Bºhandle exceptions acrossº
    public class CustomControllerAdvice                            // Bºwhole applicationº (vs individual controller).
    extends ResponseEntityExceptionHandler {                       //   "Sort of" exception-interceptor thrown by
                                                                   //   methods annotated with @RequestMapping.

      @ExceptionHandler(JDBCConnectionException.class)             //  ← Allows different exception handling by
      public ResponseEntity˂Object˃                                //    type (Recoverable, external, internal,
      connectionException(final JDBCConnectionException e) {       //    ...)
        // log, notifications, ...
        CustomClientErrorNotification customErr =
          new CustomClientErrorNotification(...);
        return new ResponseEntity˂˃( customErr,
          HttpStatus.BAD_REQUEST );
      }

      @Override protected ResponseEntity˂Object˃
      handleMethodArgumentNotValid(
        MethodArgumentNotValidException ex,
        HttpHeaders headers, HttpStatus status, WebRequest request) {
        final List˂String˃ error_list = new ArrayList˂˃();
        for (FieldError error : ex.getBindingResult().getFieldErrors()) {
          error_list.add(error.getField() + ": " + error.getDefaultMessage());
        }
        CustomClientErrorNotification customErr =
          new CustomClientErrorNotification(error_list, ...);
        return new ResponseEntity˂˃(customErr,  HttpStatus.BAD_REQUEST);
      }


      @Override protected ResponseEntity˂Object˃
      handleMissingServletRequestParameter(...) { ... }

      @ExceptionHandler(ConstraintViolationException.class)  // ← NOT inherited from
      public ResponseEntity˂Object˃                          //   ResponseEntityExceptionHandler
      handleConstraintViolation(...) { ... }
    }
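    Sketch (names hypothetical): pairing the CustomSecurityException thrown by
    AAAServiceImpl above with its own advice handler, so controllers never
    translate security errors themselves:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class SecurityControllerAdvice {

  @ExceptionHandler(CustomSecurityException.class)   // ← catches the exception thrown
  public ResponseEntity<Object>                      //   by ANY @RequestMapping method
  handleSecurity(final CustomSecurityException e) {
    // log / notify here, then map to a 401:
    return new ResponseEntity<>(e.getMessage(), HttpStatus.UNAUTHORIZED);
  }
}
```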
- file: com/myComp/config/ [configuration]   // ← Main Config point.
    package com.myComp.config;                                     //   (autoscan is another possibility)

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.client.RestTemplate;            // ← Note: RestTemplate has been deprecated by WebClient
    import org.springframework.web.filter.CharacterEncodingFilter;            with Spring 5.0. The new one offers async
    import org.web3j.protocol.admin.Admin;                                    support, while RestTemplate does NOT. webClient
    import org.web3j.protocol.Web3j;                                          also has support for timeouts, retry with
                                                                              exponential backoff, ...
    @Configuration
    public class ConfigurationCore {
      @Bean public Service1 getService1() { return new Service1Impl(); }
      @Bean public Service2 getService2() { return new Service2Impl(); }
      @Bean public Service3 getService3() { return new Service3Impl(); }

      @Bean public Entity1Service
      getEntity1Service() {  return new Entity1ServiceImpl(); }   // ← [persistence][JPA]
      @Bean public Entity2Service
      getEntity2Service() {  return new Entity2ServiceImpl(); }   // ← [persistence][JPA]

      @Bean
      CharacterEncodingFilter characterEncodingFilter() {
        final CharacterEncodingFilter filter =
            new CharacterEncodingFilter();
        filter.setEncoding("UTF-8");
        filter.setForceEncoding(true);
        return filter;
      }

      @Bean AAAService getAAAService() {                      // ← [aaa]
        return new AAAServiceImpl();
      }
    }
- file: com/myComp/apirest/
    package com.myComp.api.serviceZ.controller;

    import javax.transaction.Transactional;                          // ← [persistence][jpa][erro_control][qa]
    import javax.validation.Valid;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;

    @RestController
    @RequestMapping(value = "/api/v1/service1")
    @Api(tags = "service1,topic1,topic2")
    public class ControllerService1 {

      final Logger logger = LoggerFactory.getLogger(this.getClass());

      @Autowired private AAAService AAAService;           // [aaa]

      @Autowired Service1 service1;
      @Autowired Service2 service2;
      @Autowired Service3 service3;

      @Autowired Entity1Service entity1Service;               // Entity1Service1 uses Entity1Repository for queries
                                                              // Entity1 por inserts/deletes/...

      @ApiOperation(value = "human readable api summary")
      @PostMapping(
         value = "/search/entity1",                           // ← Final URL /api/v1/service1/search/entity1
         produces = "application/json")
      public ResponseEntity
        ˂List˂Entity1˃˃ searchEntity1(
          @RequestBody CustomSearchRequest request)
          throws IllegalAccessException {                     // ← [aaa]
        final List˂Entity1˃ response =
           service1.getEntity1ListQuery1(request);            // ← Note: Throw exception on service implementation
                                                              //   if some error arises (vs returning null).
                                                              //   Then configure CustomControllerAdvice
                                                              //   to handle generic errors.
        return new ResponseEntity˂˃(response, HttpStatus.OK);
      }

      @Transactional                                             // ← [jpa] declaratively control TX boundaries on
                                                                 //   CDI managed beans and Java EE managed beans.
                                                                 //   (class or method level)
      @ApiOperation(value = "human readable api summary")
      @PostMapping(
         value = "/entity1",
         produces = "application/json")
      public ResponseEntity˂Void˃ create(
         @RequestBody @Valid Entity1 jsonEntity1,
         @RequestHeader(name = "Authorization") String token,
         @RequestParam(name = "param1", required = true) String param1,
         @RequestParam(name = "param2", required = true) String param2
      ) {
        final String username = AAAService.getUserByToken(token);          // [aaa]
        AAAUserEntity user = AAAService.findByUsernameOrThrow(username);
        entity1Service.insert(jsonEntity1);
        return new ResponseEntity˂˃(HttpStatus.CREATED);
      }

      @ApiOperation(value = "human readable api summary")
      @GetMapping(value = "/entity1/{entity1_id}")
      public ResponseEntity˂Entity1˃ getChartJson(
          @PathVariable int entity1_id,
          @RequestHeader (name="Authorization") String token,
          @RequestParam  (name = "startIndex", required = false) Long param1,
          @RequestParam  (name = "maxRows"   , required = false) Long param2)
      { ... }

- file: com/myComp/service/
    package com.myComp.api.serviceZ.service;

    import java.util.List;

    public interface Entity1Service {

      List˂Entity1˃ getEntity1ListQuery1(CustomSearchRequest req);
      void          insert            (Entity1 entity);
    }

- file: com/myComp/service/
    package com.myComp.api.serviceZ.service;

    import javax.persistence.*;
    import javax.persistence.criteria.*;
    import javax.transaction.Transactional;

    public class Entity1ServiceImpl implements Entity1Service {

      @Autowired Entity1Repository entity1Repository;                   // [persistence][jpa]
      @Autowired Entity2Repository entity2Repository;
      @Autowired EntityManager em;                                      // [persistence][jpa]

      @Transactional                                                    // [persistence][jpa]
      public void insert(Entity1 entity1) {
        em.persist(entity1);                                            // INSERT INTO ... [persistence][jpa]
        // anything else (persist/update related entities, ...)
      }

      public List˂Entity1˃ getEntity1ListQuery1(CustomSearchRequest req) {
        final List˂Entity1˃ entity1_list =
            entity1Repository.query1(req.entity3Id, req.col2Value);
        return entity1_list;
      }

      public List˂Entity1˃ getEntity1ListQuery2(CustomSearchRequest req) {
        final CriteriaBuilder           cb = em.getCriteriaBuilder();            // [persistence][jpa]
        final CriteriaQuery˂Entity1˃ query = cb.createQuery(Entity1.class);      // [persistence][jpa]
        final Root˂Entity1˃           root = query.from(Entity1.class);          // [persistence][jpa]

        final Predicate  p1 = cb.equal( ... ), // [TODO]
                         p2 = cb.equal( ... );
        final Predicate all = cb.and(p1, p2);
        TypedQuery˂Entity1˃ typedQuery = em.createQuery(query);

        return typedQuery.getResultList();
      }
    }
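- Hedged sketch of the Entity1Repository referenced above ("query1"):
  a Spring Data JPA repository mixing a derived query and an explicit @Query.
  Method names, parameters and JPQL paths are illustrative assumptions:

```java
package com.myComp.api.serviceZ.service;

import java.time.LocalDateTime;
import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface Entity1Repository extends JpaRepository<Entity1, Long> {

  // ← query derived automatically from the method name
  List<Entity1> findByColumn2(LocalDateTime column2);

  // ← explicit JPQL, navigating the entity3List mapping of Entity1
  @Query("SELECT e1 FROM Entity1 e1 JOIN e1.entity3List e3 "
       + "WHERE = :entity3Id AND e1.column2 = :col2Value")
  List<Entity1> query1(@Param("entity3Id") Long entity3Id,
                       @Param("col2Value") LocalDateTime col2Value);
}
```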

Spring Boot/Cloud Summary
Spring Cloud Configuration
  Scenario: 2+ APIs have been developed and run independently.


  •ºConfiguration Serverº: centralizes micro-services configuration.
                           (Sort of "etcd" for Spring)
  •ºDiscovery Serverº    : allow apps to find each other
  •ºGateway Serverº      : reverse proxy encapsulating all decoupled
                           micro-services in a single port.

Bº# Configuration Server HOW-TO:#º
   maven/gradle dependencies:
   GROUP ID                   ARTIFACT ID                spring-cloud-config-server                spring-cloud-starter-eureka
   org.springframework.boot   spring-boot-starter-security

   @SpringBootApplication            ← Spring Boot Entry point
   @EnableConfigServer               ← Make Configuration Service discoverable
   @EnableEurekaClient               ← via EurekaClient
   public class ConfigApplication {}

  ºapplication.propertiesº:                       ← Standard Spring Boot config file
   server.port=8081                                           ┐
   spring.cloud.config.server.git.uri=                        │ ← set to real git path
       file:///${user.home}/application-config                ├─ Spring Cloud
   eureka.client.region=default                               │  Config.Server config
   eureka.client.serviceUrl.defaultZone=                      │
          http://discUser:discPassword@localhost:8082/eureka/ │
   security.user.password=configPassword                      │
   security.user.role=SYSTEM                                  ┘

Bº# Discovery Server HOW-TO: #º
  GROUP ID                   ARTIFACT ID                spring-cloud-starter-eureka-server                spring-cloud-starter-config
  org.springframework.boot   spring-boot-starter-security

  @SpringBootApplication     ← Spring Boot Entry point
  @EnableEurekaServer        ← Eureka Discovery Server setup
  public class DiscoveryApplication {...}

  BºSecure Server endpoint annotationsº:
  @Order(1)                              ← There are two security configurations:
  public class SecurityConfig              one for the Discovery Server endpoints
  extends WebSecurityConfigurerAdapter {   and one for the dashboard.
     public void configureGlobal(
         AuthenticationManagerBuilder auth) { ... }
     protected void configure(HttpSecurity http) { ... }
  }

  @Order(2)
  public static class AdminSecurityConfig
  extends WebSecurityConfigurerAdapter {
    protected void configure(HttpSecurity http) {
      http.authorizeRequests()
          .antMatchers(HttpMethod.GET, "/")
          .antMatchers("/info", "/health")
          ...
    }
  }

  └ºbootstrap.propertiesº             ← "" must match the
                                                       file name in the configuration repository.             ← URL of the config server

  └ºdiscovery.propertiesº                            ← Add also to application-config Git repo

Bº# Gateway Server HOW-TO: #º
  GROUP ID                   ARTIFACT ID                spring-cloud-starter-config                spring-cloud-starter-eureka                spring-cloud-starter-zuul
  org.springframework.boot   spring-boot-starter-security

  @SpringBootApplication     ← Spring Boot Entry Point
  @EnableZuulProxy           ← enable the Zuul reverse proxy
  public class GatewayApplication {}

    public class SecurityConfig
    extends WebSecurityConfigurerAdapter {
      public void configureGlobal
         (AuthenticationManagerBuilder auth)
      throws Exception { ... }

      protected void configure(HttpSecurity http)
      throws Exception { ... }
    }

    └ºº (from app-config Git repo)
      eureka.client.region = default
      eureka.client.registryFetchIntervalSeconds = 5

    └º**º   ← route /book-service HTTP requests
                                                             to our Book micro-service

Bº# Common Dependencies for Config Client, Eureka, JPA, Web and Security: #º
  GROUP ID                         ARTIFACT ID                  spring-cloud-starter-config                  spring-cloud-starter-eureka
  org.springframework.boot         spring-boot-starter-data-jpa
  org.springframework.boot         spring-boot-starter-web
  org.springframework.boot         spring-boot-starter-security

Bº# (Sharing) Session Configuration: #
  └ dependencies to add to Discovery server, gateway server and micro-service 1/2/...
    GROUP ID                       ARTIFACT ID
    org.springframework.session    spring-session
    org.springframework.boot       spring-boot-starter-data-redis

  └ Add the next IoC config to the DISCOVERY SERVER and REST APIs:
    @EnableRedisHttpSession
    public class SessionConfig
    extends AbstractHttpSessionApplicationInitializer {  }

  └ For the GATEWAY SERVER flush the session eagerly:
    @EnableRedisHttpSession(redisFlushMode = RedisFlushMode.IMMEDIATE)
    public class SessionConfig
    extends AbstractHttpSessionApplicationInitializer {}

  └ For the GATEWAY SERVERºadd a simple filter to forward the sessionº
   ºso that authentication will propagate to another service after login:º

    public class SessionSavingZuulPreFilter
    extends ZuulFilter {
      @Autowired
      private SessionRepository repository;

      @Override
      public boolean shouldFilter() {
        return true;
      }

      @Override
      public Object run() {
        final RequestContext context   = RequestContext.getCurrentContext();
        final HttpSession httpSession  = context.getRequest().getSession();
        final Session session          = repository.findById(httpSession.getId());

        context.addZuulRequestHeader(
          "Cookie", "SESSION=" + httpSession.getId());
        return Rºnullº;
      }

      @Override
      public String filterType() {
        return "pre";
      }
      @Override public int filterOrder() {return 0;}
    }

  └ Integration test (RestAssured) through the Gateway:

  private final String ROOT_URI = "http://localhost:8080";
  private FormAuthConfig formConfig
     = new FormAuthConfig("/login", "username", "password");

  @Before
  public void setup() {
    RestAssured.config = config().redirect(
        redirectConfig().followRedirects(false));
  }

  @Test
  public void whenGetAllBooks_thenSuccess() {
    Response response = RestAssured.get(ROOT_URI + "/book-service/books");
    Assert.assertEquals(HttpStatus.OK.value(), response.getStatusCode());
  }

  // Try to access protected resource:
  @Test
  public void protectedResourceMustRedirectToLogin() {
    Response response =
      RestAssured.get(ROOT_URI + "/book-service/books/1");
    Assert.assertEquals(
      HttpStatus.FOUND.value(), response.getStatusCode());
    Assert.assertEquals(
      "http://localhost:8080/login", response.getHeader("Location"));
  }

  ┌────────→ @SpringBootApplication      HTTP Request                        ºCLOUDº
beans to      ┌──like─  ─┐
  │           │  these   │         ┌───────────────────┐ ask config       ┌───────────────┐
 ┌────────────┴──┐       v         │ @RestController   ───────────────────→ Configuration │
 │@Configuration │   ┌────────┐    │                   │ config.propert.  │    Server     │
 │               │   │@Service│    │                   │←──────────────── └───────────────┘
 │               │   └──────┬─┘    │  @Autowired       │
 │ @Bean         │          └──┬────→ Service service; │register itself as service
 │ public MyBean │   ┌─────────┴┐  │                   ├──────────────────→──────────┐
 │ providerBean()│   │@Component│  │                   │ask for service   │  Service │
 │               │   └──────────┘  │  @RequestMapping  ├─────────────────→  Discovery│
 └───────────────┘                 │  public Map       │←──────────────── └──────────┘
                                   │   serverRequest() │ URL response

º@EnableConfigServerº  turns the app into a server that other apps can get
                       their configuration from.
                       Use in the client's
                       bootstrap.properties to point the client
                       @SpringBootApplication at the config server.

º@EnableEurekaServerº  turns your app into an Eureka discovery service

º@EnableDiscoveryClientº makes your app register in the service discovery
                        server and discover other services through it.

º@EnableCircuitBreakerº- configures Hystrix circuit breaker protocols.
                         Note: Hystrix looks to be discontinued, replaced by
                         Resilience4j.

º@HystrixCommand(fallbackMethod = “fallbackMethodName”)º
  marks methods to fall back to another method if they cannot succeed normally.
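  A minimal @HystrixCommand sketch (assumes spring-cloud-starter-netflix-hystrix
  on the classpath and @EnableCircuitBreaker on the main class; class, method
  and URL names are illustrative):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class BookClient {

  @Autowired private RestTemplate restTemplate;

  @HystrixCommand(fallbackMethod = "recommendedFallback")  // ← on failure/timeout Hystrix
  public String recommended() {                            //   invokes the fallback instead
    return restTemplate.getForObject(
        "http://book-service/recommended", String.class);
  }

  String recommendedFallback() {      // ← must share the signature of the protected method
    return "default-reading-list";    //   (degraded but safe answer)
  }
}
```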
Spring: Non classified
Spring Batch
  Quartz is a scheduling framework: "execute something every hour
  or every last friday of the month".

  Spring Batch is a framework that defines the "something" that will
  be executed. You can define a job that consists of steps. Usually a
  step consists of an item reader, an optional item
  processor and an item writer, but you can define a custom step. You can
  also tell Spring Batch to commit every 10 items, and a lot of other
  stuff. From Spring 2 on, Spring itself can also schedule tasks.
  (See also JSR-352, Batch Applications
    for the Java Platform)
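  The reader → processor → writer step described above can be sketched with the
  Spring Batch 4 builder factories, committing in chunks of 10 items. All bean
  and job names are illustrative assumptions:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.item.ItemProcessor;
import org.springframework.batch.item.ItemReader;
import org.springframework.batch.item.ItemWriter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class BatchConfig {

  @Bean
  public Step step1(StepBuilderFactory steps,
                    ItemReader<String> reader,
                    ItemProcessor<String, String> processor,
                    ItemWriter<String> writer) {
    return steps.get("step1")
        .<String, String>chunk(10)        // ← commit every 10 items
        .reader(reader)
        .processor(processor)             // ← optional transformation stage
        .writer(writer)
        .build();
  }

  @Bean
  public Job job(JobBuilderFactory jobs, Step step1) {
    return jobs.get("job1").start(step1).build();
  }
}
```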
Reactive (5.0+)
- Note: Servlet 3.1+ API for non-blocking I/O leads away from
  the rest of the Servlet API where contracts are synchronous
  (Filter, Servlet) or blocking (getParameter, getPart).
- fully non-blocking, handling concurrency with a small number of threads
- supports Reactive Streams non-blocking back pressure:
  In synchronous/imperative code, blocking calls serve as a natural form
  of back pressure that forces the caller to wait.
  In non-blocking code it becomes important to control the rate
   of events so that a fast producer does not overwhelm its destination.
  Spring Reactive Streams is a small spec, also adopted in Java 9,
  that defines the interaction between asynchronous components
  with back pressure. Ex: a data repository (Publisher),
  produces data that an HTTP server (Subscriber), can then "forward"
  to the response. Main purpose of Reactive Streams is to allow
  the subscriber to control how fast or how slow the publisher
  will produce data.
  If a publisher can’t slow down then it has to decide whether
  to buffer, drop, or fail.
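- The back-pressure contract described above (adopted in Java 9 as
  java.util.concurrent.Flow) can be shown with the JDK alone. In this
  runnable sketch the subscriber pulls items one request(1) at a time,
  so the publisher can never outrun it:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackpressureDemo {

  // Collects 'count' integers, requesting them ONE at a time:
  // the subscriber (not the publisher) decides the delivery rate.
  public static List<Integer> consumeOneByOne(int count) {
    final List<Integer> received = new ArrayList<>();
    final CountDownLatch done = new CountDownLatch(1);
    try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
      publisher.subscribe(new Flow.Subscriber<Integer>() {
        private Flow.Subscription subscription;
        public void onSubscribe(Flow.Subscription s) {
          this.subscription = s;
          s.request(1);                      // ← ask for the first item only
        }
        public void onNext(Integer item) {
          received.add(item);
          subscription.request(1);           // ← pull the next item at our own pace
        }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete()         { done.countDown(); }
      });
      for (int i = 0; i < count; i++) { publisher.submit(i); }
    }                                        // ← close() triggers onComplete after delivery
    try { done.await(); } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return received;
  }

  public static void main(String[] args) {
    System.out.println(consumeOneByOne(5));  // → [0, 1, 2, 3, 4]
  }
}
```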
- As a general rule WebFlux APIs accept a plain Publisher as input,
  adapt it to Reactor types internally, use those, and then return
  either Flux or Mono as output.
- runs on Netty, Undertow, Servlet 3.1+ containers
- TODO: WebClient
- TODO: WebTestClient
- TODO: WebSocket
- The spring-web module contains the reactive building block:  
  HTTP abstractions, Reactive Streams server adapters, reactive codecs,
  and a core Web API.
- public spring-web APIs Server support is organized in two layers:
  - HttpHandler and server adapters : the most basic, common API for HTTP
    request handling with Reactive Streams back pressure, running on different
    HTTP servers (Netty, Undertow, Servlet 3.1+ containers).
  - WebHandler API : slightly higher level but still general purpose server
    web API with exception handlers (WebExceptionHandler), filters (WebFilter),
    and a target handler (WebHandler)
    All components work on ServerWebExchange — a container for the HTTP
    request and response that also adds request attributes, session attributes,
    access to form data, multipart data, and more.
- Codecs: The spring-web module provides
  HttpMessageReader(DecoderHttpMessageReader) and
  HttpMessageWriter(EncoderHttpMessageWriter) for encoding and decoding the
  HTTP request and response body with Reactive Streams.
  Basic Encoder and Decoder implementations exist in spring-core but
  spring-web adds more for JSON, XML, and other formats.

- central controller
- discovers delegate components from Spring configuration
  If declared with the bean name "webHandler" it is in turn
  discovered by WebHttpHandlerBuilder which puts together a
  request processing chain as described in WebHandler API
- typical WebFlux application Spring configuration:
  - DispatcherHandler named "webHandler"
  - WebFilters
  - WebExceptionHandlers
  - DispatcherHandler special beans
  - Others
- The configuration is given to WebHttpHandlerBuilder to
  build the processing chain:
 (The resulting HttpHandler is ready for use with a server adapter)
  ApplicationContext context = ...
  HttpHandler handler = WebHttpHandlerBuilder.applicationContext(context).build();
- "special beans":  Spring-managed instances implementing one of the contracts listed:

  Bean type            | Explanation
  HandlerMapping       | Map a request to a handler.
                       | mapping is based on some criteria
                       | the details of which vary by
                       | HandlerMapping implementation 
                       | (annotated controllers,
                       | simple URL pattern mappings,...)
  HandlerAdapter       | Helps the DispatcherHandler to
                       | invoke a handler mapped to a
                       | request regardless of how the
                       | handler is actually invoked.
                       | For example invoking an annotated
                       | controller requires resolving
                       | various annotations. The main
                       | purpose of a HandlerAdapter
                       | is to shield the DispatcherHandler
                       | from such details.
  HandlerResultHandler | Process the HandlerResult returned
                       | from a HandlerAdapter

- request flow:
  for map in HandlerMapping_list:
    //  (continue if map doesn't match request)
    handler = first handler in map matching request
    HandlerResult res = handler()

  1) Each HandlerMapping is asked to find a
     matching handler and the first match is used
  2) If a handler is found, it is executed through
     an appropriate HandlerAdapter which exposes
     the return value from the execution as a HandlerResult
  3) The HandlerResult is given to an appropriate
     HandlerResultHandler to complete processing
     by writing to the response directly or using
     a view to render.
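The three-step flow above can be sketched in plain Java. This is a deliberately simplified, hypothetical model (MiniDispatcher, Handler, etc. are invented names, not Spring's real reactive signatures):

```java
import java.util.List;
import java.util.Optional;

// Simplified, hypothetical analogues of Spring's contracts (not the real API):
interface Handler { Object handle(String request); }               // target handler
interface HandlerMapping { Optional<Handler> getHandler(String request); }
interface HandlerResultHandler { String handleResult(Object result); }

class MiniDispatcher {
    private final List<HandlerMapping> mappings;
    private final HandlerResultHandler resultHandler;

    MiniDispatcher(List<HandlerMapping> mappings, HandlerResultHandler rh) {
        this.mappings = mappings;
        this.resultHandler = rh;
    }

    String dispatch(String request) {
        // 1) ask each HandlerMapping for a matching handler; first match wins
        Handler handler = mappings.stream()
                .map(m -> m.getHandler(request))
                .flatMap(Optional::stream)
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no handler for " + request));
        // 2) invoke the handler (a real HandlerAdapter hides how invocation works)
        Object result = handler.handle(request);
        // 3) hand the result to a HandlerResultHandler to write the response
        return resultHandler.handleResult(result);
    }
}
```

In real WebFlux every step returns a reactive type (Mono/Flux) instead of a plain value; the shape of the loop is the same.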

BºProcessing Chainº
- The processing chain can be put together with WebHttpHandlerBuilder which builds an
HttpHandler that in turn can be run with a server adapter.
To use the builder either add components individually or point to an ApplicationContext
to have the following detected:

 │Bean name             │Bean type            │Count│ Description
 │webHandler            │WebHandler           │1    │ Target handler after filters
 │"any"                 │WebFilter            │0..N │ Filters
 │"any"                 │WebExceptionHandler  │0..N │ Exception handlers after filter chain
 │webSessionManager     │WebSessionManager    │0..1 │ Custom session manager
 │                      │                     │     │ DefaultWebSessionManager by default
 │serverCodecConfigurer │ServerCodecConfigurer│0..1 │ Custom form and multipart data decoders
 │                      │                     │     │ ServerCodecConfigurer.create() by default
 │localeContextResolver │LocaleContextResolver│0..1 │ Custom resolver for LocaleContext;
 │                      │                     │     │ AcceptHeaderLocaleContextResolver by default

BºRequired dependenciesº
Server name     │  Group id              │ Artifact name      │  Code snippet
Reactor Netty   │ io.projectreactor.ipc  │ reactor-netty      │ HttpHandler handler = ...
                │                        │                    │ ReactorHttpHandlerAdapter adapter =
                │                        │                    │     new ReactorHttpHandlerAdapter(handler);
                │                        │                    │ HttpServer.create(host, port).
                │                        │                    │     newHandler(adapter).block();
Undertow        │ io.undertow            │ undertow-core      │ HttpHandler handler = ...
                │                        │                    │ UndertowHttpHandlerAdapter adapter =
                │                        │                    │      new UndertowHttpHandlerAdapter(handler);
                │                        │                    │ Undertow server = Undertow.builder().
                │                        │                    │      addHttpListener(port, host).
                │                        │                    │      setHandler(adapter).build();
                │                        │                    │ server.start();
Tomcat          │ org.apache.tomcat.embed│ tomcat-embed-core  │ HttpHandler handler = ...
                │                        │                    │ Servlet servlet = new
                │                        │                    │     TomcatHttpHandlerAdapter(handler);
                │                        │                    │
                │                        │                    │ Tomcat server = new Tomcat();
                │                        │                    │ File base = new File(
                │                        │                    │    System.getProperty(""));
                │                        │                    │ Context rootContext = server.
                │                        │                    │    addContext("", base.getAbsolutePath());
                │                        │                    │ Tomcat.addServlet(rootContext, "main", servlet);
                │                        │                    │ rootContext.addServletMappingDecoded("/", "main");
                │                        │                    │ server.setHost(host);
                │                        │                    │ server.setPort(port);
                │                        │                    │ server.start();
Jetty           │ org.eclipse.jetty      │ jetty-server       │ HttpHandler handler = ...
                │                        │ jetty-servlet      │ Servlet servlet =
                │                        │                    │     new JettyHttpHandlerAdapter(handler);
                │                        │                    │
                │                        │                    │ Server server = new Server();
                │                        │                    │ ServletContextHandler contextHandler =
                │                        │                    │     new ServletContextHandler(server, "");
                │                        │                    │ contextHandler.addServlet(
                │                        │                    │     new ServletHolder(servlet), "/");
                │                        │                    │ contextHandler.start();
                │                        │                    │
                │                        │                    │ ServerConnector connector =
                │                        │                    │     new ServerConnector(server);
                │                        │                    │ connector.setHost(host);
                │                        │                    │ connector.setPort(port);
                │                        │                    │ server.addConnector(connector);
                │                        │                    │ server.start();

STOMP: WebSockets
• JHipster is a development platform to generate, develop and deploy
  Spring Boot + Angular / React / Vue Web applications and Spring microservices.
• Created by Julien Dubois, currently (2021-06) Java Developer Advocacy
  manager at Microsoft
Async/Reactive Programming
Lambdas Intro
• Java 8+.
• simplifies development of non-blocking style APIs
  (low-level CompletableFuture or higher level ReactiveX).
• Context:
  The conventional computing model of a Turing machine takes for granted
  that data is already available to be processed.
  In the Internet era, data is arriving at random and we don't want to block
  our CPU in an infinite loop waiting for such data to arrive.

  The conventional approach is to let the OS scheduler divide the CPU
  into threads or processes sharing the hardware at defined intervals.
  While this approach works well for standard load scenarios, it fails for
  "modern" workloads with thousands or tens of thousands of simultaneous
  clients accessing the server. Each new OS thread requires some extra
  memory in the OS kernel (about 2 kilobytes per thread, and even more
  per process). Switching from thread to thread or process to process becomes
  expensive or prohibitive with that number of concurrent I/O flows.
  This is even worse when our server is virtualized with many other
  competing VMs running on the same physical server.

  Async programming tries to reuse the same thread for many different
  clients or flows of I/O data, providing much better usage of hardware
  resources and avoiding unnecessary context switches between threads or
  processes.

  The term "reactive" refers to programming models that are built around
  reacting to change — network component reacting to I/O events, UI controller
  reacting to mouse events, etc. In that sense non-blocking is reactive because
  instead of being blocked we are now in the mode of reacting to notifications
  as operations complete or data becomes available.
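
A minimal JDK-only illustration of this non-blocking, react-to-completion style with CompletableFuture (fetchUser and its fake latency are made up for the example):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class NonBlockingDemo {
    // Simulates a slow I/O call executed off the caller's thread
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> {
            try { TimeUnit.MILLISECONDS.sleep(50); }   // pretend network latency
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            return "user-" + id;
        });
    }

    public static void main(String[] args) {
        // The caller thread is NOT blocked: it registers reactions and moves on.
        CompletableFuture<Integer> length =
                fetchUser(42)
                    .thenApply(String::toUpperCase)    // react: transform when ready
                    .thenApply(String::length);        // react: chain another step
        System.out.println("caller thread is free to do other work...");
        System.out.println(length.join());             // join() only for the demo
    }
}
```

The chained thenApply calls are the "reacting to notifications" described above: each runs only when the previous stage completes, without any thread parked on a blocking call.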

  Reactive Streams is a small spec, also adopted in Java 9, that defines
  the interaction between asynchronous components with back pressure. For
  example a data repository — acting as Publisher, can produce data that an
  HTTP server — acting as Subscriber, can then write to the response. The main
  purpose of Reactive Streams is to allow the subscriber to control how fast or
  how slow the publisher will produce data.

  Reactive Streams is of interest to low-level reusable libraries, but
  final applications are better suited to a higher-level and richer
  (functional) API like the Java 8+ Collection/Stream API or more general
  APIs like those provided by ReactiveX.

  Reactive programming can also be compared with the way data flows in Unix
  pipelines when handling text files. In the next Unix command there is a
  file input (it can be a real file in the hard-disk or a socket receiving
  data) and the different commands in the pipe consume STDIN and result to
  STDOUT for further processing.
  $ cat input.csv | grep "...." | sort | uniq | ... ˃  output.csv
  Reactive Java frameworks are usually much faster since everything executes
  in the same process (a Unix pipeline requires the help of the underlying
  OS to work), and the type of input/output data can be any sort of Java
  object (not just file text).
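The same pipeline idea expressed in-process with the JDK Stream API (a rough analogue of the shell command above — no back pressure or async boundaries, just the declarative chaining):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class PipelineDemo {
    // cat input | grep "err" | sort | uniq  — as an in-process stream pipeline
    static List<String> pipeline(Stream<String> input) {
        return input
                .filter(line -> line.contains("err"))  // grep
                .sorted()                              // sort
                .distinct()                            // uniq
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> out = pipeline(Stream.of("err b", "ok", "err a", "err a"));
        System.out.println(out);   // [err a, err b]
    }
}
```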
- JDK 1.9+
- Reactive Streams was adopted by the JDK in the form of the java.util.concurrent.Flow API.
- It allows two different libraries that support asynchronous streaming to connect to each other,
    with well specified semantics about how each should behave, so that backpressure, completion, cancellation
    and error handling is predictably propagated between the two libraries.
- There is a rich ecosystem of open source libraries that support Reactive Streams,
    and since its inclusion in JDK9, there are a few in-development implementations
    targeting the JDK, including the incubating JDK9 HTTP Client,
    and the Asynchronous Database Adapter (ADBA)
    effort that have also adopted it
- (See also What can Reactive Streams offer to EE4J)
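A minimal sketch of the java.util.concurrent.Flow API (JDK 9+) in action: the Subscriber paces the Publisher by calling request(n), which is exactly the back-pressure hook the spec standardizes. SubmissionPublisher is the JDK's bundled Publisher implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    static List<String> run() throws InterruptedException {
        List<String> received = new ArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                 // back pressure: ask for ONE item
                }
                @Override public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);      // ready for the next one
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete()         { done.countDown(); }
            });
            publisher.submit("a");
            publisher.submit("b");
        }                                         // close() triggers onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());                // [a, b]
    }
}
```

If the subscriber never calls request(), the publisher buffers (and submit() eventually blocks) — the "buffer, drop, or fail" decision mentioned earlier.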

ReactiveX provides a set of very well-thought-out cross-language abstractions to
implement the reactive patterns.

                                    unsubscribe  : - Observable can opt to stop
                                                   event-emission if no more clients
                                                   are subscribed
rLoop-of-Observable-emitted-events:º               - unsubscription will cascade back
 ·  Observable → ˂˂IObserver˃˃: onNext(event)      through the chain of operators
 ·                                                 applying to associated Observable.
 ·  observer   →   observer*1 : handle event    *1 also called "subscriber", "watcher"
 ·                                                 or "reactor" ("reactor pattern")
 Observable    → ˂˂handler˃˃  : onCompleted()
 observer      →  observer    : handle event

 RºWARN:º There is no canonical naming standard in RxJava

 ºObservable˂T˃º → operator1 → ... → operatorN  → ºObserverº
Oºpushesºobjects       ^        ^      ^           Subscribes to
 (events) from         ·        ·      ·           the observable
 any source            ·        ·      ·           events
 (ddbb, csv,...)       ·        ·      ·           .onNext()
                       ·        ·      ·           .onCompleted()
        BºTIPº:Learning which operators to use     .onError()
               for a situation and how to combine  .onSubscribe(Disposable d);
               them is the key to master RxJava    ^
 ├─onSubscribe(Disposable): provides a Disposable to free any resources created by the rxJava pipeline.
├─onNext()     : passes each item, one at a time,   to the ºObserverº
├─onCompleted(): communicates a completion event    to the ºObserverº
└─onError()    : communicates an error up the chain to the ºObserverº
                 where the Observer typically defines how to handle it.
                 Unless retry() operator is used to intercept the error,
                 the Observable chain typically terminates, and no
                 more emissions will occur.
                 See also Gºoperators 'Catch' and 'Retry'º

By default, Observables execute work on the immediate thread,
which is the thread that declared the Observer and subscribed it.
Not all Observables fire on the immediate thread though (Observable.interval(), ...).

ºCreating a source Observable:º
  Observable˂String˃ source01 = Observableº.justº("value1",...,"valueN");
  Observable˂Integer˃ source02 = Observableº.fromCallableº(() -˃ 1/0);
                                              ^^^^^^^^^^^^        ^^^
                                              Similar to .just() but errors
                                              are captured by the rxJava "pipeline"

  Observable˂String˃ source03 = Observableº.createº( emitter -˃ {
      try {
          emitter.onNext("value1");
          emitterº.onComplete()º; // ← Optional
      } catch(Throwable e) {
          emitter.onError(e);
      }
  } );
  Observable˂String˃  source04 = Observableº.fromIterableº(myIterableList);
  Observable˂Integer˃ source05 = Observableº.rangeº(1,10);
  Observable˂Long˃    source06 = Observableº.intervalº(1, TimeUnit.SECONDS);
                                              Since it operates on a timer →
                                              needs to run on a separate thread
                                              and will run on the computation
                                              Scheduler by default

  Observable˂String˃  source07 = Observableº.fromFutureº(myFutureValue);
  Observable˂String˃  source08 = Observableº.emptyº();
                                              calls onComplete() and ends

  Observable˂Integer˃ source09 = Observableº.deferº( () -˃ Observable.range(start,count));
                                        Advanced factory pattern.
                                        allows a separate state for each observer
  Observable˂Observable˂Integer˃˃ source10 = Observableº.fromCallableº( () -˃ Observable.range(start,count));

ºcreating Single/Maybe/Completable "Utility" Observables:º
 │ Single.just("Hello")        │ Maybe.just("Hello")            │ Completable.fromRunnable(       │
 │ .subscribe(...);            │   .subscribe(...);             │   () -˃ runProcess() )          │
 │                             │                                │ .subscribe(...);                │
 │ Emits a single item         │ Emits (or not) a single item   │  does not receive any emissions │
 │ºSingleObserverº             │ ºMaybeObserverº                │ ºCompletableObserverº           │
 │ .onSubscribe(Disposable d); │  .onSubscribe(Disposable d);   │  .onSubscribe(Disposable d);    │
 │ .onSuccess(T value);        │  .onSuccess(T value);          │  .onComplete();                 │
 │ .onError(Throwable error);  │  .onError(Throwable error);    │  .onError(Throwable error);     │
 │                             │  .onComplete();                │                                 │

ºCreate Test-oriented observablesº

ºDerive Observables from source:º
Observable˂Integer˃ lengths  =  sourceº.mapº   (String::length);
Observable˂Integer˃ filtered = lengthsº.filterº(i -˃ i ˃= 5);

ºcreating an Observer:º
(Lambdas in the source Observable .subscribe can be used in place)
Observer˂Integer˃ myObserver = new Observer˂Integer˃() {
  @Override public void onSubscribe(Disposable d) { //... }
  @Override public void onNext(Integer value)     { log.debug("RECEIVED: " + value); }
  @Override public void onError(Throwable e)      { e.printStackTrace();}
  @Override public void onComplete()              { log.debug("Done!"); }
};

BºCold/Hot Observablesº
  └ Cold: - Repeat the same content to different observers.
          - Represent sort-of immutable data.
          - A "cold" Observable waits until an observer subscribes to it;
            an observer is guaranteed to see the whole sequence of events.
  └ Hot : "Broadcast" to all observers at the same time.
          - A "hot" Observable may begin emitting items as soon as it is created.
          - An observer connecting "later" will lose old emissions.
          - Representºreal-time eventsº. They are time-sensitive.
          - Emissions will start when the first observer calls connect().
          - a cold/hot observable can generate a new hot observable by
            calling publish() which will return a hot ConnectableObservable.
            Helpful to avoid the replay of data to each subscribed Observer.

          - NOTE: A "Connectable" Observable: does NOT begin emitting items
              until its Connect method is called, whether or not any observers
              have subscribed to it.

BºAbout Nullº
 └ In RxJava 2.0, Observables ☞GºNO LONGER SUPPORT EMITTING null VALUESº☜ !!!

BºDecision Treeº (Choosing the right Operator for a task)
  (REF: @[])
  └ Alphabetical List of Observable Operators

BºCore APIº
  └ºrx.Observableº"==" [Java 8 Stream + CompletableFuture + "back-pressure" measures ]
    @[]    └──────────────┬──────┘
                                                            probably an intermediate
                                                            buffer for incoming/outgoing
                                                            messages that acts async
                                                            when not full, and sync when full.
    - ºrx.Singleº: specialized version emitting a single item

    - Compose Observables in a chain
    - gives the real "reactive" power
    - operators allow to transform, combine, manipulate, and work
      with the sequences of items emitted by Observables.
    - declarative programming
      - Most operators operate on an Observable and return an Observable.
        Each operator in the chain modifies the Observable that results
        from the operation of the previous operator. Order matters.
        (the Builder Pattern, also supported, is non-ordered)
    - A sort of "bridge or proxy" (the Subject) is available in some implementations
      that acts both as an observer and as an Observable.
    - Needed when using multithreading into the
      cascade of Observable operators.
    - By default, the chain of Observables/operators
      will notify its observers ºon the same threadº
     ºon which its Subscribe method is calledº
    Operator|SubscribeOn         |ObserveOn
            |sets an Scheduler on|sets an Scheduler used
            |which the Observable|by the Observable to
            |should operate.     |send notifications to
            |                    |its observers.
    Scheduler "==" Thread

External links:
- Rx Workshop: Introduction @[]
- Introduction to Rx: IObservable @[]
- Mastering observables (from the Couchbase Server documentation) @[]
- 2 minute introduction to Rx by Andre Staltz ("Think of an Observable as an asynchronous immutable array.") @[]
- Introducing the Observable by Jafar Husain (JavaScript Video Tutorial) @[]
- Observable object (RxJS) by Dennis Stoyanov @[]
- Turning a callback into an Rx Observable by @afterecho @[]
Ops.classification 1
ºOperators creating new Observablesº @[]
  ºCreate  º create an Observable from scratch programmatically
  ºDefer   º do not create the Observable until the observer subscribes;
             create a fresh Observable for each observer
  ºEmpty   º create Observables that have very precise and limited behavior
  ºNever   º "
  ºThrow   º "
  ºFrom    º create an Observable from some object or data structure
  ºIntervalº create an Observable that emits a sequence of integers spaced
             by a particular time interval
  ºJust    º convert an object or a set of objects into an Observable that
             emits that or those objects
  ºRange   º create an Observable that emits a range of sequential integers
  ºRepeat  º create an Observable that emits a particular item or sequence
             of items repeatedly
  ºStart   º create an Observable that emits the return value of a function
  ºTimer   º create an Observable that emits a single item after a given delay

ºOperators Transforming Items:º @[]
  ºBuffer  º periodically gather items from input into bundles and emit these
             bundles rather than emitting the items one at a time
  ºFlatMap º transform the items emitted by an Observable into Observables,
             then flatten the emissions from those into a single Observable
  ºGroupBy º divide an Observable into a set of Observables that each emit a
             different group of items from the original Observable,
             Gºorganized by keyº
  ºMap     º transform each input-item by applying a function
  ºScan    º apply a function to each item emitted by an Observable,
             sequentially, and emit each successive value
  ºWindow  º periodically subdivide items from an Observable into Observable
             windows and emit these windows rather than emitting the items
             one at a time

ºOperators selectively filtering emitted events from a source Observableº @[p]
  ºDebounce      º only emit an item from an Observable if a particular
                   timespan has passed without it emitting another item
  ºDistinct      º suppress duplicate items emitted by an Observable
  ºElementAt     º emit only item n emitted by an Observable
  ºFilter        º emit only those items from an Observable that pass a
                   predicate test
  ºFirst         º emit only the first item, or the first item that meets a
                   condition, from an Observable
  ºIgnoreElementsº do not emit any items from an Observable but mirror its
                   termination notification
  ºLast          º emit only the last item emitted by an Observable
  ºSample        º emit the most recent item emitted by an Observable within
                   periodic time intervals
  ºSkip          º suppress the first n items emitted by an Observable
  ºSkipLast      º suppress the last n items emitted by an Observable
  ºTake          º emit only the first n items emitted by an Observable
  ºTakeLast      º emit only the last n items emitted by an Observable

ºOperators Combining multiple source Observables into a new single Observableº @[]
  ºAnd           º combine sets of items emitted by two or more Observables by
  ºThen          º means of Pattern and Plan intermediaries
  ºWhen          º
  ºCombineLatest º when an item is emitted by either of two Observables,
                   combine the latest item emitted by each Observable via a
                   specified function and emit items based on the results
                   of this function
  ºJoin          º combine items emitted by two Observables whenever an item
                   from one Observable is emitted during a time window defined
                   according to an item emitted by the other Observable
  ºMerge         º combine multiple Observables into one by merging their
                   emissions
  ºStartWith     º emit a specified sequence of items before beginning to emit
                   the items from the source Observable
  ºSwitch        º convert an Observable that emits Observables into a single
                   Observable that emits the items emitted by the
                   most-recently-emitted of those Observables
  ºZip           º combine multiple Observables' emissions together via a
                   function → emit single items for each input tuple

ºOperators handling Errors and helping to recover from error-notificationsº @[]
  ºCatch º recover from an onError notification by continuing the sequence
           without error
  ºRetry º if a source Observable sends an onError notification, resubscribe
           to it in the hope that it will complete without error

ºUtility Operators "toolbox"º @[]
  ºDelay        º shift the emissions from an Observable forward in time by a
                  particular amount
  ºDo           º register an action to take upon a variety of Observable
                  lifecycle events
  ºMaterialize  º represent both the items emitted and the notifications sent
  ºDematerializeº as emitted items, or reverse this process
  ºObserveOn    º specify the scheduler on which an observer will observe this
                  Observable
  ºSerialize    º force an Observable to make serialized calls and to be
                  well-behaved
  ºSubscribe    º operate upon the emissions and notifications from an
                  Observable
  ºSubscribeOn  º specify the scheduler an Observable should use when it is
                  subscribed to
  ºTimeInterval º convert an Observable that emits items into one that emits
                  indications of the amount of time elapsed between those
                  emissions
  ºTimeout      º mirror the source Observable, but issue an error notification
                  if a particular period of time elapses without any emitted
                  items
  ºTimestamp    º attach a timestamp to each item emitted by an Observable
  ºUsing        º create a disposable resource that has the same lifespan as
                  the Observable

ºConditional and Boolean Operators evaluating one or moreº
 Observables or items emitted by Observables @[]
  ºAll           º determine whether all items emitted by an Observable meet
                   some criteria
  ºAmb           º given two or more source Observables, emit all of the items
                   from only the first of these Observables to emit an item
  ºContains      º determine whether an Observable emits a particular item or
                   not
  ºDefaultIfEmptyº emit items from the source Observable, or a default item if
                   the source Observable emits nothing
  ºSequenceEqual º determine whether two Observables emit the same sequence of
                   items
  ºSkipUntil     º discard items emitted by an Observable until a second
                   Observable emits an item
  ºSkipWhile     º discard items emitted by an Observable until a specified
                   condition becomes false
  ºTakeUntil     º discard items emitted by an Observable after a second
                   Observable emits an item or terminates
  ºTakeWhile     º discard items emitted by an Observable after a specified
                   condition becomes false

ºMathematical and Aggregate Operatorsº @[]
 - Operators that operate on the entire sequence of items emitted by an Observable
  ºAverageº calculate the average of numbers emitted by an Observable and emit
            this average
  ºConcat º emit the emissions from two or more Observables without
            interleaving them
  ºCount  º count the number of items emitted by the source Observable and
            emit only this value
  ºMax    º determine, and emit, the maximum-valued item emitted by an
            Observable
  ºMin    º determine, and emit, the minimum-valued item emitted by an
            Observable
  ºReduce º apply a function to each item emitted by an Observable,
            sequentially, and emit the final value
  ºSum    º calculate the sum of numbers emitted by an Observable and emit
            this sum

ºBackpressure Operatorsº @[]
 - a variety of operators that enforce particular flow-control policies:
   ºstrategiesº for coping with Observables that produce items more rapidly
   than their observers consume them

ºConnectable Observable Operatorsº
 - Specialty Observables that have more precisely-controlled subscription dynamics
  ºConnect º instruct a connectable Observable to begin emitting items to its
             subscribers
  ºPublish º convert an ordinary Observable into a connectable Observable
  ºRefCountº make a Connectable Observable behave like an ordinary Observable
  ºReplay  º ensure that all observers see the same sequence of emitted items,
             even if they subscribe after the Observable has begun emitting
             items

ºOperators to Convert Observablesº
  ºToº convert an Observable into another object or data structure
Ops.classification 2
- Basic Operators:
  - Suppressing operators:
    - filter, take, skip, takeWhile/skipWhile, distinct, distinctUntilChanged
  - Transforming operators:
    - map, cast, startWith, defaultIfEmpty, switchIfEmpty, sorted, delay, repeat, scan
  - Reducing operators:
    - count, reduce, all, any, contains
  - Collection operators:
    - toList, toSortedList, toMap, toMultiMap, collect
  - Error recovery Operators:
    - onErrorReturn, onErrorReturnItem, onErrorResumeNext, retry
  - Action ("stream life-cycle") Operators:
    - doOnNext, doOnComplete, doOnError, doOnSubscribe, doOnDispose
- Combining Observables:
  - Merging: merge, mergeWith, flatMap
  - Concatenation: concat, concatWith, concatMap
  - Ambiguous: amb
  - Zipping
  - Combine Latest: withLatestFrom
  - Grouping: groupBy
- Multicasting, Replaying and Caching:
  (Multicasting is helpful in preventing redundant work being done by multiple
   Observers and instead makes all Observers subscribe to a single stream, at
   least to the point where they have operations in common)
  - "Hot" operators. (TODO)
  - Automatic connection: autoConnect, refCount, share
  - replay
  - cache
- Subjects:
  - Just like mutable variables are necessary at times even though you should
    strive for immutability, Subjects are sometimes a necessary tool to
    reconcile imperative paradigms with reactive ones.
  - PublishSubject
  - Serializing Subject
  - BehaviorSubject
  - ReplaySubject
  - AsyncSubject
  - UnicastSubject
- Custom Ops @[]
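Many of these operator families have direct JDK Stream analogues, which helps when first learning them (this is only an analogy — Streams are pull-based and lack back pressure, error channels, and hot sources):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OperatorAnalogues {
    // Suppressing (distinct/filter) and transforming (map/sorted) operators —
    // each line names the RxJava operator family it mirrors.
    static List<Integer> demo(Stream<String> words) {
        return words
                .distinct()                      // RxJava: distinct
                .filter(s -> s.length() > 4)     // RxJava: filter
                .map(String::length)             // RxJava: map
                .sorted()                        // RxJava: sorted
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(demo(Stream.of("alpha", "beta", "beta", "gamma"))); // [5, 5]
        // Reducing operators also map onto Stream terminal operations:
        long count = Stream.of(1, 2, 3).count();                 // RxJava: count
        int  sum   = Stream.of(1, 2, 3).reduce(0, Integer::sum); // RxJava: reduce
        System.out.println(count + " " + sum);                   // 3 6
    }
}
```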
5 Not So Obvious Things About RxJava

- Error control [qa]

- Dealing with RxJava's never-ending Observables [troubleshooting]
Awaitility(Async→Sync) Tests
- Awaitility: DSL allowing you to express async results (test expectations) easily,
  removing the complexity of handling threads, timeouts, concurrency issues, ...
  that obscures test code.

- Ex 1:
  public void updatesCustomerStatus() {
      // Publish an (async) message to a message broker, then
      // wait for the async result:
    Bºawait().atMost(5, SECONDS).until(customerStatusIsUpdated())º;
  }
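Under the hood this DSL is essentially a poll loop. A hand-rolled, JDK-only equivalent (MiniAwait is a hypothetical helper, far less capable than the real library) shows what Awaitility abstracts away:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.BooleanSupplier;

public class MiniAwait {
    // Poll 'condition' until it becomes true or 'timeoutMillis' elapses —
    // roughly what await().atMost(...).until(...) does for you.
    static boolean await(BooleanSupplier condition, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) return true;
            TimeUnit.MILLISECONDS.sleep(10);          // poll interval
        }
        return condition.getAsBoolean();              // last chance at timeout
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicBoolean statusUpdated = new AtomicBoolean(false);
        new Thread(() -> {                            // simulated async update
            try { TimeUnit.MILLISECONDS.sleep(50); }
            catch (InterruptedException ignored) { }
            statusUpdated.set(true);
        }).start();
        System.out.println(await(statusUpdated::get, 5000));   // true
    }
}
```

Awaitility adds on top of this: configurable poll intervals/delays, Hamcrest matchers, exception ignoring, and proper failure messages.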
(Necessarily incomplete, but still pertinent, list of core developers and companies)
Tim Fox     :  Initiated VertX in 2012
Julien Viet :  Project lead (as of 2020), RedHat, Marseille
               He is also core developer of Crash @[]

Julien Ponge:@[]
               Author of VertX in Action
Many others :@[]
Vert.X Summary
- Vert.X guide for java devs @[]
- VertX maven starter        @[]
- Examples for amqp-bridge,  @[]
  grpc, core, docker,
 ºgradle/mavenº, ignite, jca,
  jdbc, kafka, kotlin, mail,
  metrics, mqtt, openshift3,
   redis, resteasy, rx,
  service-proxy, shell, spring,
  sync, unit, web/web-client ...
- Webºserver examplesº       @[]
 ºangular,ºauth, authjdbc,
  blockinghandler, chat,
  cookie, cors,
  custom_authorisation, form,
  helloworld, http2, , jwt,
  mongo, react, realtime, rest,
  sessions, staticsite,
  templating, upload, vertxbus
- Web/ºJDBCºserver examples @[]

REF: @[]
-ºreusable unitº of Bºdeploymentº
                  - Can be passed some Gºconfigurationº like
                    credentials, network address,...
                  - can be deployed several times
                  - A verticle can deploy other verticles.

Oº|verticle|º1 ←─→ 1|event─loop|1 ←──────→   1|Thread|
  └────┬───┘        └────┬─────┘              └──┬───┘
       ^            "input" event like        Must not handle I/O thread-blocking
       │            network buffers,          or CPU intensive operations
                    timing events,            'executeBlocking' can be used
       │            verticles messages, ...   to offload the blocking I/O operations
˂˂io.vertx.core.                              from the event loop to a worker thread
  AbstractVerticle˃˃ Base Class
  ════════════════ @[]
-Oº.start()º ← life-cycle sync/async method to be overridden
-Oº.stop ()º ← life-cycle sync/async method to be overridden
-Oº.vertxº   ← - Points to the BºVert.x environment where the verticle is deployedº
             · - provides methods to create HTTP/TCP/UDP/... servers/clients. Ex:
             ·   · io.vertx.core.http.HttpServer server  = thisOº.vertxº.createHttpServer();
             · - provides access to the event bus.
             ·   Ex:
             ·    ºSENDING VERTICLEº                        │ºRECEIVING VERTICLEº
             ·   ┌──────────────────────────────────────────┼────────────────────────────────────────
             ·   │ ...                                      │ ...
             ·   │Oºvertxº.eventBus()                       │ public void onMessage(
             ·   │   .request(wikiDbQueue,                  │          Message message)
             ·   │       jsonObject, options ,              │ {
             ·   │       └────┬───┘  └──┬──┘                │   String action = message.
             ·   │            ·      headers                │                headers().get("action");
             ·   │            ·    + payload codecs         │
              ·   │            ·    + timeouts               │   switch (action) {
             ·   │            ·      └──┬──┘                │     case "action1":
              ·   │// Usually jsonObject contains the data   │       ...
             ·   │// and an "action" header the action to   │       message.reply(
             ·   │// be executed by the receiving verticle  │           new JsonObject()
             ·   │       reply -˃ {                         │           .put("key1", value1));
             ·   │     if (reply.succeeded()) {             │       break;
             ·   │       ...                                │     case ...:
             ·   │     } else {                             │       ...
             ·   │       ...                                │     default:
              ·   │     }                                    │
             ·   │   });                                    │         ErrorCodes.BAD_ACTION.ordinal(),
             ·   │                                          │         "Bad action: " + action);
             ·   │                                          │   }
             ·   │                                          │ }

-Oº.config()º← - accessors to some deployment configuration to allow passing Gºexternal configurationº
                 │ public static final String CONFIG_WIKIDB_QUEUE = "wikidb.queue";
                 │ ...
                 │ wikiDbQueue =Oºconfig()º.getString(CONFIG_WIKIDB_QUEUE, "wikidb.queue");
                 │                             ^^^^^^                      ^^^^^^^^^^^^^^
                                       or Integers, booleans,             default value used
                                       complex JSON data, ...             when the key is missing

import io.vertx.core.AbstractVerticle;
public class MainVerticle extends AbstractVerticle {

  public void start(Future˂Void˃ startFuture) {
                    ^^^^^^^^^^^^^^^^^^^^^^^^
                    async version; the sync version start() takes no params.
                    (Vert.x 4 uses Promise˂Void˃ instead of Future˂Void˃)

Event Bus
- Main tool for communication between verticles usingºmessagesºand one of:
  - point-to-point messaging
  - request-response messaging
  - publish/subscribe for broadcasting messages

   |verticle 01|          event─bus          |verticle 02|
   (HTTP server)                             (DDBB client)
     │                    ║       ║                │
     ├─── user 1234 ? ───→║       ║                │
     │                    ║       ║── user 1234 ?─→│
     │                    ║       ║                ├── ....─→ (DDBB)
     │                    ║       ║                │←─ ....
     │                    ║       ║←── user 1234 ──┤
     │←── user 1234 ──────║       ║                │
          ^^^^^^^^^
   Messages are free-form strings (JSON recommended for
   multi-language support).

- It can be accessed through a (simple) TCP protocol by 3rd-party apps,
  or exposed over general-purpose messaging bridges (AMQP, Stomp, ...).
- Cluster support: messages can be sent to verticles deployed⅋running
  in different application nodes.
- A OºSockJSº bridge allows web applications to seamlessly communicate
  over the event bus from JavaScript running in the browser by receiving
  and publishing messages just like any verticle would do.

threading conf
- By default Vert.x attaches 2 event loops per CPU-core thread.
- VertX Threading Strategies:
  Incoming network data → accepting thread "N"
  accepting thread "N"  → event-loop thread: +event with data
- When a verticle opens a network server and is deployed more than once,
  the events are distributed to the verticle instances in a round-robin
  fashion, which is very useful for maximizing CPU usage with lots of
  concurrent networked requests.
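The request-response flow can be sketched against the Vert.x 4 event-bus API as follows (assumes the io.vertx:vertx-core dependency; the address name "wikidb.queue" is reused from the earlier verticle example):

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class EventBusSketch {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // "Receiving verticle" side: register a consumer on an address.
        vertx.eventBus().consumer("wikidb.queue", message ->
            message.reply(new JsonObject().put("key1", "value1")));

        // "Sending verticle" side: request-response messaging.
        vertx.eventBus().request("wikidb.queue", new JsonObject(), reply -> {
            if (reply.succeeded()) {
                System.out.println(reply.result().body()); // the JSON reply
            }
            vertx.close();
        });
    }
}
```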
VertX 4.0 What's New

└ Ex.1:
  @RunWith(VertxUnitRunner.class)  ← annotation to JUnit tests to allow vertx-unit features
  public class SampleHttpServerTest {

    private BºVertx vertxº;

    @Before public void prepare() { Bºvertxº= Vertx.vertx(); }

    @After public void finish(TestContext Oºcontextº) {
      vertx.close(Oºcontextº.asyncAssertSuccess());
    }

    @Test public void start_http_server
           (TestContext Oºcontextº) {
                // provided by the runner
                // provides access to basic assertions,
                // a context to store data,
                // and several async-oriented helpers

      Async Qºasyncº = Oºcontextº.async();

      vertx.createHttpServer()
        .requestHandler(
           req -˃ req.response().putHeader("Content-Type", "text/plain").end("Ok"))
        .listen(8080, Oºcontextº.asyncAssertSuccess(
          server -˃ {
            WebClient webClient = WebClient.create(vertx);
            webClient.get(8080, "localhost", "/").send(ar -˃ {
              if (ar.succeeded()) {
                HttpResponse˂Buffer˃ response = ar.result();
              Oºcontextº.assertEquals("text/plain", response.getHeader("Content-Type"));
              Oºcontextº.assertEquals("Ok", response.body().toString());
                Qºasyncº.complete();
              } else {
                Oºcontextº.fail(ar.cause());
              }
            });
          }));
    }

└ Ex.2: check/test that a timer task has been called once,
        and that a periodic task has been called 3 times.

  public class WikiDatabaseVerticleTest {
    private Vertx vertx;
    @Before public void prepare(TestContext context) { vertx = ...       }
    @After  public void finish(TestContext context)  { vertx.close(...); }

    @Test /*(Bºtimeout=5000º)*/
    public void async_behavior(TestContext context) { // 1
      Vertx vertx = Vertx.vertx();
      Async a1 = context.async();
      Async a2 = context.async(3); // ← works as a countdown that
                                        completes successfully after 3 calls.
      vertx.setTimer(100, n -˃ a1.complete());
      vertx.setPeriodic(100, n -˃ a2.countDown());
    }

    @Test public void crud_operations(TestContext context) {
      Async async = context.async();

      service.createPage(..., ...,
        context.asyncAssertSuccess(v1 -˃ {
            context.asyncAssertSuccess(json1 -˃ {
              async.complete();  // 1
Maven Bootstrap

Ex: minimally viable wiki written with Vert.x

   └ Features:
     - server-side rendering
     - data persistence through a JDBC connection
       and async ddbb access

   └ Dependencies:
     - Vert.x web: "elegant" APIs to deal with routing, request payloads, etc.
     - Vert.x JDBC client: asynchronous API over JDBC.
     - other libraries for HTML/md rendering

                                          ┌ o configured pom.xml:
 $º$ URL="" º   │   - Maven Shade Plugin configured to create a single
 $º$ URL="${URL}/vertx─maven─starter" º ←─┤     "fat" Jar archive with all required dependencies
 $º$ git clone ${URL} project01       º   │   - Exec Maven Plugin to provide the exec:java goal
 $º$ cd project01                     º   │     that in turns starts the application through the
                                          │     Vert.x io.vertx.core.Launcher class.
                                          │     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                          │     (equivalent to running using the vertx cli tool)
                                          │ o sample verticle
                                          │ o unit tests
                                          └ o auto compile+redeploy on code changes.
                                              (adjust $VERTICLE in script to match main verticle)
 $º$ mvn package exec:java           º ← check that maven install is correct

Tip: The SQL database modules supported by the Vert.x project do not currently
   offer anything beyond passing SQL queries (e.g., an object-relational mapper)
   as they focus on providing asynchronous access to databases.
   However, nothing forbids using more advanced modules from the community,
   and we especially recommend checking out projects like the jOOQ generator
   for Vert.x.

$ mvn clean package
$ java -jar target/project01-SNAPSHOT-fat.jar
Create HttpServer

   import io.vertx.ext.web.handler.BodyHandler;
   public class MainVerticle ºextends io.vertx.core.AbstractVerticleº {
      private io.vertx.core.http.HttpServer server;  // create inside start(): 'vertx' (defined
                                                     // in the parent class) is not injected yet
                                                     // at field-initialization time

      public void start(Future startFuture) {
          server = vertx.createHttpServer();
          Json.mapper.registerModule(new JavaTimeModule());
          FileSystem vertxFileSystem = vertx.fileSystem();
          vertxFileSystem.readFile("swagger.json", readFile -> {
              if (readFile.succeeded()) {
                  // Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
                  // SwaggerManager.getInstance().setSwagger(swagger);
                  Router router = Router.router(vertx);
                  router.get   (ROUTE_ENTITY01+"/:id")                              .handler(this::GetEntity01Handler);
                  router.post  (ROUTE_ENTITY01       ).handler(BodyHandler.create()).handler(this::PostEntity01Handler);
                                                               └────────┬─────────┘          └───────────┬───────────┘
                                                 decode POST req.body (forms,...) to         handler signature:
                                                 Vert.x buffer objects                       void functionName(ºRoutingContext contextº)
                  router.delete(ROUTE_ENTITY01+"/:id")                              .handler(this::DeleEntity01Handler);
        "Starting Server... Listening on :" + RC.port);
                  server.requestHandler(router).listen(
                           8080,                                  ← Different deployments can share the port. Vertx will
                           /* AsyncResult */ar -˃ {                 round-robin incoming connections among them.
                            if (ar.succeeded()) {
                    "HTTP server running on port 8080");
                              startFuture.complete();
                            } else {
                              LOGGER.error("Could not start a HTTP server", ar.cause());
                              startFuture.fail(ar.cause());
                            }
                           });
              } else {
Reusable Verticles

- resulting verticles will not have direct references to each other
  as they will only agree on destination names in the event bus as well
  as message formats.
- messages sent on the event bus will be encoded in JSON.

- Ex:
@[]      ← its sole purpose is to bootstrap the app
                          and deploy other verticles.

  public class MainVerticle extends AbstractVerticle {
    public void start(Promise promise) {
      Promise promise01 = Promise.promise();
      vertx.deployVerticle(new WikiDatabaseVerticle(), promise01);
      promise01.future().compose(id -˃ {
        Promise promise02 = Promise.promise();
        vertx.deployVerticle(
          "",                                       ← verticle name (elided)
          new DeploymentOptions().setInstances(2),  ← deploy 2 instances
          promise02);
        return promise02.future();
      }).setHandler(ar -˃ {
        if (ar.succeeded()) { promise.complete();       }
        else                { promise.fail(ar.cause()); }
      });
    }
  }
VertX Continuation
VertX (TODO)
• RxJava Integration:

• Vertx+k8s:
• SockJS:
  - Event-bus bridge allowing web apps to seamlessly communicate over the
    event bus from JavaScript running in the browser, receiving and
    publishing messages just like any verticle would do.
• Angular Client:

 Cookbook recipe:
    Vertx vertx = Vertx.vertx();
    HttpServer server = vertx.createHttpServer();
    server.requestHandler(req -˃ {
      req.response().end("Hello world");
    }).listen(8080);
  There are many features and modules that we haven’t covered in this guide, such as:
    - Clustering using Hazelcast, Infinispan, Apache Ignite or Zookeeper,
    - Exposing⅋consuming over HTTP/2, possibly (but not necessarily) with gRPC
    - Using NoSQL databases such as MongoDB or Redis.
    - Sending emails over SMTP.
    - Messaging with AMQP, Stomp, Kafka, MQTT or RabbitMQ.
    - Using OAuth2 authentication from custom and popular providers.
    - Vert.x sync for writing blocking-style code that is later
      turned into non-blocking fiber code at runtime.
    - Publishing and discovering micro-services from registries, for
      instance when deploying on cloud environments like OpenShift,
    - Exposing metrics and health checks.

• What's new Vert.X 4.0: (2020-12-09):
Other frameworks
JavaLite Async
- lightweight system for processing asynchronous jobs.
- Use Cases:
  - website that needs to run batch process in background
  - We receive a batch of inputs to be processed on a "best-effort"
    basis, but reply as soon as possible to the client that the
    input batch is ready for processing.

- easy-to-use "Wrapper" on top of Apache Artemis (ActiveMQ "next Gen");
  Async adds an abstraction layer based on a Command Pattern,
  which makes it trivial to add asynchronous processing.

- Embedded broker instance with reasonable defaults

- Ex:
  Asyncºasyncº= new Async(            ←BºCREATE one or more Queuesº
            "/opt/project1",          ← place to store persistent messages
            new QueueConfig(
               "MESSAGES_QUEUE",      ← queue 1 name
               new CommandListener(),
               5),                    ← number of listener threads
            new QueueConfig(
                  "ERRORS_QUEUE",     ← queue 2 name (no limit on the
                  new ErrorListener(),                number of queues)

   public class HelloCommand             ←BºCreate a commandº
       extends Command {
       private String message;

       public HelloCommand(String message) {
         this.message = message;

       public HelloCommand() {}          ← Rºrequired no-arg constructor (precludes final fields)º

       public void execute() {           ← command logic run by the listener
           System.out.println(message);
       }
   }

   for(int i = 0; i < 100; i++){         ←Bºsending commands asyncº
      ºasyncº.send("MESSAGES_QUEUE",
         new HelloCommand("Hello Number "+ i));
   }
   Output will be similar to
   → Hello Number 0
   → Hello Number 1
   → ...

   List˂Command˃ topCommands    ←BºPeek (vs Consume) 3 "top"º
      = ºasyncº.getTopCommands(   commands from "ERRORS_QUEUE"
           3, "ERRORS_QUEUE");

- Commands can also be read and processed synchronously (one at a
  time) from an individual queue, without a listener.
  Ex: Qºwhen you do not want to process errors automaticallyº. To do so:

  ErrorCommand errorCommand =           ←BºConsume messageº
  ... // Process manually

BºText vs Binary messagesº
  - To be compatible with JMS, the communication protocol is limited to:
    - javax.jms.TextMessage   ← Default mode.
    - javax.jms.BytesMessage  ← async.setBinaryMode(true);

    In both cases, the Rºserialization of a command is first done to XMLº
    with the use of XStream.
  - If a given command has a transient field that must NOT be serialized,
    use the field annotation @XStreamOmitField to ignore it.

  -RºWARN:º Do not switch between modes while persistent
            messages are stored in your queues.

BºCommands with DB accessº
  - If queue processing requires a database connection, DBCommandListener
    can be used:
    Async async = new Async(filePath, false,
        new QueueConfig("MESSAGES_QUEUE",
            new DBCommandListener(              ← If a JNDI connection is set up, the
                "java:comp/env/jdbc/conn01"),     listener will find and open it.
            5));                                  Check your Tomcat/JBoss/... container
                                                  documentation to set it up properly.

Bº(Artemis) Config APIº
  - For complex app configuration, the underlying Artemis API
    can be used:
       artemisConfig = async.getConfig();

- See also filequeue in this map: faster, but doesn't support
  queueing to DDBBs.
FileQueue
- KISS alternative using MVStore
- All producers and consumers run within a JVM.
- H2 MVStore DB used for storage.
- Queue items are BºPOJOs serialized into Json using jacksonº.
-Gºfaster than JavaLite due to performance shortcutº:
  -BºFile Queue will transfer queued items directly to consumersº
   Bºwithout hitting the database provided there are consumers  º
   Bºavailable, otherwise, message will be persistedº
-RºDoesn't support persistence to JNDI DDBBº
- Fixed and exponential back-off retry strategies are supported.

  - maven/gradle package dependency:

  - Implement POJO extending FileQueueItem
  - Implement consume(FileQueueItem) on ˂˂Consumer˃˃ to process items
  - Instantiate a FileQueue object and call config() to configure
  - Call startQueue() to start the queue
  - Call stopQueue() to stop the queue processing
  - Call FileQueue.destroy() to shutdown all static threads (optional)

BºExample Implementation:º
  └ Queue usage example:
    FileQueue queue = FileQueue.fileQueue();
    FileQueue.Config config = FileQueue.
          new TestConsumer()
        .maxQueueSize(MAXQUEUESIZE)           // ← queueItem() will block until a slot becomes
                                                   available or ExceptionTimeout is thrown
        .maxRetries(0)                        // ← 0 = infinite retries
        .persistRetryDelay(...);              // ← delay between DDBB scans.
    queue.startQueue(config);                 // ← Start queue
    for (int i = 0; i < ROUNDS; i++)
      queue.queueItem(                        // ← Submit items
        new TestFileQueueItem(i));
    queue.stopQueue();                        // ← stopQueue

  └ Consumer implementation:
    static class TestConsumer implements Consumer {

        public TestConsumer() { }

        public Result consume(FileQueueItem item)
                throws InterruptedException {
            try {
                TestFileQueueItem retryFileQueueItem =
                    (TestFileQueueItem) item;
                if (retryFileQueueItem.getTryCount() == RETRIES)
                    return Result.SUCCESS;
                return Result.FAIL_REQUEUE;
            } catch (Exception e) {
                logger.error(e.getMessage(), e);
                return Result.FAIL_NOQUEUE;
            }
        }
    }

  └ FileQueueItem implementation:

    import com.stimulussoft.filequeue.*;

    static class TestFileQueueItem extends FileQueueItem {
      Integer id;
      public TestFileQueueItem() { super(); };
      private TestFileQueueItem(Integer id) {
          = id;
      }
      public String toString() { return String.valueOf(id); }
      public Integer getId()   { return id; }
      public void setId(Integer id) { = id; }
    }

  └ File Caching:
    - If there is the need to cache a file to disk or perform resource
      availability checks prior to items being placed on the queue,
      implement availableSlot() on the QueueCallback interface. This method
      is called as soon as a slot becomes available, just before the item
      is placed on the queue. It may be used to cache a file to disk, or
      perform resource availability pre-checks (e.g. disk space check).
Cost of software failures
$312 billion per year: global cost of software bugs (2013 estimate)
$300 billion dealing with the Y2K problem

$440 million loss by Knight Capital group Inc. in 30 minutes, August 2012
$650 million loss by NASA Mars missions in 1999; unit conversion bug
$500 million loss on Ariane 5 maiden flight in 1996; 64-bit-float to 16-bit-integer conversion bug
"$Nightmare" billion Boeing 737Max

2011: Software caused 25% of all medical device recalls.
Checker framework (Java 8+)
ºfix errors at compile timeº (vs later on at execution/runtime)

º COMPARED TO ALTERNATIVES (SpotBugs, Infer, Jlint, PMD, ...)º
                 ┌─────────────┬────────┬────────┐                 ┌──────────────────┬─────────────────────┐
                 │ Null Pointer│        │        │                 │ Verification     │ Bug─Finding         │
                 │    errors   │ False  │Annotat.│                 │ (Infer,SpotBugs,    │
                 │             │        │        │                 │                  │  SonarQube,...      │
                 │Found│ Missed│warnings│written │  ┌──────────────┼──────────────────┼─────────────────────┤
  ┌──────────────┼─────┼───────┼────────┼────────┤  │Goal          │ prove that       │ find some bugs      │
  │ºChecker FW.º │9    │ 9     │  4     │  35    │  │              │ no bug exits     │ at "low cost"       │
  ├──────────────┼─────┼───────┼────────┼────────┤  ├──────────────┼──────────────────┼─────────────────────┤
  │SpotBugs      │0    │ 9     │  1     │  0     │  │Check specifi─│ user provided    │ infer likely specs  │
  ├──────────────┼─────┼───────┼────────┼────────┤  │cations       │                  │                     │
  │Jlint         │0    │ 9     │  8     │  0     │  ├──────────────┼──────────────────┼─────────────────────┤
  ├──────────────┼─────┼───────┼────────┼────────┤  │False         │ None!!!          │ acceptable          │
  │PMD           │0    │ 9     │  0     │  0     │  │negatives     │                  │                     │
  ├──────────────┼─────┼───────┼────────┼────────┤  ├──────────────┼──────────────────┼─────────────────────┤
  │Eclipse 2017  │0    │ 9     │  8     │  0     │  │False         │ manually supress │ heuristics focus on │
  ├──────────────┼─────┼───────┼────────┼────────┤  │positives     │ warnings         │ most important bugs │
  │IntelliJ      │0    │ 9     │  1     │  0     │  ├──────────────┼──────────────────┼─────────────────────┤
  │+@NotNull 2017│3    │ 6     │  1     │ 925+8  │  │Downside      │ user burden      │ missed bugs         │
  └──────────────┴─────┴───────┴────────┴────────┘  └──────────────┴──────────────────┴─────────────────────┘

RºPROBLEM:º                                          │BºSOLUTION:º
   STANDARD JAVA TYPE SYSTEM IS NOT GOOD ENOUGH      │  Java 8+ allows to compile programs
   - Next examples compile, but fail at runtime:     │  using Oº"PLUGGABLE TYPE SYSTEMs"º,
     Ex.1:                                           │  allowing to apply stricter checks
       System.console().readLine(); ←RºNullPointerº  │  than default ones in compiler like
     Ex.2:                                           │  Ex:
       Collections.emptyList()                       │  $ javac º-processor NullnessCheckerº
               .add("one"); ←RºUnsupported Operationº│
      Ex.3:                                          │   PLUGGABLE TYPE SYSTEM COMPILATION SCHEMA:
       Date key1 = new Date();                       │           (1)           No errors (2)
       myMap.put(key1, "now");                       │    Source ───→ Compiler ────┬───→ Executable
       myMap.get(key1);    ←  returns "now"          │      ^            │         │(2)       ^
       key1.setSeconds(0); ←RºMutate keyº            │      │            v         v          │
       myMap.get(key1);    ←Rºreturns nullº          │      │         Standard  OºOptionalº   │ Guaranteed
                                                     │      │         Compiler  OºType    º───┘ Behaviour
                                                     │      │         Errors    OºChecker º
                                                     │      │                      │
                                                     │      │                      v
                                                     │      └────────────────── Warnings :
                                                     │     (2) pluggable type system allows generation
                                                     │         of executable to allow CI continue the
                                                     │         pipeline with further tests (functional
                                                     │         testing, configuration testing, ...)
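Ex.3 above can be reproduced with plain JDK code: the mutation compiles cleanly but silently breaks the map lookup (setTime is used here instead of the deprecated setSeconds; lookupAfterMutation is an illustrative helper):

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

public class MutableKeyDemo {
    // Mutating a key after insertion changes its hashCode, so the
    // HashMap can no longer find the entry. javac raises no warning.
    public static String lookupAfterMutation() {
        Map<Date, String> myMap = new HashMap<>();
        Date key1 = new Date(1_000_000L);
        myMap.put(key1, "now");
        String before = myMap.get(key1);   // "now"
        key1.setTime(0L);                  // mutate the key in place
        String after = myMap.get(key1);    // null: hashCode changed
        return before + "/" + after;
    }

    public static void main(String[] args) {
        System.out.println(lookupAfterMutation()); // → now/null
    }
}
```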

•º# Checker Framework Installation #º

(See new releases/versions at @[])

 ºSTEP 01:º                           │ ºSTEP 02:º
  Add next pom.xml dependencies like: │  tweak ºmaven-compiler-pluginº to use
  ˂dependency˃                        │  Checker Framework as a pluggable Type System:
      ˂groupId˃                       │  ˂plugin˃
        org.checkerframework          │    ˂artifactId˃ºmaven-compiler-pluginº˂/artifactId˃
      ˂/groupId˃                      │    ˂version˃3.6.1˂/version˃
      ˂artifactId˃                    │    ˂configuration˃
        checker-qual                  │      ˂source˃1.8˂/source˃
      ˂/artifactId˃                   │      ˂target˃1.8˂/target˃
      ˂version˃2.11.0˂/version˃       │      ˂compilerArguments˃
  ˂/dependency˃                       │        ˂Xmaxerrs˃10000˂/Xmaxerrs˃
  ˂dependency˃                        │        ˂Xmaxwarns˃10000˂/Xmaxwarns˃
      ˂groupId˃                       │      ˂/compilerArguments˃
        org.checkerframework          │     º˂annotationProcessors˃º ← "==" javac -processor ...
      ˂/groupId˃                      │        ˂annotationProcessor˃
      ˂artifactId˃                    │      org.checkerframework.checker.nullness.NullnessChecker
        checker                       │         ˂/annotationProcessor˃
      ˂/artifactId˃                   │         ˂annotationProcessor˃
      ˂version˃2.11.0˂/version˃       │      org.checkerframework.checker.interning.InterningChecker
  ˂/dependency˃                       │         ˂/annotationProcessor˃
  ˂dependency˃                        │         ˂annotationProcessor˃
      ˂groupId˃                       │      org.checkerframework.checker.fenum.FenumChecker
        org.checkerframework          │         ˂/annotationProcessor˃
      ˂/groupId˃                      │         ˂annotationProcessor˃
      ˂artifactId˃                    │      org.checkerframework.checker.formatter.FormatterChecker
        jdk8                          │         ˂/annotationProcessor˃
      ˂/artifactId˃                   │     º˂/annotationProcessors˃º
      ˂version˃2.11.0˂/version˃       │      ˂compilerArgs˃
  ˂/dependency˃                       │        ˂arg˃-AprintErrorStack˂/arg˃
                                      │        ˂arg˃-Awarns˂/arg˃
                                      │      ˂/compilerArgs˃
                                      │    ˂/configuration˃
                                      │  ˂/plugin˃

(ºSTEP 03:º Manually add extended type annotations to your java code)

•º# Usage #º

- BºAvoiding Nullsº

 ºCHECKS  ON TYPESº                              │ºCHECKS ON FUNCTION DECLARATIONº
                                                 │                   ┌────┬────┬───────────────────────────┐
                                                 │                   │FUNC│FUNC│DESCRIPTION                │
  private static int func1                       │                   │PRE─│POST│                           │
    (º@NonNullº String[] args)                   │                   │COND│COND│                           │
  {                                              │ ┌─────────────────┼────┼────┼───────────────────────────┤
      return args.length;                        │ │@RequiresNonNull │X   │    │variables areºexpectedº to │
  }                                              │ │                 │    │    │be non─null when invoked.  │
                                                 │ ├─────────────────┼────┼────┼───────────────────────────┤
  public static void main                        │ │@EnsuresNonNull  │    │X   │variables areºguaranteedºto│
    (º@Nullableº String[] args) {                │ │                 │    │    │be non─null on return.     │
      ...                                        │ ├─────────────────┼────┼────┼───────────────────────────┤
      func1(args);                               │ │@EnsuresNonNullIf│    │X   │variables areºguaranteedºto│
  }         ^^^^                                 │ │                 │    │    │benon─null on ret.true/fals│
      [WARNING] ... [argument.type.incompatible] │ └─────────────────┴────┴────┴───────────────────────────┘
       incompatible types in argument.           │
       ºfound    : nullº                         │
       ºrequiredº: @Initializedº@NonNullº...     │
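Without the checker, the same non-null contract is only enforceable when the program runs. A JDK-only comparison (sketch; func1 mirrors the function above):

```java
import java.util.Objects;

public class NullContractDemo {
    // Runtime counterpart of @NonNull: Objects.requireNonNull fails at
    // run time, whereas the Nullness Checker fails the build.
    public static int func1(String[] args) {
        Objects.requireNonNull(args, "args must be non-null");
        return args.length;
    }

    public static void main(String[] args) {
        System.out.println(func1(new String[] { "a", "b" })); // → 2
        try {
            func1(null); // with @NonNull this call would not even compile
        } catch (NullPointerException e) {
            System.out.println("caught at runtime: " + e.getMessage());
        }
    }
}
```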

- BºConvert String constants into Safe Enum with Fenumº
                                                (Fake enum)
  static final @Fenum("country") String ITALY = "IT";
  static final @Fenum("country") String US = "US";
  static final @Fenum("planet") String MARS = "Mars";
  static final @Fenum("planet") String EARTH = "Earth";

  void greetPlanet(@Fenum("planet") String planet){
      System.out.println("Hello " + planet);
  }

  public static void main(String[] args) {
      obj.greetPlanet(US);    ←----  [WARNING] ...
  }                                   incompatible types in argument.
                                       found   : @Fenum("country") String
                                       required: @Fenum("planet") String

- BºRegular Expressionsº
  @Regex(1) private static String FIND_NUMBERS = "\\d*";
  ^^^^^^^^^                                      ^^^^^^
  Force String variable                       [WARNING] ...
  to store a regex with                       incompatible types in assignment.
  at least one matching                         found   : @Regex String
  group                                         required: @Regex(1) String
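The runtime failure that the @Regex(1) annotation rules out, shown with plain java.util.regex (firstGroup is an illustrative helper, not checker API):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexGroupDemo {
    // Why @Regex(1) matters: calling group(1) on a pattern with zero
    // capturing groups fails at runtime; the checker catches it at compile time.
    public static String firstGroup(String regex, String input) {
        Matcher m = Pattern.compile(regex).matcher(input);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(firstGroup("(\\d+)", "order 4711")); // → 4711
        // firstGroup("\\d+", "order 4711") would instead throw
        // IndexOutOfBoundsException: the pattern has no group 1.
    }
}
```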

- BºValidating tainted (non-trusted) inputº

   String validate (String sqlInput) {
      // Do any suitable checks, throw on error
      @SuppressWarnings("tainting")      ← "swear" that the developer has
      @Untainted String result = ...;       verified input correctness
      return result;

  void execSQL(º@Untaintedº String sqlInput) {

  public static void main(String[] args) {
      obj.execSQL(args[0]);            ← warning at compile time
      obj.execSQL(validate(args[0]));  ← OK: validate un-taints the input

- BºMark as Immutableº
 º@ImmutableºDate date = new Date();
  date.setSeconds(0);   ← Rºcompile-time errorº

-ºAvoiding (certain) concurrency errorsº

  Lock Checker enforces a locking discipline:
  "which locks must be held when a given operation occurs"

                                              │                 ┌────┬────┬───────────────────────────┐
  º@GuardedBy("lockexpr1","lockexpr2",...)º   │                 │FUNC│FUNC│DESCRIPTION                │
             int var1 = ....;                 │                 │PRE─│POST│                           │
   ^^^^^^^^^^                                 │                 │COND│COND│                           │
  a thread may dereference the value referred │┌────────────────┼────┼────┼───────────────────────────┤
  to by var1 only when the thread holds all   ││@Holding        │X   │    │All the given lock exprs   │
  the locks that ["lockexpr1",...] currently  ││(String[] locks)│    │    │are held at method call    │
  evaluates to.                               │├────────────────┼────┼────┼───────────────────────────┤
                                              ││@EnsuresLockHeld│    │X   │Ensures locks are locked on│
                                              ││(String[] locks)│    │    │return, e.g. lock acquired │
                                              ││                │    │    │by ReentrantLock.lock().   │
                                              │├────────────────┼────┼────┼───────────────────────────┤
                                              ││@EnsuresLockHel─│    │X   │Ensures locks are locked on│
                                              ││dIf(String[] l) │    │    │return if method returns   │
                                              ││                │    │    │true|false, e.g. lock      │
                                              ││                │    │    │conditionally acquired by  │
                                              ││                │    │    │ReentrantLock.tryLock().   │
                                              │└────────────────┴────┴────┴───────────────────────────┘
  │º@LockingFreeº      │method does NOT acquire│release locks:         │
  │                    │· it is not synchronized,                      │
  │                    │· it contains NO synchronized blocks           │
  │                    │· it contains no calls to lock│unlock methods  │
  │                    │· it contains no calls to methods that are not │
  │                    │  themselves @LockingFree                      │
  │                    │(@SideEffectFree implies @LockingFree)         │
  │º@ReleasesNoLocksº  │· method maintains a strictly                  │
  │                    │  nondecreasing lock hold count                │
  │                    │  on the current thread for any locks          │
  │                    │  held at method call.                         │
  │º@EnsuresLockHeldº  │method acquires new locks                      │
  │º@EnsuresLockHeldIfº│(default if no @LockingFree│@MayReleaseLocks│  │
  │                    │@SideEffectFree│@Pure used).                   │
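The locking discipline that @GuardedBy/@Holding document can be illustrated with plain java.util.concurrent locks. A minimal sketch (the annotations themselves need the Checker Framework to be verified; here the discipline is only enforced by convention and an assert):

```java
import java.util.concurrent.locks.ReentrantLock;

public class GuardedCounter {
    private final ReentrantLock lock = new ReentrantLock();
    // Discipline: 'count' may only be touched while 'lock' is held.
    // (This is what @GuardedBy("lock") would document and enforce.)
    private int count = 0;

    // Precondition analogous to @Holding("lock"): caller already holds the lock.
    private void incrementLocked() {
        assert lock.isHeldByCurrentThread();
        count++;
    }

    public void increment() {          // acquires/releases the lock itself
        lock.lock();
        try {
            incrementLocked();
        } finally {
            lock.unlock();
        }
    }

    public int get() {
        lock.lock();
        try { return count; } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedCounter c = new GuardedCounter();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) c.increment(); });
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(c.get());   // deterministically 2000 thanks to the discipline
    }
}
```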

-BºFormat String Checkerº
  - prevents use of incorrect format strings in System.out.printf,....

    void printFloatAndInt
         (º@Format({FLOAT, INT})º String Oºformatº)
      System.out.printf(Oºformatº, 3.1415, 42);
-ºI18n Format Checker examplesº
  MessageFormat.format("{0} {1}", 3.1415);
                              second argument missing
  MessageFormat.format("{0, time}", "my string");
                                    cannot be formatted
                                    as Time type.
  MessageFormat.format("{0, thyme}", new Date());
                            unknown format type

  MessageFormat.format("{0, number, #.#.#}", 3.1415);
                              subformat is invalid.

-ºProperty File Checker!!!!º RºTODOº
  -ºIt ensures that used keys are found in the corresponding º
   ºproperty file or resource bundle.º

-ºGUI Effect Checkerº
  - It is difficult for a programmer to remember
    which methods may be called on which thread(s).
    (Main GUI thread or others)
   Checker types the method as if:
   - It accesses no UI elements (and may run on any thread);
   - It may access UI elements  (and must run on the UI thread)

-º(physical) International System (SI) UNIT annotationsº:
  @Acceleration: Meter Per Second Square @mPERs2
  @Angle       : Radians @radians
                 Degrees @degrees
  @Area        : square millimeters @mm2,
                 square meters @m2
                 square kilometers @km2
  @Current     : Ampere @A
  @Length      : Meters @m
                 millimeters @mm
                 kilometers @km
  @Luminance   : Candela @cd
  @Mass        : kilograms @kg
                     grams @g
  @Speed       : meters per second   @mPERs
                 kilometers per hour @kmPERh
  @Substance   : Mole @mol
  @Temperature : Kelvin @K
                 Celsius @C
  @Time        : seconds @s
                 minutes @min
                 hours @h

-º@Unsigned/@Signedº← guarantees values are not mixed

-ºtype alias or typedefº
  share same representation as another type
  but is conceptually distinct from it.
  Ex 1: get sure that Strings representing addresses
        and passwords are NOT mixed
  Ex 2: get sure that integers used for meters are
        not mixed with integers used for centimeters.
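Plain Java has no typedefs, but the same guarantee can be approximated with tiny wrapper types. A sketch (Meters/Centimeters are illustrative names, not a library API):

```java
public final class UnitsDemo {
    // Distinct wrapper types: the compiler now rejects mixing them,
    // which is what a meters/centimeters "type alias" would enforce.
    record Meters(int value) {}
    record Centimeters(int value) {}

    static Centimeters toCentimeters(Meters m) {
        return new Centimeters(m.value() * 100);
    }

    public static void main(String[] args) {
        Meters m = new Meters(3);
        Centimeters cm = toCentimeters(m);
        // toCentimeters(cm);            ← would be a compile-time error
        System.out.println(cm.value()); // 300
    }
}
```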


SpotBugs @[]
• OSS static analysis tool for Java code bugs.
• Well maintained (as of 2021-12-21).
• "Spiritual successor of FindBugs".
• SpotBugs checks for more than 400 bug patterns.
• Works from GUI+cli, maven/gradle/eclipse integration.
• Plugin extension support (just download the plugin jar and it will
  be detected and included):
  · fb-contrib:
  · Security Audits for Java Web applications: @[]
    It can detect 141 different vulnerability types with over
    823 unique API signatures.
• Running SpotBugs.
  · Presetup: compile the Java code to classes or jars. SpotBugs runs
    against compiled class files, using source code as a reference
    when displaying output.

  $º$ java -jar ../spotbugs.jar ... $SPOTBUG_OPTIONS .. º

  JVM OPTIONS
  ===========
  -Xmx1500m                      ← set JVM heap to big/1500MB (recommended)

  STANDARD OPTIONS
  ================
  -textui                        ← vs -gui
  -effort min                    ← := min|less|default|more|max
                                   min: decrease mem use/precision/exec.time
                                   max: increase mem use/precision/exec.time
  -project ...                   ← project *.fb or *.fbp created through
                                   the GUI (fb == FindBugs)
  -pluginList jar1;jar2
  -home $SPOTBUG_DIR             ← ex: /opt/spotbugs
  -adjustExperimental            ← Lower priority of experimental Bug Patterns.
  -workHard                      ← Ensure analysis effort is at least 'default'.
  -sortByClass=dir1/spotbugs.txt ← textui only. Also supported to set
                                   multiple reports.
  -include filter01.xml          ← show only bugs matching the specified filter.
  -exclude filter02.xml          ← *1
  -onlyAnalyze pkg1.*,...        ← Unlike filter, analysis is skipped for any
                                   other class. WARN: some detectors may
                                   produce inaccurate results.
  -low                           ← Report all bugs.
  -medium                        ← Report medium and high priority bugs.
  -high                          ← Report only high priority bugs.
  -relaxed                       ← suppress heuristics, avoiding false positives.
  -html=../report.html           ← Output HTML.
                                   RºNOTEº: It is -html=value while other flags
                                   work like -flag value (without the "=" sign)
                                   Alternatives:
                                   -html:fancy.xsl=...      (DOM+JS for
                                                             navigation + CSS)
                                   -html:fancy-hist.xsl=... (fancy.xsl evolution)
                                   Other output formats include
                                   xml/sarif/emacs/xdocs.
  -nested false                  ← disable scanning of nested jars (def: enabled)
  -auxclasspath ...              ← It should include all jars/dirs containing
                                   classes that are part of the program being
                                   analyzed but that you do not want analyzed
                                   for bugs.
  -auxclasspathFromInput         ← Read auxclasspath from STDIN, line by line.
  -auxclasspathFromFile
  -analyzeFromFile fileI         ← Read input file list from file line-by-line.
  -showPlugins                   ← list available detector plugins.

  OUTPUT CUSTOMIZATION OPTIONS
  ============================
  -timestampNow                  ← Set results timestamp to current time.
  -quiet                         ← Suppress error messages.
  -longBugCodes                  ← Report long bug codes.
  -progress                      ← Display progress in console.
  -release $name                 ← Set release name in report.
  -maxRank $rank                 ← Only report issues with a bug rank at least
                                   as scary as that provided.
  -dontCombineWarnings           ← Don't combine warnings differing only in
                                   line num.
  -train[:outputDir]             ← Save training data (experimental).
  -useTraining[:inputDir]        ← Use training data (experimental).
  -redoAnalysis $filename        ← Redo using config. from previous analysis.
  -sourceInfo $file              ← Specify source info file (line numbers for
                                   fields/classes).
  -projectName $name             ← Descriptive name of project.
  -reanalyze $filename           ← Redo analysis in provided file.

  OUTPUT FILTERING OPTIONS
  ========================
  -bugCategories cat1,cat2       ← Only report bugs in those categories.
  -excludeBugs baseline_bug      ← Exclude bugs that are also reported in
                                   baseline xml output.
  -applySuppression              ← Exclude bugs matching suppress.filter
                                   from *fbp.

  DETECTOR (VISITOR) CONFIGURATION OPTIONS
  ========================================
  -visitors v1,v2,...            ← Run only named visitors.
  -omitVisitors v1,v2,...        ← Omit named visitors.
  -chooseVisitors +v1,-v2,.      ← enable/disable detectors.
  -choosePlugins +p1,-p2,        ← Selectively en/dis-able plugins.
  -adjustPriority v1=raise|lower,v2=...

  PROJECT CONFIGURATION OPTIONS
  =============================
  -sourcepath $source_path       ← Set source path for analyzed classes.
  -exitcode                      ← Set exit code of process.
  -noClassOk                     ← Output empty warning file if no classes
                                   are specified.
  -xargs                         ← Get list of class/jar files from STDIN.
  -bugReporters name,-name2,..   ← Bug-reporter decorators to explicitly
                                   enable/disable.
  -printConfiguration            ← Print configuration and exit.

*1┌── myIncludeOrExcludeFilter.xml ────
  │ ˂?xml version="1.0" encoding="UTF-8"?˃
  │ ˂FindBugsFilter˃
  │ ˂Match˃              ← match if pattern and/or code and/or category match
  │  ˂Bug
  │   pattern="..."      ← comma-separated list of patterns to match
  │                        ex.: DLS_DEAD_LOCAL_STORE,DM_EXIT,
  │   code="..."         ← coarse-grained matching. comma-sep. list of bug
  │                        abbreviations (DC,DE,IC,IJU,MS,SIC,URF,UUF,XYZ,...)
  │   category="..."     ← even more coarse-grained :=
  │  /˃                    CORRECTNESS, BAD_PRACTICE,
  │                        PERFORMANCE, STYLE, MT_CORRECTNESS
  │                        ((M)ulti(T)hreaded)
  │
  │  ˂Confidence value="1"/˃ ← 1 match high-confidence   warnings,
  │                            2 match normal-confidence warnings
  │                            3 match low-confidence    warnings
  │
  │  ˂Rank value="1" /˃      ← 1  to  4: scariest
  │                            5  to  9: scary
  │                            10 to 14: troubling
  │                            15 to 20: concern
  │
  │  ˂Package name="~.."/˃ ← name/regex. Nested packages are NOT included.
  │  ˂Class  name="~.." /˃ ← name/regex. NOTE: Some bug instances relate
  │                          to 2+ classes.
  │  ˂Source name="..."/˃  ← name/regex. match warnings associated to
  │         └────┬────┘      source file.
  │              regex if prefixed by ~
  │
  │  ˂Method
  │    name="funcXXX"
  │    params="int,..."
  │    returns="void"
  │  /˃
  │
  │  ˂Field name="..." /˃  ← type=... instead of name can also be used
  │  ˂Local name="..." /˃
  │  ˂Type  name="..." /˃
  │ ˂/Match˃
  │
  │ ˂Or˃  ˂Match /˃˂Match /˃ ˂/Or˃
  │ ˂And˃ ˂Match /˃˂Match /˃ ˂/And˃
  │ ˂Not˃ ˂Match /˃ ˂/Not˃
  │
  │ ˂/FindBugsFilter˃
  └────────────────────────────────────────────
• See examples at: @[]
Lint4j @[]
•RºWARNº: Not maintained any more.
• Lint4j ("Lint for Java") is a static Java source and byte code
  analyzer that detects locking and threading issues, performance
  and scalability problems, and checks complex contracts such as
  Java serialization by performing type, data flow, and lock graph
  analysis.
• Usage @[]
  $º$ lint4j \                         º
  $º    -sourcepath src/main \         º ← analyze source
  $º    -classpath lib/bcel.jar:... \  º
  $º    -exclude "packagename" \       º ← package or package-prefix
  $º    "com.jutils.lint4j.*"          º
  $º                                   º
  $º$ lint4j -sourcepath .../log4j.jar º ← analyze binary
  $º    "org.apache.*"                 º
  $º                                   º
  $º$ lint4j \                         º
  $º    -sourcepath ./build/log4j.jar \º ← analyze 2 packages in jar
  $º    org.apache.log4j \             º
  $º    org.apache.log4j.spi           º
  $º                                   º
  $º$ lint4j \                         º
  $º    -sourcepath com/.../ \         º
  $º    com/.../                       º
SonarQube
• SonarQube empowers all developers to write cleaner and safer code.
• Community with 200K+ dev. teams.
• Easily integrates with CI/CD pipelines:
  Jenkins, GitHub Actions, Bitbucket Pipelines, GitLab CI, Azure Pipelines, ...

• A simple client/server working environment can be set up in minutes like:

  ┌── SERVER SIDE: ────────────────────────────────
  │  ======================
  │$º$ docker run -d --name sonarqube               \ º
  │$º  -v $(pwd)/sonarqube_data:/opt/sonarqube/data \ º
  │$º  -p 9000:9000 sonarqube:latest             º
  │• Create a new project through the web console
  │  and annotate the 40-random-chars TOKEN.
  │  (Needed by client-side sonar-scanner).

  ┌─ CLIENT SIDE: ──────────────────────────────────
  │• 1) create a file like:
  │  ┌─────────────────────────────┐
  │  │sonar.projectKey=my:project  ← Required
  │  │                             │
  │  │#sonar.projectName=...       ← def: project key
  │  │#sonar.projectVersion=1.0    ← def: 'not provided'
  │  │sonar.sources=src/java/      ← relative to
  │  │                             │
  │  │#sonar.sourceEncoding=UTF-8  │
  │  │sonar.java.binaries=...      ← compiled *.class dir.
  │  │sonar.java.libraries=...     ← ex: /lib/*.jar,./plugins/lib/*jar
  │  └─────────────────────────────┘
  │• 2) launch client scanner like
  │  2.alt1) using mvn plugin:
  │    $º$ mvn clean verify sonar:sonar \       º
  │    $º    -Dsonar.projectKey=test2 \         º
  │    $º    -Dsonar.host.url=http://...:9000 \ º
  │    $º    -Dsonar.login=$PROJECT_TOKEN       º
  │  2.alt2) using docker
  │    $º $ docker run \                        º
  │    $º   --rm \                              º
  │    $º   -e SONAR_HOST_URL="http:...:9000" \ º
  │    $º   -e SONAR_LOGIN="$PROJECT_TOKEN" \   º
  │    $º   -v "${YOUR_REPO}:/usr/src" \        º
  │    $º   sonarsource/sonar-scanner-cli       º

• See original source for more info about
   SonarScanner Troubleshooting recipes, advanced docker config,
   running,ºcaching scanner filesº, using self-signed certs:
  *1: not needed if the associated project is already linked to a
      git repository on the server side.

• alternatives to SonarQube include:
  - Facebook Infer @[]
    (Static analysis Java/C/...)
  - Scrutinizer:
  - SpotBugs:
  - Eclipse Static Code Analysis:
    Eclipse → Properties → Java → Compiler → Errors/Warnings → Null analysis:
      Null pointer access
      Potential null pointer access
      Redundant null check:
        x Include 'assert' in null analysis
        x Enable annotation-based null analysis
          Violation of null specification
          Conflict between null annotations and null inference
          Unchecked conversion from non-annotated type to @NonNull type
          Problems detected by pessimistic analysis for free type variables
          Unsafe "@NonNull" interpretation of the free type variable from library
          Redundant null annotation:
          "@NonNull" parameter not annotated in overriding method
          Missing "@NonNullByDefault" annotation on package
          x Use default annotations for null specifications (configure)
          x Inherit null annotations
          x Enable syntactic null analysis for fields
      x Treat above errors like fatal compile errors (make compiled code not executable)

-----------------------+-------------------------------------+-------------------------+-------
Tag ⅋ Parameter        | Usage                               | Applies to              | Since
-----------------------+-------------------------------------+-------------------------+-------
@author John Smith     | Describes an author.                | Class, Interface, Enum  |
-----------------------+-------------------------------------+-------------------------+-------
@version version       | Provides software version entry.    | Class, Interface, Enum  |
                       | Max one per Class or Interface.     |                         |
-----------------------+-------------------------------------+-------------------------+-------
@since since-text      | Describes when this functionality   | Class, Interface, Enum, |
                       | has first existed.                  | Field, Method           |
-----------------------+-------------------------------------+-------------------------+-------
@see reference         | Provides a link to other element    | Class, Interface, Enum, |
                       | of documentation.                   | Field, Method           |
-----------------------+-------------------------------------+-------------------------+-------
@param name descrip    | Describes a method parameter.       | Method                  |
-----------------------+-------------------------------------+-------------------------+-------
@return description    | Describes the return value.         | Method                  |
-----------------------+-------------------------------------+-------------------------+-------
@exception class desc  | Describes an exception that may     | Method                  |
@throws    class desc  | be thrown from this method.         |                         |
-----------------------+-------------------------------------+-------------------------+-------
@deprecated descr      | Describes an outdated method.       | Class, Interface, Enum, |
                       |                                     | Field, Method           |
-----------------------+-------------------------------------+-------------------------+-------
{@inheritDoc}          | Copies the description from the     | Overriding Method       | 1.4.0
                       | overridden method.                  |                         |
-----------------------+-------------------------------------+-------------------------+-------
{@link reference}      | Link to other symbol.               | Class, Interface, Enum, |
                       |                                     | Field, Method           |
-----------------------+-------------------------------------+-------------------------+-------
{@value #STATIC_FIELD} | Return the value of static field.   | Static Field            | 1.4.0
-----------------------+-------------------------------------+-------------------------+-------
{@code literal}        | Formats literal text in the code    | Class, Interface, Enum, | 1.5.0
                       | font. It is equivalent to           | Field, Method           |
                       | ˂code˃{@literal}˂/code˃.            |                         |
-----------------------+-------------------------------------+-------------------------+-------
{@literal literal}     | Denotes literal text. The enclosed  | Class, Interface, Enum, | 1.5.0
                       | text is interpreted as not          | Field, Method           |
                       | containing HTML markup or nested    |                         |
                       | javadoc tags.                       |                         |
-----------------------+-------------------------------------+-------------------------+-------

Example:
  /**
   * Short one line description.
   *
   * Longer description. ...
   *
   * And even more explanations to follow
   * in consecutive paragraphs.
   *
   * @author John Bla
   * @param variable Description ....
   * @return Description ....
   */
  public int methodName (...) {
      // method body with a return statement
  }

• (test scoped) dependencies:

• JUnit test summary:

  @DisplayName("Display name Class Level")
  class JUnitAPISummaryTest {
                          // testing Life-Cycle methods:
    @BeforeEach { ... } // ← executed before each @Test in class
    @AfterEach  { ... } // ← executed after  each @Test in class
    @BeforeAll  { ... } // ← executes before all tests.
    @AfterAll   { ... } // ← executes after  all tests.

    @DisplayName("Test parameters with nice names")
    @ParameterizedTest(name = "Use the value {0} for test")
    @ValueSource(ints = { -1, -4 })
    void test01( int number ) {
      Assumptions.assumeTrue (...);            // ← failed assumption aborts test
      Assumptions.assumeFalse(...);            //   Continuing execution will fail.
                                               //   Example: Initial state is not expected one

                                               // Frequently Used:
      Assertions.assertTrue   (param1);        // or assertFalse
      Assertions.assertNull   (param1);        // or assertNotNull
      Assertions.assertEquals (param1,param2); // or assertNotEquals
      Assertions.assertNotSame(param1,param2); // or assertSame
("code must not be reached");

      assertAll("check ...",                   //  grouped
        () -˃ assertEquals(..),
        () -˃ assertEquals(..),
        () -˃ assertEquals(..)

                                               // Collections:
      Assertions.assertArrayEquals   (array1, array2, "...");
      Assertions.assertIterableEquals(list1, list2);

      Assertions.assertTimeout(                // Timeouts:
        Duration.ofMillis(100), () -˃ {
          return "result";
        });

      Throwable exception =                    // Assert exception thrown
        Assertions.assertThrows(
          IllegalArgumentException.class,
          () -˃ {
            throw new IllegalArgumentException("...");
          });

    @Test @Disabled   ...                      // Conditional execution
    @Test @EnabledOnOs({ OS.LINUX }) ...
    @Test @DisabledIfSystemProperty(named = "ci-server", matches = "true")
    @Test @EnabledIfEnvironmentVariable(named = "ENV", matches = "test-env")

    @RepeatedTest(                             // Repeat the test 9 times
      value = 9,
      name = "{displayName}-{currentRepetition}/{totalRepetitions}")
    void valuesCannotPassTen(RepetitionInfo info) { ...

    @ParameterizedTest(
      name = "Test fruit \"{0}\" with rank {1}")
    @CsvSource({
           "'string1', 1",         // ← Repeat test with different input
           "'string2', 2",
    })
    void testWithCsvSource(String fruit, int rank) {
        assertNotEquals(0, rank);

• Testsuites: run tests in multiple test classes and/or different packages.
  @Suite                                  // ← org.junit.platform.suite.api
  @SelectPackages("com.example.tests")    //   (package name is an example)
  public class JUnit5TestSuiteExample
  { }

• See also: @[]

• AssertJ (Fluent Assertions) is composed of several modules:  [TODO]
  - core      module: assertions for JDK types (String, Iterable, Stream, Path, File, Map...)
  - Guava     module: assertions for Guava types (Multimap, Optional...)
  - Joda Time module: assertions for Joda Time types (DateTime, LocalDateTime)
  - Neo4J     module: assertions for Neo4J types (Path, Node, Relationship...)
  - DB        module: assertions for relational database types (Table, Row, Column...)
  - Swing     module provides a simple and intuitive API for functional testing of Swing user interfaces

  import static org.assertj.core.api.Assertions.*;
  assertThat(frodo.getName()).isEqualTo("Frodo"); //  ← basic assertions

  assertThat(frodo.getName())                     // ← chaining string specific assertions
     .startsWith("Fro")
     .endsWith("do");

  assertThat(fellowshipOfTheRingList)            // ← collection specific assertions
     .hasSize(9)                                 //   (there are plenty more)
     .contains(frodo, sam)

  assertThat(frodo.getAge())
     .as("check %s's age", frodo.getName())     // ← as() used to describe the test
     .isEqualTo(33);                            //    will be shown before the error message

  assertThatThrownBy(() -˃ {                    // ← exception assertion (standard style)
     throw new Exception("boom!"); })
     .hasMessageContaining("boom");
  Throwable thrown = catchThrowable(() -˃ {     // ← exception assertion (BDD style)
     throw new Exception("boom!"); });
  assertThat(thrown).hasMessageContaining("boom");

  assertThat(fellowshipOfTheRing)
      .extracting(TolkienCharacter::getName)    // ← 'extracting' feature on Collection
      .doesNotContain("Sauron", "Elrond");      //

  assertThat(fellowshipOfTheRing)
     .extracting("name", "age", "")    // extracting multiple values at once grouped in tuples
     .contains(
        tuple("Boromir",   37, "Man"   ),
        tuple("Sam"    ,   38, "Hobbit"),
        tuple("Legolas", 1000, "Elf"   ) );

  assertThat(fellowshipOfTheRing)
    .filteredOn(                              // ← filtering before asserting
      fellow -˃ fellow.getName().contains("o"))
    .containsOnly(aragorn, frodo);

  assertThat(fellowshipOfTheRing)
    .filteredOn(                              // ← combining filtering and extraction
      fellow -˃ fellow.getName().contains("o"))
    .containsOnly(aragorn, frodo)
    .extracting(
       fellow -˃ fellow.getRace().getName())
    .contains("Hobbit", "Elf");

  // and many more assertions:
  // iterable, stream, array, map, dates, path, file, numbers, predicate, optional ...

Property Testing
• A property test is just something like:
    for all (x, y, ...)
    such as precondition(x, y, ...) holds
    property(x, y, ...) is true
  It checks that a function/program/whatever under test abides by a
  property. Most of the time, properties do not have to go into too
  much detail about the output; they just have to check for useful
  characteristics that must be seen in the output.
• Property based testing has become quite famous in the functional
  world. Mainly introduced by the QuickCheck framework in Haskell,
  it suggests another way to test software.
  IT TARGETS ALL THE SCOPE COVERED BY EXAMPLE BASED TESTING
  (UNIT TESTS TO INTEGRATION TESTS).
• Available automated test techniques, ordered by input scope covered
  vs feature compliance: @[]
  - Example based (unit tests, QA tests, UI tests): partial input
    scope, high feature compliance.
  - Random, full input scope: fuzzing, monkey testing.
  - Static Analysis: mem. leaks, uninitialized mem., nulls,
    threading issues, ...
• Extracted from @[]. List of Java libs for property testing:
  - "FunctionalJava's QuickCheck module". FunctionalJava. 2015-08-14
  - "Quickcheck for Java". 2011-12-09
  - "JCheck". JCheck. 2011-12-09
  - "junit-quickcheck". junit-quickcheck. 2013-07-03
  - "jqwik for the JUnit5 Platform". jqwik. 2017-06-19
  - "Quick Theories property tests Java 8". 2017-10-30
  - "jetCheck prop.-based tests Java 8" (JetBrains). 2018-07-10
• e.g. test using 'jqwik':
  """ fizzBuzz() must return "Fizz" for every divisible-by-3 input """
  → PRECONDITION : Consider inputs in [1 ... 100] divisible by 3
  → POSTCONDITION: fizzBuzz() returns "Fizz"

  import java.util.*;
  import*;
  import*;
  import net.jqwik.api.*;

  class FizzBuzzTests {
    String fizzBuzz(int i) {                  // ← function to test
      boolean div3 = (i % 3 == 0),
              div5 = (i % 5 == 0);
      if (div3 ⅋⅋ div5) return "FizzBuzz";
      if (div3)         return "Fizz";
      if (div5)         return "Buzz";
      return String.valueOf(i);
    }

    @Provide
    Arbitrary˂Integer˃ divisibleBy3() {       // ← Precondition:
      return Arbitraries.integers()           //   input to function
        .between(1, 100)                      //   divides by 3.
        .filter(i -˃ i % 3 == 0);
    }

    final List˂String˃ IN_OUT =
      IntStream.range(1, 100)
        .mapToObj(i -˃ fizzBuzz(i))
        .collect(Collectors.toList());

    @Property                                 // ← Test to execute
    boolean divBy3_starts_with_Fizz(
        @ForAll("divisibleBy3") int i) {      // ← "inject" precondition
      return IN_OUT
        .get(i - 1)
        .startsWith("Fizz");                  // ← check Postcondition
    }
  }
Amazon CodeGuru
- Powered by AI.
- CodeGuru consists of two components
– Amazon CodeGuru Profiler:
helps developers find an application’s most expensive lines
of code along with specific visualizations and recommendations
on how to improve code to save money.
- Amazon CodeGuru Reviewer:
helps enhance the quality of code by scanning for critical issues,
identifying bugs, and recommending how to remediate them.

  ┌→ Write Code
  |    |
  |    v
  |  Review Code  ← CodeGuru Reviewer
  |    |
  |    v
  |  Test App     ← CodeGuru Profiler
  |    |
  |    v
  |  Deploy App
  |    |
  |    v
  |  Run App      ← CodeGuru Profiler
  |    |

- Profiler supports applications written
in Java virtual machine (JVM) languages such as Clojure,
JRuby, Jython, Groovy, Kotlin, Scala, and Java.
- Reviewer’s bug-fixing recommendations currently support
Java code stored in GitHub, AWS CodeCommit, or Bitbucket.
- (compiler) checked vs unchecked (Error, RuntimeException and their subclasses).
- Checked: All except Error, RuntimeException and their subclasses
- Error: Exceptional conditions external to the application.
└─ java.lang.Throwable   ← Only instances of this (sub/)class are thrown
│                       in JVM, can be thrown in throw statement or can
│                       be an argument in catch clause.
├─   java.lang.Exception
│    │
│    ├─Oºjava.lang.RuntimeExceptionº(non─checked)  ← Most common error raised by
│    │                                               developer code
│    │
│    └─  java.lang.Exception        (checked)      ←RºDon't useº. Checked exceptions end up
│                                                     being converted to Runtime Excep.
│                                                     and bloat the code.
└─   java.lang.Error                (non─checked)  ← serious problems that app code
                                                    should not try to catch.
                                                    ThreadDeath error, though a "normal" condition,
                                                    is also a subclass of Error because most apps
                                                    should not try to catch it.
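The checked vs unchecked distinction in a minimal runnable sketch (class and method names are illustrative): the compiler forces a throws/try-catch only for checked exceptions.

```java
public class CheckedVsUnchecked {
    // Checked: extends Exception → callers MUST handle or declare it.
    static class ConfigMissingException extends Exception {
        ConfigMissingException(String msg) { super(msg); }
    }

    static void loadConfig(boolean present) throws ConfigMissingException {
        if (!present) throw new ConfigMissingException("no config");
    }

    static int divide(int a, int b) {
        // May throw ArithmeticException (unchecked: no 'throws' clause needed).
        return a / b;
    }

    public static void main(String[] args) {
        try {
            loadConfig(false);       // will not compile without this try/catch
        } catch (ConfigMissingException e) {
            System.out.println("checked: " + e.getMessage());
        }
        try {
            divide(1, 0);            // compiles fine without any handling
        } catch (ArithmeticException e) {
            System.out.println("unchecked: " + e.getMessage());
        }
    }
}
```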

ºDump Exception stack trace to a String:º
(e.printStackTrace() with no argument dumps to STDERR)
StringWriter writer = new StringWriter();
PrintWriter printWriter = new PrintWriter( writer );
e.printStackTrace( printWriter );
String stackTrace = writer.toString();

"Optional": Avoid Nulls
import java.util.Optional;
Optional optional = Optional.ofNullable(a);          // ← Create an optional s -˃ "RebelLabs:" + s);               // ← Process the optional
optional.flatMap( s -˃ Optional.ofNullable(s));      // ← map a function that returns Optional
optional.ifPresent(System.out::println);             // ← run if the value is there

optional.get();                                      // ← Alt 1: get the value or throw an exception
optional.orElse("Hello world!");                     // ← Alt 2: get the value or default

optional.filter( s -˃ s.startsWith("RebelLabs"));    // ← return empty Optional if not satisfied
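Putting the calls above together in a runnable sketch (findNickname is a hypothetical legacy API that may return null):

```java
import java.util.Optional;

public class OptionalDemo {
    static String findNickname(String user) {        // may return null (legacy API)
        return "frodo".equals(user) ? "Mr. Underhill" : null;
    }

    public static void main(String[] args) {
        // Present value: map() runs, orElse() is ignored.
        String greetingA = Optional.ofNullable(findNickname("frodo"))
                .map(s -> "Hello " + s)
                .orElse("Hello stranger");
        // Null value: the chain short-circuits to the default.
        String greetingB = Optional.ofNullable(findNickname("sauron"))
                .filter(s -> s.startsWith("Mr"))     // empty Optional if not satisfied
                .map(s -> "Hello " + s)
                .orElse("Hello stranger");
        System.out.println(greetingA);               // Hello Mr. Underhill
        System.out.println(greetingB);               // Hello stranger
    }
}
```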

Failsafe
• Fault tolerance and resilience patterns for the JVM.
• lightweight, zero-dependency library for handling failures
  in Java 8+.
• It works by wrapping executable logic with one or more
  resilience policies, which can be combined and composed
  as needed.
• Current policies include:
  · Retry
  · CircuitBreaker
  · RateLimiter
  · Timeout
  · Fallback.

• Current features include:
  · Async Execution
  · Event Listeners
  · Execution Context
  · Execution Cancellation
  · Standalone Execution
  · Strong Typing
  · Extension Points
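The Retry policy idea can be sketched in plain Java. Note this is NOT the library's API, only the pattern such a policy wraps; withRetry and its parameters are illustrative names:

```java
import java.util.function.Supplier;

public class RetryDemo {
    // Minimal Retry policy: re-invoke the wrapped logic up to maxRetries
    // extra times, sleeping delayMs between attempts.
    static <T> T withRetry(int maxRetries, long delayMs, Supplier<T> action) {
        RuntimeException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e;
                try { Thread.sleep(delayMs); }
                catch (InterruptedException ie) { Thread.currentThread().interrupt(); }
            }
        }
        throw last;   // all attempts failed → propagate the last failure
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated flaky action: fails twice, succeeds on the 3rd call.
        String result = withRetry(3, 10, () -> {
            if (++calls[0] < 3) throw new RuntimeException("flaky");
            return "connected";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Composition of policies (Retry around Timeout around Fallback) follows the same wrapping idea: each policy decorates the Supplier produced by the inner one.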
JSR Annotations forºDefect Detectionº
Type Annotations
( TODO: compare how this overlaps with the Checker Framework )

 º@NonNullº     compiler can determine cases where a      │º@(Un)Taintedº         Identity types of data that should
                code path might receive a null value,     │                       not be used together, such as remote
                without ever having to debug a            │                       user input being used in system
                NullPointerException. The compiler        │                       commands, or sensitive information in
                just prints a warning, but it             │
                continues to compile!!!                   │
                                                          │º@mº                   Units of measure ensures that numbers
 º@ReadOnlyº    compiler will flag any attempt to         │                       used for measuring objects are used
                change the object. This is similar to     │                       and compared correctly, or have
                Collections.unmodifiableList, but         │                       undergone the proper unit
                more general and verified at compile time.│                       conversion.
 º@Regexº       Provides compile-time verification        │º@FunctionalInterfaceº indicates that the type declaration
                that a String intended to be used as      │                       is intended to be a functional
                a regular expression is a properly        │                       interface, as defined by the Java
                formatted regular expression.             │                       Language Spec.

└ ºExamplesº:
  @NonNull List˂String˃                              ← A non-null list of Strings.
  List˂@NonNull String˃                              ← A list of non-null Strings.
  @Regex String validation = "(Java|JDK) [7,8]"      ← Check at compile time that this String is a valid regular expression.
  private String getInput(String parameterName){     ← The object assigned to retval is tainted and not for use in sensitive operations.
    final String retval =
      @Tainted request.getParameter(parameterName);
    return retval;

  private void runCommand(@Untainted String… commands){            Each command must be untainted. For example, the previously
    ProcessBuilder processBuilder = new ProcessBuilder(commands);  tainted String must be validated before being passed in here.

    Process process = processBuilder.start();
JBehave
• Testing framework for Behaviour-Driven Development (BDD).
• BDD is an evolution of test-driven development (TDD) and acceptance-test
  driven design, and is intended to make these practices more
  accessible and intuitive to newcomers and experts alike. It shifts
  the vocabulary from being test-based to behaviour-based, and
  positions itself as a design philosophy.

• BDD Summary:
  1) Write story
    Scenario: A trader is alerted of status
    Given a stock and a threshold of 15.0
    When stock is traded at 5.0
    Then the alert status should be OFF
    When stock is traded at 16.0
    Then the alert status should be ON

  2) Map to java

  3) Configure Stories

  4) Run Stories
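Step 2 ("Map to java") binds each story sentence to a method; in a BDD framework this is done with @Given/@When/@Then annotated step classes. A plain-Java sketch of the mapped logic for the story above (class and method names are illustrative):

```java
public class StockSteps {
    private double threshold;
    private String alertStatus = "OFF";

    // Given a stock and a threshold of 15.0
    void givenAStockWithThreshold(double threshold) {
        this.threshold = threshold;
    }

    // When stock is traded at {price}
    void whenStockIsTradedAt(double price) {
        alertStatus = (price >= threshold) ? "ON" : "OFF";
    }

    // Then the alert status should be {status}
    void thenAlertStatusShouldBe(String expected) {
        if (!alertStatus.equals(expected))
            throw new AssertionError("expected " + expected + " but was " + alertStatus);
    }

    public static void main(String[] args) {
        // Replaying the scenario from the story text:
        StockSteps steps = new StockSteps();
        steps.givenAStockWithThreshold(15.0);
        steps.whenStockIsTradedAt(5.0);
        steps.thenAlertStatusShouldBe("OFF");
        steps.whenStockIsTradedAt(16.0);
        steps.thenAlertStatusShouldBe("ON");
        System.out.println("scenario passed");
    }
}
```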
OpenAPI: contract-driven Dev
- "Contract Driven Development" (or API Design First approach) is a methodology
  that uses declarative API Contracts to enable developers to efficiently design,
  communicate, and evolve their HTTP APIs, while automating API implementation
  phases where possible.
MVN Summary
• External Links
  - Artifact Search Engine: @[]
  - Online doc from Maven : @[]
    Ex: Show Bouncy Castle doc by:
    - full index   : @[]
    - by package   : @[]

•ºmaven lifecycleº: execute a set of goals IN ORDER:
    mvn Bºpackageº  ┌ resources:resources     ←··· goal: unit of work. plugins implementing different goals
          └──┬──┘   │ compiler:compile             accept parametrization customizing their run-time behavior:
             └──────┤ resources:testResources
                    │ compiler:testCompile
                    └ surefire:test jar:jar

•ºmvn Usageº:
  All mvn commands take the pom.xml project definition as input.
  The first time mvn is executed it will be slow, since a lot of libraries must be downloaded.

$º$ QS=maven-archetype-quickstart \º
$º$ mvn archetype:generate \       º ← create new project skeleton:
$º$   -DgroupId=my.groupId \       º   ./pom.xml
$º$   -DartifactId=myArtifact \    º   ./src/test/java/my.groupId/
$º$   -DarchetypeVersion=1.4 \     º   ./src/main/java/my.groupId/
$º$   -DinteractiveMode=false \    º   ./src/main/resources/ (./META-INF/MANIFEST.MF, ./images/, ... )
$º$   -DarchetypeArtifactId=$QS    º   ./src/test/resources/ ← ex: InputStream is = getClass()
                                       └──────┬───────┘                           .getResourceAsStream("/...");
                                     To add another non-standard (old code/tool-generated/...)
                                     directories add next lines to pom.xml:
                                    +  ˂sourceDirectory˃src/main/generated_java˂/sourceDirectory˃
                                    +  ˂testDirectory˃src/functionalTest/java˂/testDirectory˃

                                       EXECUTING LIFECYCLES (ordered lists of goals)
$º$ mvn clean                      º ← Remove ./target folder
$º$ mvn compile                    º ← ... → compile app and tests code
$º$ mvn test-compile               º ← ... → compile tests only.
$º$ mvn test                       º ← ... → test (the surefire plugin executing tests will search for
                                             **/(*Test|Test*|*TestCase).java inside ./src/test/java/)

$º$ mvn exec:java \                º ← Execute a Java program with the help of maven.
$º$   -Dexec.mainClass=... \       º   mvn will take care of setting the complex class path properly.
$º$   -Dexec.args="arg0 ..."       º   (discouraged in production)

$º$ mvn clean package              º ← clean→resources→compile→test→package ( generates JAR/WAR/... package)
                                       In detail:
                                       1) resources:resources       4) compiler:testCompile
                                       2) compiler:compile          5) surefire:test jar:jar
                                       3) resources:testResources

$º$ mvn clean install              º ← clean→compile→test→package→install_local
                                       (Use -Dmaven.test.skip=true to skip (slow) testing )

$º$ mvn clean deploy               º ← clean→compile→test→package→install_local→install_pub
                                                                                ("corporate") server
$º$ mvn                            º ← Defaults to next goals: (RºWARN:º no clean)
  └────────┬────────────┘              1) process-resources  4) process-test-resources 7) prepare-package
           ·                           2) compile            5) test-compile           8) package
           ·                           3) process-classes    6) test                   9) install
  ºCommon mvn options includeº:
  -U                   Force library (download) update. Fixes problems with         [troubleshooting]
                       unstable networks, unstable package releases.
  -P myProfileX        Execute profile myProfileX defined in pom.xml
  -o                   Offline mode. Search deps in the local repo.
  -Dgenerate.pom=true  Generates the pom locally for an artifact when installing
                       and compiling. Very useful to make offline mode work properly.
  help:active-profiles   : List project|user|global-scoped active profile for the build
  help:effective-pom     : Displays effective POM for current build                      [debugging]
  help:effective-settings: Prints calculated settings                                    [debugging]

$º$ mvn fr.jcgay.maven.plugins:buildplan-maven-plugin:list \ º ← show ordered-by-phase list
$º  -Dbuildplan.tasks=install     º                              of goals executed (Very useful to
                                                                 skip slow/non-important goals like
                                                                 doc, style-checks,...)

$º$ mvn resources:resources \     º ← quick local install (bypass tests/style-checks/...)
$º   compiler:compile \           º
$º   jar:jar \  install:install   º

                                                                           EXTRACT PACKAGE INFO
$º$ mvn help:evaluate -q -DforceStdout -Dexpression=project.artifactId º ← Artifact ID
$º$ mvn help:evaluate -q -DforceStdout -Dexpression=project.groupId    º ← Extract Group ID
$º$ mvn help:evaluate -q -DforceStdout -Dexpression=project.version    º ← Extract Version

•ºAdding a local jar dependencyº (vs. the maven central repository)
  RºWARNº: discouraged, but sometimes needed. Use the 'system' scope plus a systemPath:
    ˂groupId˃...˂/groupId˃ ˂artifactId˃...˂/artifactId˃ ˂version˃...˂/version˃
    ˂scope˃system˂/scope˃ ˂systemPath˃${project.basedir}/lib/...˂/systemPath˃

•ºGENERATE FAT JARº (jar with all dependencies included)

      ˂id˃fatjar˂/id˃      ←····ºSTEP 1º: create 'fatjar' profile with customized maven-assembly-plugin
     º˂activation˃º             ºSTEP 2º: Exec $º$ mvn ... '-P fatjar' º to activate this profile
     º  ˂property˃ ˂name˃fatjar˂/name˃ ˂/property˃º
          ·   ˂artifactId˃maven-assembly-plugin˂/artifactId˃
          ·   ˂version˃3.0.0˂/version˃
          ·  º˂configuration˃º
          ·      ˂descriptorRefs˃
          ·          ˂descriptorRef˃jar-with-dependencies˂/descriptorRef˃
          ·      ˂/descriptorRefs˃
          ·      ˂archive˃
          ·          ˂manifest˃
          ·              ˂mainClass˃com.myComp.myApp˂/mainClass˃
          ·          ˂/manifest˃
          ·      ˂/archive˃
          ·  º˂/configuration˃º
          ·   ˂executions˃
          ·      º˂execution˃               º
          ·      º    ˂id˃make-assembly˂/id˃º˂!-- this is used for inheritance merges --˃
          ·      º    ˂phase˃package˂/phase˃º˂!-- bind to the packaging phase --˃
          ·      º    ˂goals˃               º
          ·      º        ˂goal˃single˂/goal˃º
          ·      º    ˂/goals˃              º
          ·      º˂/execution˃              º
          ·   ˂/executions˃

•ºSNAPSHOT VERSIONINGº (Future version)
   ☞ A snapshot version is one that has not been released (Oºfuture releaseº).
   The idea is that ºbeforeº a "1.0" release is done, there exists
   a 1.0Oº-SNAPSHOTº. That version is what might become 1.0. It's
   basically Oº"1.0 under development"º. This might be close to a real
   1.0 release, or pretty far (right after the 0.9 release, for ex.)

   The difference between a "real" version and a snapshot version is
   that ºsnapshots might get updatesº.  That means that downloading
   1.0-SNAPSHOT today might give a different file than downloading it
   yesterday or tomorrow.
   In contrast, OºReleased versions are immutableº:
   updates to "1.0.0" require a new version "1.0.1".

   Snapshot dependencies should only exist during development.
  ºReleased (i.e. non-snapshot) versions should NEVER have aº
  ºdependency on snapshotsº.
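The snapshot convention itself is just a version-string suffix; a trivial sketch of the check (illustrative helper, not part of Maven's API):

```java
// "-SNAPSHOT" suffix marks a mutable, under-development version;
// anything else is treated as an immutable release.
public class Versions {
    static boolean isSnapshot(String version) {
        return version.endsWith("-SNAPSHOT");
    }

    public static void main(String[] args) {
        System.out.println(isSnapshot("1.0-SNAPSHOT")); // → true
        System.out.println(isSnapshot("1.0.0"));        // → false
    }
}
```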


pom utils (Cleaner/Updater/...)
@[]
Utilities to clean, organize, and restructure Maven POMs.                [qa]
•ºPOM Cleanerº: "Cleans up" a single POM, normalizes plugin and
  dependency specifications,
 ºCONVERTS HARDCODED VERSIONS TO PROPERTIES,º
 ºCONSISTENTLY ORDERS TOP-LEVEL ELEMENTSº
  (with pretty-printed output).
•ºVERSION UPDATER:º Updates the version for a set of POMs to a
  specified|next-sequential version.
•ºDependency Check:º
  - find dependencies that are specified but unused.
  - find dependencies used but unspecified
    (TRANSITIVE DEPENDENCIES THAT SHOULD BE DIRECT).

•ºMVND(aemon)º                                        [performance][TODO]
  @[]
  - A study driven by Gradle shows Maven as being Rºup to 100 times
    slower than Gradle buildsº.
  - JIT-compiled classes are cached.
  - multi-process if needed.
  - pretty small: ~4060 lines of Java code.
  - mvnd speed gains:
    - 1/2 modules : ~ x7/x10 faster
    - big projects: ~ x6  faster (ex: Camel Quarkus, 1242 modules)
  - Who-is-Who:
    - Guillaume Nodet (project creator)
    - Peter Palaga: main contributor
pom.xml Summary
@[]
˂project xmlns="" xmlns:xsi="" xsi:schemaLocation=""˃
  ˂modelVersion˃4.0.0˂/modelVersion˃
  ˂name˃MyPackageDescriptiveName˂/name˃
  ˂url˃˂/url˃
  ˂groupId˃com.mycomp.groupid˂/groupId˃  ┐ PROJECT COORDINATES: Reusable in pom as
  ˂artifactId˃myArtifactId˂/artifactId˃  ├ ← ${project.groupId}, ${project.artifactId}, ${project.version}
  ˂version˃1.0─SNAPSHOT˂/version˃        ┘   version follows MAJOR.MINOR.PATCH  REF: @[]
  ˂packaging˃jar˂/packaging˃  ← := jar│war│ear│pom│maven─plugin│ejb│rar│par│aar│apklib│...
  ˂properties˃                ←··· Reusable properties: @[]
    ˂junit.ver˃3.8.1˂/junit.ver˃  ← Best pattern: group all dependency versions in properties
    ˂guava.ver˃21.0˂/guava.ver˃     (in the parent pom if parent─children apply)
    ...
    ˂maven.compiler.source˃1.8˂/maven.compiler.source˃  ← Best pattern: group plugin params in properties.
    ˂˃1.8˂/˃
  ˂/properties˃
  ˂dependencies˃
    ˂dependency˃                           ┐
      ˂groupId˃˂/groupId˃  │
      ˂artifactId˃guava˂/artifactId˃       ├ ← Example dependency declaration.
      ˂version˃${guava.ver}˂/version˃      │
    ˂/dependency˃                          ┘
    ˂dependency˃
      ˂groupId˃ch.qos.logback˂/groupId˃           ← Typical dependency for logging
      ˂artifactId˃logback-classic˂/artifactId˃
      ˂exclusions˃               ←··· Excluding transitive dependencies causing conflicts
        ˂exclusion˃                   (multiple competing implementations of a same interface, ...)
          ˂groupId˃org.slf4j˂/groupId˃          Ex: fix runtime error ".. path contains multiple SLF4J bindings":
          ˂artifactId˃slf4j-jdk14˂/artifactId˃     ...XXX.jar!/org/slf4j/impl/StaticLoggerBinder.class
        ˂/exclusion˃                               ...XXX.jar!/org/slf4j/impl/StaticLoggerBinder.class
      ˂/exclusions˃
      ˂version˃1.1.7˂/version˃
    ˂/dependency˃
    ˂dependency˃
      ˂groupId˃junit˂/groupId˃
      ˂artifactId˃junit˂/artifactId˃
      ˂version˃${junit.ver}˂/version˃
      ˂scope˃test˂/scope˃  ← test dependency: not included in the final packaged app
    ˂/dependency˃
  ˂/dependencies˃
  ˂build˃                  ← Build customizations:
    ˂plugins˃
      ˂plugin˃             ←··· Ex. plugin customization
        ˂groupId˃org.apache.maven.plugins˂/groupId˃
       º˂artifactId˃maven─compiler─plugin˂/artifactId˃º ← plugin implementing the compile 'goal'
        ˂version˃3.3˂/version˃
       º˂configuration˃º
          ˂source˃9˂/source˃   ←··· plugin tunable parameters.
          ˂target˃9˂/target˃
       º˂/configuration˃º
      ˂/plugin˃
    ˂/plugins˃
  ˂/build˃
˂/project˃
Parent+Children Multiproject
- Allows children projects to inherit project dependencies from the parent.

.../parent/pom.xml                      │ .../parent/child1/pom.xml
───────────────────────────────         │ ──────────────────────────────────
˂modelVersion˃4.0.0˂/modelVersion˃      │ ˂parent˃
˂groupId˃....˂/groupId˃                 │   ˂groupId˃...˂/groupId˃
˂artifactId˃parent˂/artifactId˃         │   ˂artifactId˃parent˂/artifactId˃
˂version˃0.1.0˂/version˃                │   ˂version˃0.1.0˂/version˃
˂packaging˃pom˂/packaging˃              │   ˂relativePath˃../pom.xml˂/relativePath˃
                                        │ ˂/parent˃
˂modules˃                               │ ˂dependencies˃
  ˂module˃./child1˂/module˃             │   ˂dependency˃
  ˂module˃./child2˂/module˃             │     ˂groupId˃...˂/groupId˃
˂/modules˃                              │     ˂artifactId˃...˂/artifactId˃
                                        │   ˂/dependency˃...
˂dependencyManagement˃        ← *1      │ ˂/dependencies˃
  ˂dependencies˃
    ˂dependency˃
      ˂groupId˃...˂/groupId˃
      ˂artifactId˃...˂/artifactId˃
      ˂version˃${dep1_ver}˂/version˃ ← NOTE: no need to repeat the version in children
      ˂scope˃compile˂/scope˃         ← use compile by default. Let children override it.
    ˂/dependency˃...
  ˂/dependencies˃
˂/dependencyManagement˃

• best practice: avoid using parent-inherited properties in children.
  Modifying the parent can break the children.
*1: best practice: use this section to define all dependency versions,
    but do not set a scope here, so that all dependencies have scope
    compile by default (or set it to compile).
  - Use the ˂pluginManagement˃ section of the parent pom to define
    versions for ºallº plugins that your build uses, even standard
    maven plugins like maven-compiler-plugin and maven-source-plugin.
    This way your build will not suddenly behave differently when a
    new version of a plugin is released.
  - When using a parent POM not located in the directory directly
    above the current POM, define an empty relativePath element in
    your parent section.
Install non-mavenized jar
$º$ mvn install:install-file -Dfile=path_to_local_file -DgroupId=˂groupId˃ \   º
$º    -DartifactId=˂artifactId˃ -Dversion=˂version˃ -Dpackaging=˂packaging˃    º

POM BEST PRACTICES:
REF: @[]
$º$ mvn versions:use-latest-versions \       º ← update pom dependencies to their latest version
$º    -Dincludes="org.checkerframework:*"    º   (restricted to a given groupId)
$º$ mvn versions:use-latest-versions \       º ← update all pom dependencies to their latest version
$º    -Dincludes="*"                         º
• Prefer ${project.artifactId} over ${artifactId} or ${pom.artifactId},
  following the XML document structure.
- Use the dependency plugin to check your project for both unnecessary
  dependencies and undeclared-but-used-none-the-less dependencies.
  The goal is called 'analyze':
 $º$ mvn dependency:analyze º
- Make sure the pom files contain all the repository references needed
  to download all dependencies. If you want to use a local repository
  instead of downloading straight from the Internet, then use the maven
  settings file to define mirrors for the individual repositories that
  are defined in the poms.
- If you use Nexus, do not create repository groups containing both
  hosted and proxied repositories. This will dramatically reduce
  responsiveness, because Nexus will check the remote locations of the
  proxied repositories even if a hosted repository contains the
  requested artifact.
- TODO: @[]
Package Dependency Management
$º$ mvn dependency:analyze     º ← Informs about:
                                   - Dependencies used but not declared.
                                     If found in the parent pom, there is no problem when compiling,
                                     but must be included at runtime on the server.

                                   - Dependencies declared but not used for the scope provided
                                     (compile, provided, …). They can be in the parent pom too.
                                     Nonetheless, they can be needed at runtime.

$º$ mvn dependency:tree -Dscope=compile º
                         skip/ignore test/provided/... dependencies
  → ...
  → [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ tamperproof ---
  → [INFO]ºcom.myCompany:myProject:jar:1.0-SNAPSHOTº
  → [INFO] +- org.web3j:core:jar:4.3.0:compile
  → [INFO] |  +- org.web3j:abi:jar:4.3.0:compile
  → [INFO] |  |  \- org.web3j:utils:jar:4.3.0:compile
  → [INFO] |  |     \- org.bouncycastle:bcprov-jdk15on:jar:1.60:compile
  → [INFO] |  +- org.web3j:crypto:jar:4.3.0:compile
  → [INFO] |  |  \- org.web3j:rlp:jar:4.3.0:compile
  → [INFO] |  +- org.web3j:tuples:jar:4.3.0:compile
  → [INFO] |  +- com.github.jnr:jnr-unixsocket:jar:0.21:compile
  → [INFO] |  |  +- com.github.jnr:jnr-ffi:jar:2.1.9:compile
  → [INFO] |  |  |  +- com.github.jnr:jffi:jar:1.2.17:compile
  → [INFO] |  |  |  +- org.ow2.asm:asm:jar:5.0.3:compile
  → [INFO] |  |  |  +- org.ow2.asm:asm-commons:jar:5.0.3:compile
  → [INFO] |  |  |  +- org.ow2.asm:asm-analysis:jar:5.0.3:compile
  → [INFO] |  |  |  +- org.ow2.asm:asm-tree:jar:5.0.3:compile
  → [INFO] |  |  |  +- org.ow2.asm:asm-util:jar:5.0.3:compile
  → [INFO] |  |  |  +- com.github.jnr:jnr-a64asm:jar:1.0.0:compile
  → [INFO] |  |  |  \- com.github.jnr:jnr-x86asm:jar:1.0.2:compile
  → [INFO] |  |  +- com.github.jnr:jnr-constants:jar:0.9.11:compile
  → [INFO] |  |  +- com.github.jnr:jnr-enxio:jar:0.19:compile
  → [INFO] |  |  \- com.github.jnr:jnr-posix:jar:3.0.47:compile
  → ...

JDepend
@[] @[]
- JDepend traverses Java class and source file directories and generates
 ºdesign-quality metrics for each Java packageº
 ºin terms of its extensibility, reusability, and maintainability,º
 ºto effectively manage and control package dependencies.º
Publishing to Maven Central
• External references:
  · @[]
  · @[]
  · @[/General/cryptography_map.html?id=pgp_summary]

• Requirements @[]
  Prepare pom.xml properly:
  ˂?xml version="1.0" encoding="UTF-8"?˃
  ˂project xmlns=""
    xsi:schemaLocation="... http://.../maven-v4_0_0.xsd"˃

    ˂groupId˃˂/groupId˃     ← Prepare valid coordinates
    ˂version˃1.0˂/version˃                 ← ºsnapshots NOT allowedº (Recheck)


        ˂name˃Apache Software License, Version 2.0˂/name˃

        ˂name˃First_Name Second_Name˂/name˃
        ˂organization˃Mock Corp˂/organization˃

  RºWARNº: """ we discourage the usage of ˂repositories˃ and                      ← [QA]
        ˂pluginRepositories˃ and instead publish any required
        components to the Central Repository """

•ºrequired files:º
$º$  cat artifact01-1.4.7.pom            | gpg2 -ab -o artifact01-1.4.7.pom.asc        º*2
$º$  cat artifact01-1.4.7.jar            | gpg2 -ab -o artifact01-1.4.7.jar.asc        º*2
$º$  cat artifact01-1.4.7-sources.jar *1 | gpg2 -ab -o artifact01-1.4.7-sources.jar.ascº*2
$º$  cat artifact01-1.4.7-javadoc.jar *1 | gpg2 -ab -o artifact01-1.4.7-javadoc.jar.ascº*2
         └──┬─────┘ └─┬─┘              ^               └──────────────┬───────────────┘  ^
         artifactId version            │                      GPG signatures *.asc       │
                                       │                                                 │
                                      *1: required except for pom (vs jar) packages      │
                                      *2: Verify sign. like $º$ gpg2 --verify ...asc ────┘

• Build tool integration @[]  [TODO]

  • Use approved repository hosting location:
  @[]           (for all Apache projects)
  @[] (focused on FUSE related projects)

  • Use automatic publication in "forges" providing hosting services.

  •ºOSS Repository Hostingº
    · Approved repository provided by Sonatype for OSS Projects that want to
      get their artifacts into the Central Repository.
    · Open an account as explained at

  e-mail received after Namespace correct registration
  │ Thad Watson resolved OSSRH-39644: Resolution: Fixed              │
  │                                                                  │
  │ Configuration has been prepared, now you can:                    │
  │ → Deploy snapshot artifacts into repository                      │
  │   @[]     │
  │ → Deploy release artifacts into the staging repository           │
  │   @[]│
  │ → Promote staged artifacts into repository 'Releases'            │
  │ → Download snapshot and release artifacts from group             │
  │   @[]              │
  │ → Download snapshot, release and staged artifacts from           │
  │   staging group                                                  │
  │   @[]             │
  │ ºplease comment on this ticket when you promotedº                │
  │ ºyour first release, thanks                     º                │

•ºpre-deployment Tests:º
  ✓ Verify that all pom.xml files have an SCM definition.
  ✓ Diff the original 'pom.xml' with 'pom.xml.tag' to check if license
    or any other info has been removed. This has been known
    to happen if the starting ˂project˃ tag is ºNOTº on a single line.
    The only things that should be different are the
    ˂version˃ and ˂scm˃ elements. Any other change must be manually restored
    from the original pom.xml.

$º$ mvn deploy          º← Deploy snapshot (to be staged)

$º$ mvn release:clean    º← Prepare for release
$º$ mvn release:prepare \º← Dry run. Check the output is OK.
$º    -DdryRun=true      º  (-DautoVersionSubmodules=true can save time
                            in multi-module projects)
$º$ mvn release:prepare  º← Exec. release. A new tag will automatically
                            be created and checked in to git (or svn, ...)
$º$ mvn release:perform  º← Stage the release for a vote. The release will
                            automatically be added to a temp staging dir.
Dockerfile "pipeline": mvn to Container
FROM maven:3.6-jdk-12-alpine as build
WORKDIR /builder
ADD pom.xml /builder/pom.xml
ADD src /builder/src

RUN mvn install -DskipTests=true

FROM openjdk:11-jre
ARG APP_NAME=middleware-0.0.1-SNAPSHOT.jar
COPY --from=build /builder/target/${APP_NAME} /app/app.jar
COPY --from=build /builder/src/main/resources /app/src/main/resources
ENTRYPOINT ["java", "-jar", "/app/app.jar"]

Jib: Image Builder
Docker+mvn/gradle integration: @[] @[]
- Build Java Containers without Docker/Dockerfile.
- Jib's build approach separates the Java application into multiple layers,
  so when there are any code changes, only those changes are rebuilt,
  rather than the entire application.
- These layers are layered on top of a distroless base image, containing
  only the developer's application and its runtime deps.

-ºDocker build flow:º
  │JAR│ ← (build) ← │Project│
    ·
    ├·····→ │Build Context│ →(build)→ │Container Image │ →(push)→ │Container │
    ·                                 │(docker cache)  │          │Image     │
  │Dockerfile│                                                    │(registry)│

-ºJib Build Flow:º
                                                                  │Container │
  │Project│ ──────────────────(Jib)─────────────────────────────→ │Image     │
                                                                  │(registry)│

- Ex: Creating images from the command line:
  Once jib is installed and added to PATH, to create a new image do
  something like:
 $º$ /opt/jib/bin/jib \                    º
 $º    --insecure \                        º ← allow conn. to HTTP (non-TLS) dev. registries
 $º    build \                             º ← build image
 $º    --registry \                        º ← Push to registry
 $º    ... \                               º ← Base image (busybox, nginx, ...)
 $º    ... \                               º ← Destination registry / image
 $º    --entrypoint "java,-cp,/app/lib/*,\ º
 $º    ..." \                              º
 $º    build/install/jib/lib,/app/lib      º

  Other options include: (See jib build --help for more options)
  p=perms                     set file and directory permissions:
                              actual : use actual values in file-system
                              fff:ddd octal file and directory
                              (Defaults to 644 for files and 755 for dirs.)
  ts=timestamp                set last-modified timestamps:
                              actual : use actual values in file-system
                              "seconds since Unix epoch"
                              "date-time in ISO8601 format"
                              (Defaults to 1970-01-01 00:00:01 UTC.)
  -a, --arguments=arg         container entrypoint's default arguments
  -c, --creation-time=time    Set image creation time º(default: 1970-01-01T00:00:00Z)º
  -l, --label=key=val[,key=val...]
  -p, --port=port[,port...]   Expose port/type (ex: 25 or 25/tcp)
  -u, --user=user             Set user for execution (uid or existing user id)
  -V, --volume=path1,path2... Configure specified paths as volumes

- Ex: pom.xml to create a tomcat container with a war: REF:
 $º$ mvn clean package jib:dockerBuild       º
 $º$ docker run --rm -p 8082:8080 \          º
 $º    registry.localhost/hello-world:latest º

  ˂?xml version="1.0" encoding="UTF-8"?˃
  ˂project xmlns="" xmlns:xsi="" xsi:schemaLocation=""˃
    ˂modelVersion˃4.0.0˂/modelVersion˃
    ˂groupId˃org.example˂/groupId˃
    ˂artifactId˃mvn-jib-example˂/artifactId˃
    ˂version˃1.0˂/version˃
    ˂packaging˃war˂/packaging˃
    ˂properties˃
      ˂˃UTF-8˂/˃
      ˂failOnMissingWebXml˃false˂/failOnMissingWebXml˃
    ˂/properties˃
    ˂dependencies˃
      ˂dependency˃
        ˂groupId˃javax.servlet˂/groupId˃
        ˂artifactId˃javax.servlet-api˂/artifactId˃
        ˂version˃4.0.1˂/version˃
        ˂scope˃provided˂/scope˃
      ˂/dependency˃
    ˂/dependencies˃
    ˂build˃
      ˂finalName˃servlet-hello-world˂/finalName˃
      ˂plugins˃
        ˂plugin˃
          ˂groupId˃org.apache.maven.plugins˂/groupId˃
          ˂artifactId˃maven-compiler-plugin˂/artifactId˃
          ˂version˃3.8.1˂/version˃
          ˂configuration˃
            ˂source˃1.8˂/source˃
            ˂target˃1.8˂/target˃
          ˂/configuration˃
        ˂/plugin˃
        ˂plugin˃
          ˂groupId˃˂/groupId˃
          ˂artifactId˃jib-maven-plugin˂/artifactId˃
          ˂version˃2.5.0˂/version˃
          ˂configuration˃
            ˂allowInsecureRegistries˃true˂/allowInsecureRegistries˃
            ˂from˃
              ˂image˃tomcat:9.0.36-jdk8-openjdk˂/image˃
            ˂/from˃
            ˂to˃
              ˂image˃registry.localhost/hello-world˂/image˃
              ˂auth˃
                ˂username˃...˂/username˃
                ˂password˃...˂/password˃
              ˂/auth˃
              ˂tags˃
                ˂tag˃latest˂/tag˃
              ˂/tags˃
            ˂/to˃
            ˂container˃
              ˂appRoot˃/usr/local/tomcat/webapps/ROOT˂/appRoot˃
            ˂/container˃
            ˂extraDirectories˃
              ˂paths˃
                ˂path˃
                  ˂from˃./src/main/resources/extra-stuff˂/from˃
                  ˂into˃/path/in/docker/image/extra-stuff˂/into˃
                ˂/path˃
                ˂path˃
                  ˂from˃/absolute/path/to/other/stuff˂/from˃
                  ˂into˃/path/in/docker/image/other-stuff˂/into˃
                ˂/path˃
              ˂/paths˃
            ˂/extraDirectories˃
          ˂/configuration˃
        ˂/plugin˃
      ˂/plugins˃
    ˂/build˃
  ˂/project˃

  See also: jKube [[jkube?]]
Flyway: SQL schema versioning
• tool providing version control for database (SQL) schemas
  and automated schema (tables, columns, sequences), data, views, procedures
  and packages evolution.
• Single source of truth for DDBB versioning.
• highly reliable
• Supports many different SQL databases, including                    [cloud]
  cloud ones (Amazon RDS, Azure Database, Google Cloud SQL).

Bº# HOW-TO #º
• PRESETUP: The target database to manage, and a user with update privileges,
            must be created first, "outside" of flyway.

• Database changes are called ºMigrationsº. They can be:
  · Versioned  migrations: identified by a version number, applied in order exactly once.
                           An optional revert migration can be provided to roll back
                           changes in case of error.
  · Repeatable migrations: (re-)applied every time their checksum changes
                           (typically views, procedures, ...).

• Flyway uses anºinternal flyway_schema_history table to keep track of applied migrationsº.

• SQL and Java migrations in src/main/resources/db/migration/ are automatically       [spring]
  applied in Spring Boot when the 'org.flywaydb:flyway-core' compile dependency
  is present.
  └ºV1__Initial_schema.sqlº  ← Flyway expected file name (V˂version˃__˂description˃.sql)
    CREATE TABLE table01 (
     column1 BIGINT NOT NULL,
     column2 FLOAT8 NOT NULL,
     column3 INTEGER NOT NULL,
     column4 VARCHAR(255) UNIQUE NOT NULL
    );

    ALTER TABLE table01                  ← ex. follow-up change (would go in a
      ADD COLUMN column5 VARCHAR(255);     new migration file, e.g. V2__...sql)
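The bookkeeping Flyway does with flyway_schema_history can be sketched as follows. This is an illustrative simplification (hypothetical class and method names); the real engine also tracks checksums, repeatable migrations, failed runs, etc.:

```java
import java.util.*;

// Sketch: compare the available versioned migrations against the versions
// already recorded in flyway_schema_history; apply the missing ones in order.
public class MigrationPlan {
    public static List<String> pending(SortedMap<Integer, String> available,
                                       Set<Integer> applied) {
        List<String> plan = new ArrayList<>();
        for (Map.Entry<Integer, String> e : available.entrySet()) {
            if (!applied.contains(e.getKey())) {
                plan.add(e.getValue());  // apply in ascending version order
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        SortedMap<Integer, String> available = new TreeMap<>();
        available.put(1, "V1__Initial_schema.sql");
        available.put(2, "V2__Add_column5.sql");
        // version 1 is already recorded in flyway_schema_history:
        System.out.println(pending(available, Set.of(1))); // → [V2__Add_column5.sql]
    }
}
```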

Obevo: DDBB change manager
- Obevo: ddbb deployment tool handling enterprise scale schemas and complexity.
- By Goldman Sachs.
- Obevo is a database deployment tool that helps teams manage database
  changes in their Software Development Life Cycle (SDLC) process. In
  addition to handling production deployments, Obevo aids the
  development phase by defining a clean structure to maintain DB object
  code, and helps the testing phase with features such as in-memory
  database conversion. Notably, Obevo was designed for systems of
  enterprise scale and complexity and can manage hundreds of DB objects
  in a schema, while still handling new schemas in a simple manner.
  “We feel our ability to onboard a large and long-lived system to a
  clean SDLC process is a key differentiator in the open source
  space,” said Shant, a vice president in the Technology Division.
  “By publishing this to the open source community, we hope to aid
  others in their own DB deployment estates while growing a strong
  community around the tool.”

  """ Deploying tables for a new application?
    Or looking to improve the DB Deployment of a years-old system with
    hundreds (or thousands) of tables, views, stored procedures, and
    other objects?

    Obevo has your use case covered.

    Supported platforms: DB2, H2, HSQLDB, Microsoft SQL Server, MongoDB,
    Oracle, PostgreSQL, Redshift (from Amazon), Sybase ASE, Sybase IQ
• Alternatives to Flyway include Liquibase, ... [TODO]
Jooq: SQL made simple

  Use the jOOQ code generator to create vertx-ified DAOs and POJOs.
  Now with JDBC, async and reactive support!

Hibernate/JPA Summary

- Hibernate Gotchas:
  hibernate, joins, and max results: a match made in hell

- Common Hibernate Exceptions Every Developer Must Know
speedment: SQL as Streams
- Stream ORM toolkit and runtime.
- The toolkit analyzes the metadata of an existing SQL database and
  automatically creates a Java representation of the data model.
- The powerful ORM enables you to create scalable and efficient Java
  applications using standard Java streams with no need to type SQL or
  use any new API.

BºSQL                                    JAVA 8 Stream Equivalentº
  FROM                                   stream()
  COUNT                                  count()
  LIMIT                                  limit()
  DISTINCT                               distinct()
  SELECT                                 map()
  WHERE                                  filter() (before collecting)
  HAVING                                 filter() (after  collecting)
  JOIN                                   flatMap()
  UNION                                  concat(s0, s1).distinct()
  ORDER BY                               sorted()
  OFFSET                                 skip()
  GROUP BY                               collect(groupingBy())
  SELECT                             ←   final Optional˂Film˃ longFilm =
   `film_id`,`title`,`description`,        films.stream()
   `release_year`, `language_id`,          .filter(
   `original_language_id`,                    Film.LENGTH.greaterThan(120)
   `rental_duration`,`rental_rate`,        )
   `length`,`replacement_cost`,            .findAny();
   `rating`,`special_features`,
  FROM `film`                          BºSearches optimized in background!º
  WHERE (`length` ˃ 120)
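The SQL↔Stream mapping above can be exercised with plain JDK streams. Speedment additionally pushes these pipelines down into SQL; this in-memory sketch (made-up data, illustrative class name) only shows the operator correspondence:

```java
import java.util.*;
import java.util.stream.*;

public class SqlAsStreams {
    // SELECT DISTINCT title FROM films WHERE LENGTH(title) > 4 ORDER BY title LIMIT 2
    static List<String> query(List<String> films) {
        return films.stream()               // FROM
            .filter(t -> t.length() > 4)    // WHERE
            .distinct()                     // DISTINCT
            .sorted()                       // ORDER BY
            .limit(2)                       // LIMIT
            .collect(Collectors.toList());  // materialize the result
    }

    public static void main(String[] args) {
        System.out.println(query(List.of("Alien", "Blade", "Casablanca", "Alien")));
        // → [Alien, Blade]
    }
}
```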
Reladomo (by Goldman Sachs)
- Enterprise-grade object-relational mapping (ORM) framework for Java with
  the following enterprise features:

- Strongly typed Bºcompile-timeº checked query language
- Bi-temporal chaining
- Transparent multi-schema support
- Full support for unit-testable code
• c3p0: easy-to-use library for making traditional JDBC drivers
        "enterprise-ready", augmenting them with jdbc3 functionality,
        optional jdbc2 extensions, and jdbc4 (v 0.9.5+).
• It provides for:
  - A class which adapt traditional DriverManager-based JDBC drivers to the
    newer javax.sql.DataSource scheme for acquiring database Connections.
  - Transparent pooling of Connection and PreparedStatements behind DataSources
    which can "wrap" around traditional drivers or arbitrary unpooled DataSources.
• The library tries hard to get the details right:
  - c3p0 DataSources are both Referenceable and Serializable, and are thus
    suitable for binding to a wide-variety of JNDI-based naming services.
  - Statement and ResultSets are carefully cleaned up when pooled Connections
    and Statements are checked in, to prevent resource- exhaustion when clients use
    the lazy but common resource-management strategy of only cleaning up their
    Connections. (Don't be naughty.)
  - The library adopts the approach defined by the JDBC 2 and 3 specification
    (even where these conflict with the library author's preferences). DataSources
    are written in the JavaBean style, offering all the required and most of the
    optional properties (as well as some non-standard ones), and no-arg
    constructors. All JDBC-defined internal interfaces are implemented
    (ConnectionPoolDataSource, PooledConnection, ConnectionEvent-generating
    Connections, etc.) You can mix c3p0 classes with compliant third-party
    implementations (although not all c3p0 features will work with external
    implementations of ConnectionPoolDataSource).
Snappy Fast de/compressor
- Java port of the snappy @[]
- Map-Like API optimized for caching.
- 1.0 drawbacks:
  - No async operations.
- Implemented by Hazelcast and others
- TODO: Patterns of JSON Matching:
  Streaming based, binding based, expression based.
- REF:JSON processing public review

- JSR 374, API for JSON Processing (JSON-P) version 1.1.
  - Java 8 streams and lambdas alternative to Gson and Jackson.
  - expected to be included in Java EE 8.
  - compatible with JSON IETF standards.
  - It includes support for:
    - JSON Pointer
    - JSON Patch
    - JSON Merge Patch
    - Query and transformation operations
  - Designed to parse/generate/query standard JSON documents.
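The JSON Merge Patch operation listed above (RFC 7396) has simple semantics: a null in the patch deletes a key, nested objects merge recursively, everything else overwrites. A framework-free sketch on plain Maps (illustrative class name, not the javax.json API):

```java
import java.util.*;

// Sketch of JSON Merge Patch (RFC 7396) semantics over nested Maps.
public class MergePatch {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> apply(Map<String, Object> target,
                                            Map<String, Object> patch) {
        Map<String, Object> out = new LinkedHashMap<>(target);
        for (Map.Entry<String, Object> e : patch.entrySet()) {
            if (e.getValue() == null) {
                out.remove(e.getKey());                 // null → delete the key
            } else if (e.getValue() instanceof Map
                    && out.get(e.getKey()) instanceof Map) {
                out.put(e.getKey(),                     // nested objects merge recursively
                        apply((Map<String, Object>) out.get(e.getKey()),
                              (Map<String, Object>) e.getValue()));
            } else {
                out.put(e.getKey(), e.getValue());      // everything else overwrites
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("title", "old");
        doc.put("author", "bob");
        Map<String, Object> patch = new LinkedHashMap<>();
        patch.put("title", "new");
        patch.put("author", null);          // delete "author"
        System.out.println(apply(doc, patch)); // → {title=new}
    }
}
```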

JSON-B / JSR-367 @[]
- B stands for Object binding.
- Standard binding layer for converting Java objects to/from JSON messages,
 ºdefining a default mapping algorithmº for converting existing Java classes
  to JSON, while enabling developers to customize it through annotations.

- Real World REST API Example:

  package com.mycomp.project1;

  import ...;                 ← (import list elided in source)

  import org.json.JSONObject; ←··············· org.json reference implementation
                                  demonstrating:
                                  - how to parse JSON docs to Java objects
                                  - how to generate JSON documents from
                                    the Java objects.
                                  - Project goals include:
                                    - Adherence to the JSON spec.
                                  Bº- No external dependenciesº
                                  Bº- Fast execution and low memory footprintº
                                  - It can also convert to/from:
                                    JSON, XML, HTTP headers, Cookies,
                                    Comma Delimited Text (org.json.CDT or CSV).
  import java.util.Date;
  import java.util.Scanner;

  public class TestAPI˂JSONArray˃ {
      static String userpass = "operator1:ecllqy";
      private static SSLSocketFactory sslSocketFactory = null;

      private JSONObject sendPost(String url, String post_body, String token)
              throws Exception {
          URL obj = new URL(url);
          String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter
                                        .printBase64Binary(userpass.getBytes());
          HttpsURLConnection con = (HttpsURLConnection) obj.openConnection();
          setAcceptAllVerifier(con); // TODO: WARN Add certificate validation.
          con.setRequestMethod("POST");
          // add request headers:
          con.setRequestProperty("Content-Type" , "application/json");
          con.setRequestProperty("Cache-Control", "no-cache");
          if (token.isEmpty()) {
              con.setRequestProperty("Authorization", basicAuth);
          } else {
              con.setRequestProperty("Authorization", "Bearer " + token);
          }
          con.setDoOutput(true);
          DataOutputStream wr = new DataOutputStream(con.getOutputStream());
          wr.writeBytes(post_body);
          wr.flush();
          wr.close();
          int responseCode = con.getResponseCode();
          BufferedReader in = new BufferedReader(
              new InputStreamReader(con.getInputStream()));
          StringBuffer response = new StringBuffer();
          String inputLine;
          while ((inputLine = in.readLine()) != null) {
              response.append(inputLine);
          }
          in.close();
          return new JSONObject(response.toString());
      }

      /**
       * Overrides the SSL TrustManager and HostnameVerifier to allow
       * all certs and hostnames.
       * WARNING: This should only be used for testing, or in a "safe"
       * (i.e. firewalled) environment.
       *
       * @throws NoSuchAlgorithmException
       * @throws KeyManagementException
       */
      protected static void setAcceptAllVerifier(HttpsURLConnection connection)
              throws NoSuchAlgorithmException, KeyManagementException {
          // Create the socket factory.
          // Reusing the same socket factory allows sockets to be
          // reused, supporting persistent connections.
          if (null == sslSocketFactory) {
              SSLContext sc = SSLContext.getInstance("SSL");
              sc.init(null, ALL_TRUSTING_TRUST_MANAGER,
                      new java.security.SecureRandom());
              sslSocketFactory = sc.getSocketFactory();
          }
          connection.setSSLSocketFactory(sslSocketFactory);
          // Since we may be using a cert with a different name, we need
          // to ignore the hostname as well.
          connection.setHostnameVerifier(ALL_TRUSTING_HOSTNAME_VERIFIER);
      }

      private static final TrustManager[] ALL_TRUSTING_TRUST_MANAGER =
          new TrustManager[] {
              new X509TrustManager() {
                  public X509Certificate[] getAcceptedIssuers() { return null; }
                  public void checkClientTrusted(X509Certificate[] certs, String authType) {}
                  public void checkServerTrusted(X509Certificate[] certs, String authType) {}
              }
          };

      private static final HostnameVerifier ALL_TRUSTING_HOSTNAME_VERIFIER =
          new HostnameVerifier() {
              public boolean verify(String hostname, SSLSession session) { return true; }
          };
  }
RESTAssured: REST API testing
ºFULL JOURNEY == Simulate full (REST) API in expected orderº
└ Pre-Setup:

└ Usage Example:
  package com.mycompany.myproject.mymodule;

  import static junit.framework.TestCase.assertTrue;
  import static org.hamcrest.Matchers.*;

  import static io.restassured.RestAssured.given;

  import io.restassured.RestAssured;
  import io.restassured.config.HttpClientConfig;
  import io.restassured.path.json.JsonPath;
  import io.restassured.response.Response;
  import io.restassured.specification.RequestSpecification;
  import junit.framework.TestCase;
  import org.apache.http.client.HttpClient;
  import org.apache.http.impl.client.SystemDefaultHttpClient;
  import org.apache.http.params.HttpConnectionParams;
  import org.apache.http.params.HttpParams;
  import org.eclipse.jetty.http.HttpStatus;
  import org.junit.Ignore;
  import org.junit.Test;
  import org.junit.BeforeClass;

  import org.hamcrest.BaseMatcher;
  import org.hamcrest.Description;

  import java.util.Base64;
  import java.util.Map;

  public class FullJourneyTest {

      // Custom regex matcher for RestAssured Framework
      public static classBºRegexMatcherºextendsºBaseMatcher˂Object˃º{
          private final String regex;
          public BºRegexMatcherº(String regex){ this.regex = regex; }
         º@Overrideºpublic booleanºmatchesº(Object o){ return ((String)o).matches(regex); }
         º@Overrideºpublic voidºdescribeToº(Description description){
              description.appendText("matches regex=" + regex);
          public staticBºRegexMatcherº matches(String regex){ return newBºRegexMatcherº(regex); }

       public static classGºBase64MatcherºextendsºBaseMatcher˂Object˃º{
          public Base64Matcher(){}
         º@Overrideºpublic booleanºmatchesº(Object o){
              try {
                  Base64.getDecoder().decode((String) o); // throws on invalid Base64
                  return true;
              }catch (Exception e){
                  return false;
         º@Overrideºpublic voidºdescribeToº(Description description){
              description.appendText("can be parsed as Base64");

          public static Base64Matcher isBase64Encoded(){
              return new Base64Matcher();
      private static final String AUTH_HEADER_VALUE = "Bearer " + ServerConfig.apiKey;

      protected static RequestSpecification setupCommonHeaders() {
          return given().header("Authorization", AUTH_HEADER_VALUE)
                        .header("Accept"       , "application/json")
                        .header("content-type" , "application/json;charset=utf-8")

      final String
          NAME="COMMUNITY_1", SYMBOL="SY1";

      Response response;
      @BeforeClass
      public static void setup() {
          RestAssured.port     = ServerConfig.serverPort;
          RestAssured.basePath = "/";
          RestAssured.baseURI  = "http://localhost";

          HttpClientConfig clientConfig = RestAssured.config().getHttpClientConfig();
          clientConfig = clientConfig.httpClientFactory(new HttpClientConfig.HttpClientFactory() {
              public HttpClient createHttpClient() {
                  HttpClient rv =  new SystemDefaultHttpClient();
                  HttpParams httpParams = rv.getParams();
                  //  Wait 5s max for a connection
                  HttpConnectionParams.setConnectionTimeout(httpParams, 5 * 1000);
                  // Default session is 60s
                  HttpConnectionParams.setSoTimeout(httpParams, 60 * 1000);
                  return rv;
          // This is necessary to ensure, that the client is reused.
          clientConfig = clientConfig.reuseHttpClientInstance();
          RestAssured.config = RestAssured.config().httpClient(clientConfig);

      @Test
      public void A010_PutNewCommunityAndNewUserForPendingToMineCommunity() {
          String jsonBody =
              "{ " +
                  " \"name\": \""+NAME+"\", " +
                  " \"symbol\": \"" + SYMBOL + "\","
          response = setupCommonHeaders().body(jsonBody).when().ºpost("/Route/To/REST/API/01")º;
              /* ºmake sure the JSON serializer does not include extra (maybe sensitive) infoº */
              .body("size()", Oºis(5)                          º)
              .body("id"    , Oºnot(isEmptyString())           º)
              .body("pubkey", Oºnot(isEmptyString())           º)
              .body("pubkey", BºRegexMatcherºOº.matches("^.{65}$")º)
              .body("pubkey", OºBase64Matcher.isBase64Encoded()º)
              .body("name"  , OºequalTo(NAME)                  º)
              .body("symbol", OºequalTo(SYMBOL)                º)
          String BºNEW_ID = response.getBody().jsonPath().get("id")º;

          // Next related test to execute synchronously after fetching NEW_ID
          jsonBody =        // reuse variable (a redeclaration would not compile)
              "{ " +
                  B*" \"FK_ID\": \""+NEW_ID+"\", " +*

          response = setupCommonHeaders().body(jsonBody).when().ºpost("/Route/To/REST/API/02")º;
BDD Serenity Testing

- Serenity BDD is an open source library that aims to make the idea of living
  documentation a reality.

- write cleaner and more maintainable automated acceptance and
  regression tests faster. Serenity also uses the test results to
  produce illustrated, narrative reports that document and describe
  what your application does and how it works. Serenity tells you not
  only what tests have been executed, but more importantly, what
  requirements have been tested.

- One key advantage of using Serenity BDD is that you do not have to invest time
  in building and maintaining your own automation framework.

- Serenity BDD provides strong support for different types of automated acceptance testing, including:
  - Rich built-in support for web testing with Selenium.
  - REST API testing with RestAssured.
  - Highly readable, maintainable and scalable automated testing with the
    Screenplay pattern.

- The aim of Serenity is to make it easy to quickly write well-structured,
  maintainable automated acceptance criteria, using your favourite BDD or
  conventional testing library. You can work with Behaviour-Driven-Development
  tools like Cucumber or JBehave, or simply use JUnit. You can integrate with
  requirements stored in an external source (such as JIRA or any other test cases
  management tool), or just use a simple directory-based approach to organise
  your requirements.

JBehave
- framework for Behaviour-Driven Development (BDD).
- BDD is an evolution of test-driven development (TDD) and acceptance-test
  driven design, and is intended to make these practices more accessible
  and intuitive to newcomers and experts alike. It shifts the vocabulary
  from being test-based to behaviour-based, and positions itself as a
  design philosophy.

STEP 1) Write story:
  Scenario: A trader is alerted of status
  Given a stock and a threshold of 15.0
  When stock is traded at 5.0
  Then the alert status should be OFF
  When stock is traded at 16.0
  Then the alert status should be ON
STEP 2) Map to java
STEP 3) Configure Stories
STEP 4) Run Stories
Non Classified/TODO
• TODO: Create package summary
  example stack trace extracted from Vert.X :
  at jdk.internal.misc.Unsafe.park(Native Method)
  at java.util.concurrent.locks.LockSupport.park
  at java.util.concurrent.LinkedTransferQueue.awaitMatch
  at java.util.concurrent.LinkedTransferQueue.xfer
  at java.util.concurrent.LinkedTransferQueue.take
  at java.util.concurrent.ThreadPoolExecutor.getTask
  at java.util.concurrent.ThreadPoolExecutor.runWorker
  at java.util.concurrent.ThreadPoolExecutor$
JVM Journey to Cloud-native 
(by BellSoft)
• JDK 9 : Compact Strings
          HTTP/2 client
• JDK 10: Docker awareness
• JDK 11: ZGC
• JDK 12: Return unused memory
          Shenandoah GC
• JDK 13: Uncommit unused memory for ZGC
• JDK 14: JFR Event Streaming
• JDK 15: Reimplement Datagram Socket API
          Hidden Classes

• JDK 16: Elastic metaspace
         ºAlpine Linux portº

Monads in Java
• two of the most commonly known Java 8 features are monad
  implementations, namely Stream and Optional
• Monad is a concept:  we can view it as a wrapper which puts our
  value in some context and allows us to perform operations on the
  value. In this context, the output of an operation at any step is the
  input to the operation at the next step.
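A minimal sketch of this chaining with Optional (the method names below are illustrative, not from any specific article): the monadic context ("maybe absent") is carried through the chain, and each step's output feeds the next via flatMap.

```java
import java.util.Optional;

public class MonadDemo {
    // Step 1: put the value into the Optional context (may be empty).
    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }

    // Step 2: another Optional-returning operation to chain.
    static Optional<Integer> half(int n) {
        return (n % 2 == 0) ? Optional.of(n / 2) : Optional.empty();
    }

    public static void main(String[] args) {
        // Output of each step is the input of the next; emptiness propagates.
        Optional<Integer> ok  = parse("42").flatMap(MonadDemo::half); // Optional[21]
        Optional<Integer> bad = parse("x").flatMap(MonadDemo::half);  // Optional.empty
        System.out.println(ok + " " + bad);
    }
}
```

Stream works the same way: flatMap is the chaining ("bind") operation in both cases.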
Simplifying Async code
• Simplifying Async/CompletableFuture code with BascomTask @ma
  BascomTask is a lightweight task orchestration library that provides
  thread-level parallelization in a way that is as frictionless as
  possible. This means that, by design, using BascomTask is very close
  to pure Java code in both usage and performance, including use of
  CompletableFutures where they can be used effectively, but without
  having to rely solely on them for task-level organization of a
  codebase. BascomTask aims to complement rather than replace
  CompletableFutures and freely integrates with them.
JPA Summary
• Java: Migrating from JPA 2.x to 3.0
• JAVA, Soft Arch: REST Query Language with Spring Data JPA and Querydsl
  " The Holy Grail - a REST Query Language"
Amdahl's law
• Amdahl's law provides a formula to compute the theoretical maximum
  speed up by providing multiple processors to an application.
• theoretical speedup is computed by S(n) = 1 / (B + (1-B)/n) where n
  denotes the number of processors and B the fraction of the program
  that cannot be executed in parallel. When n converges against
  infinity, the term (1-B)/n converges against zero. Hence the formula
  can be reduced in this special case to 1/B. As we can see, the
  theoretical maximum speedup behaves reciprocal to the fraction that
  has to be executed serially. This means the lower this fraction is,
  the more theoretical speedup can be achieved.
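The formula above can be checked with a few lines of code (class and method names are just for illustration):

```java
public class Amdahl {
    // S(n) = 1 / (B + (1 - B) / n)
    // B = fraction that cannot be parallelized, n = number of processors.
    static double speedup(double b, int n) {
        return 1.0 / (b + (1.0 - b) / n);
    }

    public static void main(String[] args) {
        // With 10% serial code the speedup is capped near 1/B = 10,
        // no matter how many processors we add:
        System.out.printf("S(4)=%.2f  S(16)=%.2f  limit=%.2f%n",
                speedup(0.1, 4), speedup(0.1, 16), 1.0 / 0.1);
    }
}
```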
• Concurrency can be greatly simplified by using OS processes
  (vs in-process threads). This automatically avoids many/most of the
  problems of thread concurrency.
• When context switching among OS processes is not an issue, a multi-process
  architecture is a safer and easier approach.
  Note for example how all modern web browsers have switched to a
  multi-process (one per-tab) approach to gain stability and safety.

• Invoking sub-processes from java, curl example:
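A minimal sketch using ProcessBuilder; `echo` is used below so the snippet runs anywhere — swap in curl's arguments (e.g. "curl", "-s", url) for the real case. Class and method names are illustrative.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.stream.Collectors;

public class SubProcessDemo {
    // Run an external command and capture its stdout as a String.
    // For curl you would pass e.g.: run("curl", "-s", "https://example.com")
    static String run(String... cmd) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true);            // merge stderr into stdout
        Process p = pb.start();
        String out;
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            out = r.lines().collect(Collectors.joining("\n"));
        }
        if (p.waitFor() != 0)                    // propagate failures
            throw new RuntimeException("command exited with non-zero status");
        return out;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run("echo", "hello")); // portable stand-in for curl
    }
}
```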
Vert.X Mutiny APIs
• Reactive isn't Complicated with Vert.x and the new Mutiny APIs
• higher-order functions (decorators) to enhance any functional interface,
  lambda expression or method reference with a Circuit Breaker, Rate
  Limiter, Retry or Bulkhead.
• more than one decorator on any functional interface, lambda or method ref
  can be stacked.
• JDeps: dependency analysis tool for Java bytecode (class files and JARs).

$º$ jdeps sh-2.6.3.jar º     ←  -verbose:class will list dependencies between classes
  sh-2.6.3.jar → java.base      (vs aggregating them to package level)
  sh-2.6.3.jar → java.datatransfer
  sh-2.6.3.jar → java.desktop
  sh-2.6.3.jar → java.logging
  sh-2.6.3.jar → java.prefs
  sh-2.6.3.jar → java.sql
  sh-2.6.3.jar → java.xml
  sh-2.6.3.jar → not found
  sh-2.6.3.jar → com.beust.jcommander (not found)
  sh-2.6.3.jar → ...
  java.base    → java.lang
  java.base    → javax.swing
  java.desktop → org.slf4j (not found)
  [... truncated many more package dependencies ...]
GraalVM Summary

- Graal: How to Use the New JVM JIT Compiler in Real Life

- GraalVM Native Image
º"native-image"º utility:
 - ahead-of-time compiler to a Bºstandalone executableº.
 - JVM is replaced with the necessary components (memory manager,
   thread scheduler) in the "Substrate VM" runtime:
   Substrate VM is the name for the runtime components
   (like the deoptimizer, garbage collector, thread scheduling, etc.).
 - Result has faster startup time and lower runtime memory footprint.
 - It statically analyses which classes and methods are reachable
   and used during application execution and passes all this
   reachable code as the input to the GraalVM compiler for
   ahead-of-time compilation into a native library.
Example Usage:
  # tested with graalvm 19.3.1
  ./gradlew spotlessApply
  ./gradlew build
  ./gradlew shadowJar  // ← create fat JARs, relocate packages for apps/libs
  cd "build/libs" || exit
  native-image \
     -cp svm-1.0-SNAPSHOT-all.jar \
     org.web3j.svm.MainKt \
     --no-fallback \
     --enable-https \
Spring GraalVM issues
Working toward GraalVM native image support without requiring additional
configuration or workaround is one of the themes of upcoming Spring Framework
5.3. The main missing piece for considering GraalVM as a suitable deployment
target for Spring applications is providing custom GraalVM Feature
implementation at Spring Framework level to automatically register classes
used in the dependency mechanism or Spring factories, see the related issue #
22968 for more details.
Quarkus (GraalVM) Framework

Extracted from "Hibernate with Panache" by Emmanuel Bernard.
""" Quarkus is Supersonic Subatomic Java. extremely fast with low memory footprint""".
Hibernate ORM is the de facto JPA implementation and offers you the full
breadth of an Object Relational Mapper. It makes complex mappings possible,
but it does not make simple and common mappings trivial. Hibernate ORM with
Panache focuses on making your entities trivial and fun to write in Quarkus.

  Panache example:
  public class Person extends PanacheEntity {
      public String    name;
      public LocalDate birth;
      public Status    status;

      public staticºPerson      findByName(String name)º{
        return find("name", name).firstResult();
      public staticºList˂Person˃ findAlive           ()º{
        return list("status", Status.Alive);
      public staticºvoid       deleteStefs           ()º{
        delete("name", "Stef");
• JobRunr 4.0 Delivers Improved Integration with Spring Starter, Quarkus and Micronaut.

• Spring Batch or Quartz force you to implement custom interfaces and
  they add a lot of overhead, whereas I just want to run some long-running
  tasks in the background. JobRunr solves all of this by just
  accepting any Java 8 lambda, analyzing it and storing the job
  information in a SQL or NoSQL database. You can schedule these jobs
  to be executed as soon as possible, somewhere in the future or in a
  recurring manner using Cron expressions.

• Dehuysser: I would like to highlight three things:
  • JobRunr does some magic with ASM (which is also used by Spring,
    Hibernate and a lot of other frameworks) to analyze the job lambda.
    By using ASM, I really learned a lot about the JVM bytecode, which is
    not as difficult as I imagined.
  • As JobRunr performs bytecode analysis, it also participates in
    the Oracle Quality Outreach program. This means JobRunr is tested
    against upcoming releases of the JVM. This helps me to make sure that
    it will continue working on newer Java releases and also helps the
    Java community as bugs in the JVM itself are caught earlier.
  • For users that need support or extra features, there is also JobRunr Pro,
    which adds extra features like queues with different priorities (high
    priority jobs get processed before low priority jobs), job chaining,
    atomic batches, and a better dashboard with search capabilities.
OWASP (maven) Plugin 
• OWASP Dependency-Check identifies project dependencies and checks
  if there are any known, publicly disclosed, vulnerabilities.
AsyncAPI
- Building the future of event-driven architectures.
- Open source tools to easily build and maintain your event-driven architecture.
- All powered by the AsyncAPI specification, the industry standard for defining
  asynchronous APIs.
Sign/Verify JARs

$º$ jarsigner file01.jar $keystore_alias º ← Sign Jar. use flag -sigalg ... to set sign. algorithm

$º$ jarsigner -verify file01.jar         º ← Verify jar

Non Classified/BackLog
Guava VisibleForTesting[qa]
REF: @[]
Javalin: KISS Kotlin/Java web framework
- Inspired by the JavaScript Koa.js framework

- Ex: Declare server and API in the same place
  | import io.javalin.ApiBuilder.*;
  | import io.javalin.Javalin;
  | Javalin app = Javalin.create(config -˃ {
  |     config.defaultContentType = "application/json";
  |     config.addStaticFiles("/public");
  |     config.enableCorsForAllOrigins();
  | }).routes(() -˃ {
  |     path("users", () -˃ {
  |         get(UserController::getAll);
  |         post(UserController::create);
  |         path(":user-id", () -˃ {
  |             get(UserController::getOne);
  |             patch(UserController::update);
  |             delete(UserController::delete);
  |         });
  |         ws("events", userController::webSocketEvents);
  |     });
  | }).start(port);
JNR(JNI/UNIX friendly)
( used by Netty and others...)
 - load native libraries without writing JNI code by hand, or using tools such as SWIG.
 - jnr-unixsocket: UNIX domain sockets (AF_UNIX) for Java
 - Enhanced X-platform I/O for the Java Native Runtime
 - Pure java x86 and x86_64 assembler
 - AArch64 assembler for the Java Native Runtime
 - A ProcessBuilder look-alike based entirely on native POSIX APIs
 Java Platform Module System (JPMS) (1.9+) 
- JSR 379: JAVA SE 9
By Paul Deitel

- higher level of aggregation above packages.
-ºuniquely named, reusable group of related packages and resources.º

- module descriptor: (compiled version of )
  /module-info.class  ( @ module root's folder)
  - name
  - dependencies (modules)
  - packages explicitly marked as available to other modules
    (by default  implicitly unavailable / strong encapsulation)
  - services offered
  - services consumed
  - list of modules allowed reflective access

- Rules:
  - Each module must explicitly state its dependencies.
  - provides explicit mechanism to declare dependencies between
    modules in a manner that’s recognized both at Bºcompile timeº
    and Bºexecution timeº.

- The java platform is now modularized into ~ 95 modules
$º$ java --list-modulesº ←  List modules in SE, JDK, Oracle, ...
  ( custom runtimes can be created )

BºModule Declarationsº
$º$ cat º
  module java.desktop { ← body can be empty
     requires modulename; ← 'static' flag: required just at compile time.
     requires transitive java.xml; ← if a java.desktop method returns a type
                                     from the java.xml module, code using
                                     (reading) java.desktop become dependent
                                     on java.xml. Without 'transitive' compilation
                                     will fail.
     exports ...    ← declares module’s packages whose public types
                      (and their nested public and protected types)
                      are accessible to code in all other modules.
     exports to ... ← fine grained export
     uses           ← specifies a service used by this module
                      (making our module a service consumer).
                      → modules implements/extends the interface/abstract class

     provides...with ← specifies that a module provides a service implementation

     open 'package'  ← Specifies object introspection scope
     opens ... to


º--release $version compiler flag:º                                           [troubleshooting]
   """ ... --release X is more than just a shortcut to -source X
       -target X because -source and -target are not sufficient to safely
       compile to an older release. You also need to set a -bootclasspath
       flag which must correspond to the older release (and this flag is
       often forgotten). So, in Java 9 they made a single --release flag
       which is a replacement for three flags: -source, -target and
       -bootclasspath. """
JAAS
- In UNIX, by default we authenticate against /etc/passwd, but the
  (P)luggable (A)uthentication (M)odule (PAM) allows checking against other sources.
  JAAS is PAM's counterpart for Java, offering a common AAA front-end to
  file/ddbb/LDAP/... backends.
Eclipse Microprofile

- launched at JavaOne 2016 to address the shortcomings in the Enterprise Java microservices space.

- MicroProfile specifies a collection of Java EE APIs and technologies which together
  form a core baseline microservice that aims to deliver application portability across multiple runtimes.

- MicroProfile 1.0 spec includes a subset of the 30+ Java Enterprise specifications:
  - JAX-RS 2.0 for RESTful endpoints
  - CDI 1.1 for extensions and dependency injection
  - JSON-P 1.0 for processing JSON messages.

- MicroProfile 1.2  (September 2017) include:
  - Configuration 1.1
  - Fault Tolerance
  - JWT
  - Metrics
  - Health Check

- MicroProfile 2.0 (Future). It is expected it will align all APIs to Java EE 8.

- vendors runtime support:
  - WebSphere Liberty IBM
  - TomEE from Tomitribe
  - Payara
  - RedHat's WildFly Swarm
  - KumuluzEE.

- Community support:
  - London Java Community
  - SOUJava
  - ...

- key code sample consists of four microservices and a front-end application.
  Vendor            |     JAR |      StartUp
                    | size/Mb | Time in Secs
  WebSphere Liberty |   35    |            7
  WildFly Swarm     |   65    |            6
  Payara            |   33    |            5
  TomEE             |   35    |            3
  KumuluzEE*        |   11    |            2

- CDI-Centric Programming Model
  - Context and Dependency Injection specification
  - Two of its most powerful features are interceptors and observers.
    - Interceptors perform cross-cutting tasks that are orthogonal to business logic
      such as auditing, logging, and security
    - The baked-in event notification model implements the observer
      pattern to provide a powerful and lightweight event notification system
      that can be leveraged system-wide.
Concurrency Classes Video
- Library/API for generating .java source files.
- Useful for things like:
  - annotation processing
  - interacting with metadata files (e.g., database schemas, protocol formats).
  - Transpiler (language A → Java Src ).
  Bºkeeping a single source of truth for the metadataº.
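The idea can be sketched by hand with a StringBuilder (real source-generation libraries offer typed builder APIs; the class, method, and metadata shape below are illustrative assumptions):

```java
public class CodeGenDemo {
    // Generate a trivial Java class from metadata (e.g. a DB schema),
    // so the metadata stays the single source of truth.
    // Each fields entry is {type, name}.
    static String generate(String className, String[][] fields) {
        StringBuilder sb = new StringBuilder();
        sb.append("public class ").append(className).append(" {\n");
        for (String[] f : fields)
            sb.append("    private ").append(f[0]).append(' ')
              .append(f[1]).append(";\n");
        sb.append("}\n");
        return sb.toString();
    }

    public static void main(String[] args) {
        // e.g. derived from a "users" table definition:
        System.out.print(generate("User",
                new String[][]{{"long", "id"}, {"String", "name"}}));
    }
}
```

A real annotation processor or transpiler would emit this text to a .java file and feed it to the compiler.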
• Project initiated by Pivotal to provide an async/reactive alternative to JDBC.

RºWARNº: Since Spring Data R2dbc evolved very quickly, there are plenty
         of breaking changes introduced since Spring 5.2 and Spring
         Data R2dbc 1.2.
         Breaking changes (compared to Spring Data R2dbc 1.1):
           · The Spring Data R2dbc 1.1 DatabaseClient was split into two parts:
             a simple new DatabaseClient is part of the Spring framework, as an
             alternative to Jdbc.
           · The other part of the old DatabaseClient was reorganized into a
             new class, R2dbcEntityTemplate, which plays the role of JdbcTemplate.
    .flatMapMany ( conn -˃
       conn.createStatement ( "SELECT value FROM test" )
           .execute () )
    .flatMap ( result -˃
       result.map ( (row, metadata) -˃ row.get("value") ) )

    r2dbc:                 ← ... (src/main/resources/)application.yml  example:
      username: XXX
      password: XXX
      url: r2dbc:postgresql://...:5432/ddbb_app01
      pool:
        max-create-connection-time: 5s
        initial-size: 5        ←················ probably much lower numbers than
        max-size: 10                             those used for JDBC.
- Nailgun is a client, protocol, and server for running Java programs
  from the command line without incurring the JVM startup overhead.

- Programs run in the server (which is implemented in Java), and are
  triggered by the client (written in C), which handles all I/O.

9 Profiling tools
jLine: GNU/readline alike library for JAVA:
- Builtin support for console variables, scripts, custom pipes, widgets and object printing.
- Autosuggestions
- Language REPL Support

PicoCli @[] @[]
- Picocli is a one-file framework for creating user-friendly
  (autocompletion, subcommands, ...) command line, GraalVM-enabled Java
  applications with almost zero code.
- It supports a variety of command line syntax styles including POSIX,
  GNU, MS-DOS and more.
- It generates highly customizable usage help messages that use ANSI
  colors and styles to contrast important elements and reduce the
  cognitive load on the user.
- 1 source file, so apps can include it as source and avoid adding a
  dependency!!!
Clig.Dev @[]
· Command Line Interface Guidelines
· An open-source guide to help you write better command-line programs,
  taking traditional UNIX principles and updating them for the modern day.
· ...
· Use a command-line argument parsing library where you can. Either your
  language's built-in one, or a good third-party one. They will normally
  handle arguments, flag parsing, help text, and even spelling suggestions
  in a sensible way.
· Note: Alternatives in other languages:
  · Go:     Cobra, cli
  · Node:   oclif
  · Python: Click, Typer
  · Ruby:   TTY
  · Rust:   clap, structopt
  · PHP:    console
A Year with Java 11 in Production!
Andrzej Grzesik talks about Revolut's experience running Java 11 in production for over a year. He talks about the doubts they had, some pain points and gains, as well as things that surprised them. He discusses tools, alternative JVM languages, and some 3rd-party products.
Avian: Lightweight JVM  ("Embedded java")
Avian is a lightweight virtual machine and class library designed to
provide a useful subset of Java’s features, suitable for building
self-contained applications.

From Mike's blog:
  | Enter Avian
  |     “Avian is a lightweight virtual machine and class library
  |    designed to provide a useful subset of Java’s features, suitable
  |    for building self-contained applications.”
  | So says the website. They aren’t joking. The example app demos
  | use of the native UI toolkit on Windows, MacOS X or Linux. It’s not
  | a trivial Hello World app at all, yet it’s a standalone
  | self-contained binary that clocks in at only one megabyte. In
  | contrast, “Hello World” in Go generates a binary that is 1.1mb in
  | size, despite doing much less.
  | Avian can get these tiny sizes because it’s fully focused on
  | doing so: it implements optimisations and features the standard
  | HotSpot JVM lacks, like the use of LZMA compression and ProGuard to
  | strip the standard libraries. Yet it still provides a garbage
  | collector and a JIT compiler.
  Experimental Reactive Relational Database Connectivity Driver, R2DBC, Announced at SpringOne

jEnv
 command line tool to help you forget how to set the JAVA_HOME environment variable:

 $ jenv add /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
   oracle64- added
 $ jenv add /Library/Java/JavaVirtualMachines/jdk17011.jdk/Contents/Home
   oracle64- added

  List managed JDKs

  $ jenv versions
    * oracle64- (set by /Users/hikage/.jenv/version)

  $ jenv global oracle64- Configure global version
  $ jenv local oracle64- Configure local version (per directory)
  $ jenv shell oracle64- Configure shell instance version
SmallRye Mutiny

• SmallRye Mutiny is a reactive programming library. Wait? Another one?  Yes!
• Mutiny is designed after having experienced many issues with other
  Reactive programming libraries and having seen many developers lost
  in an endless sequence of flatMap. Mutiny takes a different approach.
  First, Mutiny does not provide as many operators as the other famous
  libraries, focusing instead on the most used operators. Furthermore,
  Mutiny provides a more guided API, which avoids having classes with
  hundreds of methods that cause trouble for even the smartest IDE.
  Finally, Mutiny has built-in converters from and to other reactive
  programming libraries, so you can always pivot.
Immutable Objects are faster
- One Framework to rule them all by Norman Maurer

Apache MINA: Netty Alt.
• Apache MINA vs Netty:
• network application framework which helps users develop high performance
  and high scalability network applications easily. It provides an abstract
  event-driven asynchronous API over various transports such as TCP/IP and
  UDP/IP via Java NIO.
• Apache MINA is often called:
  - NIO framework library
  - client server framework library, or
  - a networking socket library
• Apache MINA comes with many subprojects:
  - Asyncweb : An HTTP server built on top of MINA asynchronous framework
  - FtpServer: A FTP server
  - SSHd     : A Java library supporting the SSH protocol
  - Vysper   : An XMPP server
SwarmCache is a simple but effective distributed cache. It uses IP
multicast to efficiently communicate with any number of hosts on a
LAN. It is specifically designed for use by clustered,
database-driven web applications. Such applications typically have
many more read operations than write operations, which allows
SwarmCache to deliver the greatest performance gains. SwarmCache uses
JavaGroups internally to manage the membership and communications of
its distributed cache.

Wrappers have been written that allow SwarmCache to be used with the
Hibernate and JPOX persistence engines.
bytes java
- utility library that makes it easy to create, parse, transform,
  validate and convert byte arrays in Java
3 NIO ways to read files
- Read a small file using ByteBuffer and RandomAccessFile
- Use FileChannel and ByteBuffer to read large files
- Read a file using memory-mapped files
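A minimal sketch of the memory-mapped option (class and method names are illustrative):

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedReadDemo {
    // Map the whole file into memory and read it without explicit read() calls.
    static String readMapped(Path path) throws Exception {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);                         // backed by the OS page cache
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello nio");
        System.out.println(readMapped(tmp));
        Files.delete(tmp);
    }
}
```

Mapping shines for large files read repeatedly; for small one-shot reads the plain ByteBuffer approaches are simpler.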
- You can use the jlink tool to assemble and optimize a set of modules
  and their dependencies into a custom runtime image

Spring vs Google Guice

Dependency injection @[]
A: It's important to realize that Dagger was created after Guice, by one
   of Guice's creators ("Crazy Bob" Lee) after his move to Square:
   - Spring was originally released in October 2002.
   - Google originally publicly released Guice in March 2007.
   - JSR-330 formalized javax.inject annotations in October 2009, with
     heavy input from Google (Bob Lee), Spring, and other industry players.
   - Square originally released Dagger 1 publicly in May 2013.
   - Google originally released Dagger 2 publicly in April 2015.
   - Square marked Dagger 1 as deprecated 10 days ago, on September 15, 2016.

JSR-330: Provider˂MyTargetBean˃ @[]
FROM
- JSR-330 standardizes annotations like @Inject and the Provider
  interfaces for Java platforms.
- It doesn't currently specify how applications are configured, so it has
  no analog to Guice's modules.
Crypto.API(JCA) [TODO]
RºWARNº: Prefer a high level cryptographic API like Google Tink
         when possible.

  - See also:
    - Cryptography map:

    - Bouncy Castle FIPS JCA provider

  BºPKCS#11 Ref.guideº: [TODO]
  About PKCS#11: @[http:./../General/cryptography_map.html?topics=pkcs]
Loading properties
NOTE: It is probably better to use ENV.VARs to simplify compatibility
      with container deployments.

Config properties files located in .../src/main/resources/

InputStream is = getClass().getResourceAsStream("/");
Properties props = new Properties();
props.load(is);

└ To add comments to a properties file, start the line with '#' or '!'.
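A minimal sketch of both points (the file name config.properties is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.StringReader;
import java.util.Properties;

public class LoadProps {

    // Parse properties from a String; lines starting with '#' or '!' are comments.
    static Properties parse(String text) throws IOException {
        Properties props = new Properties();
        props.load(new StringReader(text));
        return props;
    }

    public static void main(String[] args) throws IOException {
        // Load from the classpath (a file under src/main/resources/;
        // "config.properties" is an illustrative name):
        Properties fromClasspath = new Properties();
        try (InputStream is = LoadProps.class.getResourceAsStream("/config.properties")) {
            if (is != null) {
                fromClasspath.load(is);
            }
        }

        Properties demo = parse(
            "# this line is a comment\n" +
            "! this one too\n" +
            "db.host=localhost\n");
        System.out.println(demo.getProperty("db.host")); // localhost
    }
}
```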
XML Stream parsing
[TODO]: Write summary of best XML libraries.
CGLIB library
- CGLIB library: Used for bytecode generation/method injection (Used by
  Spring Framework for example).
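CGLIB subclasses concrete classes at runtime; for interfaces the JDK ships the analogous java.lang.reflect.Proxy. A minimal method-interception sketch using the JDK mechanism (names are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface Greeter { String greet(String name); }

    // Wrap any Greeter in a dynamic proxy that intercepts every call,
    // similar in spirit to how Spring wraps beans
    // (java.lang.reflect.Proxy for interfaces, CGLIB for classes).
    static Greeter intercept(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("before " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("after  " + method.getName());
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(), new Class<?>[]{Greeter.class}, handler);
    }

    public static void main(String[] args) {
        Greeter plain = name -> "hello " + name;
        System.out.println(intercept(plain).greet("world")); // hello world
    }
}
```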
Debugger Architecture
Spring Reactor/Spring-Async
• Q:"Why Reactor when there's already RxJava2?"
   - RxJava2 targets Java 6, while for Reactor the Spring team decided to go all in
     and focus only on Java 8. This means that you can make use of all the new
     and fancy Java 8 features.
     - If you are going to use Spring 5, Reactor might be the better option.
     - But if you are happy with your RxJava2, there is no direct need to migrate to Reactor.

Reactive Spring with Vert.x @[]
- Reactive Spring Boot programming with Vert.x:
  The latest bundle of Red Hat supported Spring Boot starters was recently
  released. In addition to supporting the popular Red Hat products for our
  Spring Boot customers, the Red Hat Spring Boot team was also busy creating
  new ones. The most recent technical preview added is a group of Eclipse
  Vert.x Spring Boot starters, which provide a Spring-native vocabulary for
  the popular JVM reactive toolkit.
Example JVM config.
Server version:        Apache Tomcat/8.x
Server built:          unknown
Server number:         8.0.x
OS Name:               Linux
OS Version:            3.10.0-1062.9.1.el7.x86_64
Architecture:          amd64
Java Home:             /ec/local/appserver/u000/app/java/jdk1.8.0_121-strong/jre
JVM Version:           1.8.0_121-b13
JVM Vendor:            Oracle Corporation
Command line argument: -Djava.util.logging.config.file=.../
Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
Command line argument: -Xms1536m
Command line argument: -Xmx1536m
Command line argument: -XX:MetaspaceSize=512m
Command line argument: -XX:MaxMetaspaceSize=512m
Command line argument: -XX:MaxDirectMemorySize=1G
Command line argument: -XX:+UseParallelGC
Command line argument: -XX:ParallelGCThreads=4
Command line argument: -XX:+UseParallelOldGC
Command line argument: -XX:LargePageSizeInBytes=4m
Command line argument: -XX:-BindGCTaskThreadsToCPUs
Command line argument: -Djava.awt.headless=true
Command line argument:
Command line argument:
Command line argument:
Command line argument: -Dfile.encoding=UTF-8
Command line argument: -XX:ErrorFile=./logs/fatal_error/hs_err_pid%p.log
Command line argument:
Command line argument:
Command line argument:
Command line argument:
Command line argument:
Command line argument:
Command line argument:
Command line argument: -verbose:gc
Command line argument: -Xloggc:....API_TEST-gc.log
Command line argument: -XX:+PrintGCDetails
Command line argument: -XX:+PrintGCTimeStamps
Command line argument: -XX:+PrintTenuringDistribution
Command line argument: -XX:+PrintGCApplicationConcurrentTime
Command line argument: -XX:+PrintGCApplicationStoppedTime
Command line argument: -Djava.endorsed.dirs=.../tomcat8/endorsed
Command line argument: -Dcatalina.base=...
Command line argument: -Dcatalina.home=...
Command line argument:
mvn: Default Wrapper in 3.7

Java on VC.Studio
Vert.x 3.9 Fluent API Query
Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query
- You use the jhsdb tool to attach to a Java process or to a core dump
  from a crashed Java Virtual Machine (JVM).

- jhsdb is a Serviceability Agent (SA) tool. Serviceability Agent (SA)
  is a JDK component used to provide snapshot debugging, performance
  analysis and to get an in-depth understanding of the Hotspot JVM and
  the Java application executed by the Hotspot JVM.

- Even though native debuggers like gdb are available for examining the
  JVM, unlike jhsdb these native debuggers have no built-in understanding
  of the data structures in Hotspot and hence cannot provide insight into
  the Java application being executed. jhsdb knows about the locations and
  address ranges of crucial components of the JVM like the Java heap, heap
  generations, regions, code cache, ...
GraalVM Native Image

| FROM oracle/graalvm-ce:20.0.0-java11 as builder
| WORKDIR /app
| COPY . /app
| RUN gu install native-image
| # Build the app (via Maven, Gradle, etc) and create the native image
| FROM scratch
| COPY --from=builder /app/target/my-native-image /my-native-image
| ENTRYPOINT ["/my-native-image"]

- to build a statically linked native image:

  ...Luckily GraalVM has a way to also include the necessary system
  libraries in the static native image with musl libc:
  - In your Dockerfile download the musl bundle for GraalVM:

| RUN curl -L -o musl.tar.gz \
|     && \
|     tar -xvzf musl.tar.gz

  And then add a native-image parameter that points to the extracted location of the bundle, like:


  Now your native image will include the standard library system calls that are needed!

- If AOT compilation fails, it will fall back to just running the app in the JVM.
  To avoid it running on the JVM:

- FAIL-FAST: Don't Defer Problems to Runtime
  - make sure native-image is NOT being run with any of these params:

- Reflection Woes:
  - reflection happens at runtime, making it hard for an AOT compiler.
  - you can tell GraalVM about what needs reflection access,
    but this can quickly get a bit out-of-hand, hard to derive and maintain.
  - Micronaut and Quarkus do a pretty good job generating the reflection
    configuration at compile time but you might need to augment the
    generated config. (tricky with shaded transitive dependencies).
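A sketch of the kind of reflective call that defeats closed-world AOT analysis; in a native image the reflectively accessed class/method must be registered (e.g. in a reflect-config.json):

```java
import java.lang.reflect.Method;

public class ReflectiveCall {
    public static void main(String[] args) throws Exception {
        // The target class is only known at runtime, so a closed-world AOT
        // compiler cannot see it. For a native image, java.lang.String and
        // valueOf(int) would have to be listed in the reflection config.
        String className = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> cls = Class.forName(className);
        Method valueOf = cls.getMethod("valueOf", int.class);
        System.out.println(valueOf.invoke(null, 42)); // 42
    }
}
```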

  - To reliably generate a reflection config you need to exercise as many
    execution code paths as possible, ideally by running unit/integration tests.
  - GraalVM has a way to keep track of reflection and output the configuration.
    - Run the app on GraalVM and use a special Java agent that will be able to
      see the reflective calls.
      - grab GraalVM Community Edition:
      - set JAVA_HOME and PATH.
      - from release assets grab the right native-image-installable-svm-BLAH.jar file
        and extract it in the root of your GraalVM JAVA_HOME directory.
      - run tests with parameter:
        (This will generate the reflection config (and possibly other configs for
         dynamic proxies, etc).
      - tell native-image about those configs, like:

   - For Quarkus & Micronaut see their docs (Quarkus / Micronaut) for details on
     how to add your own reflection config files.
Async Servlets 3.0+:
Real-World Java 9

• Real-World Java 9:
  Trisha Gee shows via live coding how we can use the new Flow API to
  utilize Reactive Programming, how the improvements to the Streams API
  make it easier to control real-time streaming data and how the
  Collections convenience methods simplify code. She talks about other
  Java 9 features, including some of the additions to interfaces and
  changes to deprecation.
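The Flow API mentioned above can be sketched with the JDK's SubmissionPublisher (names are illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publish each item and collect what the subscriber receives.
    static List<String> collect(List<String> items) throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                  // back-pressure: one item at a time
                }
                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()         { done.countDown(); }
            });
            items.forEach(publisher::submit);
        }                                           // close() signals onComplete
        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(List.of("a", "b", "c"))); // [a, b, c]
    }
}
```

The subscriber pulls one item at a time via `request(1)`, which is the Flow API's back-pressure mechanism.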
- Three of the new classes introduced in JDK 8 are
  DoubleSummaryStatistics, IntSummaryStatistics,
  and LongSummaryStatistics of the java.util package. These classes make
  quick and easy work of calculating total number of elements, minimum
  value of elements, maximum value of elements, average value of
  elements, and the sum of elements in a collection of doubles,
  integers, or longs. Each class's class-level Javadoc documentation
  begins with the same single sentence that succinctly articulates
  this, describing each as "A state object for collecting statistics
  such as count, min, max, sum, and average."
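A minimal sketch of one of these classes (values are illustrative):

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class StatsDemo {
    public static void main(String[] args) {
        // One pass over the stream computes count, min, max, sum and average:
        IntSummaryStatistics stats =
            IntStream.of(3, 1, 4, 1, 5).summaryStatistics();
        System.out.println(stats.getCount());   // 5
        System.out.println(stats.getMin());     // 1
        System.out.println(stats.getMax());     // 5
        System.out.println(stats.getSum());     // 14
        System.out.println(stats.getAverage()); // 2.8
    }
}
```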
Deploy WAR to k8s with JKube

• The JKube Maven plugin converts a WAR (dependent on a container) into a cloud-native app.

  - pom.xml:
    ˂!-- ... --˃
    ˂!-- ... --˃
      ˂failOnMissingWebXml˃false˂/failOnMissingWebXml˃  ← configure maven-war-plugin so
                                                          that it won't fail due
      ˂!-- ... --˃                                        to a missing web.xml file.

      configure JKube to create service-resource manifest using NodePort as the spec.type.

          ˂artifactId˃kubernetes-maven-plugin˂/artifactId˃ ← Alt: openshift-maven-plugin.
        ˂!-- ... --˃

• example project contains three Java classes:
  └ ExampleInitializer: replaces standard WEB-INF/web.xml
    registering Spring's DispatcherServlet directly.

       final AnnotationConfigWebApplicationContext context
              = new AnnotationConfigWebApplicationContext();
       final ServletRegistration.Dynamic dsr
              = servletContext.addServlet("dispatcher",
                new DispatcherServlet(context));

  └ ExampleConfiguration: Spring-specific config enabling Spring MVC.

  └ ExampleResource: standard Spring @RestController.

- Deploy to Kubernetes:
    $ mvn clean package    ← generate war in target/
   º$ mvn k8s:build    º   ← Build OCI image (webapp/example:latest)
                             using jkube/jkube-tomcat9-binary-s2i by default.
                             Alternatives like Jetty can be used
   º$ mvn k8s:resource º   ← create required cluster config resource manifests
   º$ mvn k8s:apply    º   ← apply to (kubectl configured) cluster
    $ kubectl get pod      ← Verify that app is running
    $ mvn k8s:log          ← Retrieve app Logs
Running *.java (java 11+)
@[] by Mohamed Taman
  public class HelloUniverse{
      public static void main(String[] args){
        if ( args == null || args.length != 1 ){
           throw new RuntimeException("Name required");
        }
        System.out.printf("Hello, %s to InfoQ Universe!! %n", args[0]);
      }
  }
                                    Before Java 11:
  $º$ javac ./   º  ← compile (./HelloUniverse.class generated)
  $º$ java  ./HelloUniverse  arg1  º  ← start up JVM, load class, execute code

                                    After Java 11 (JEP 330)
  $º$ java arg1 º  ← compile, start up JVM, load class, execute code
                                        (no ./*.class generated on disk)
                                        Use $ºjava --source $version º
                                        when .java extension is not available or to
                                        specify the language version of the source code.
                                      - By default compiled code is part of an unnamed module.
                                        Add flag --add-modules=ALL-DEFAULT to have access to
                                        standard modules in JDK.

  - Run *.class file         ← Java launcher 1.0+
  - Run JAR's    main class  ← Java launcher 1.0+
  - Run Module's main class  ← Java launcher 9+
  - Run class in a source file  ← Java launcher 11+

  - Integrate with UNIX scripting using the old beloved Shebang notation:
  $º$ cat º
    #!/path/to/java --source $version
    public class HelloUniverse{
        public static void main(String[] args){

JBang: Simplified Java @[]
Today we’re announcing a new beta release of Conclave, a platform
that makes it easy to use secure hardware enclaves with Java. You can
use enclaves to:
- Solve complex multi-party data problems, by running programs on a
  computer that prevents the hardware owner from seeing the data.
- Protect sensitive data from the cloud.
- Make your hosted service auditable and trustworthy.
- Upgrade privacy on distributed ledger platforms like Corda.
Checkpointing outside the JVM
When OpenJDK's Java virtual machine (JVM) runs a Java application,
it loads a dozen or so classes before it starts the main class.

  STANDARD JVM RUN                     CHECKPOINT/RESTORE RUN
  ----------------------------         ---------------------------
  - runs a method several hundred      - 1st  run : WARM up once and checkpoint
    times before optimizing it:        - Next runs: restore checkpointed app.
    cost of long startup times.        BºStart time: seconds → millisecs!!!º

BºCheckpoint inside JVM HOW-TO:º [TODO]

BºCheckpoint Outside JVM HOW-TO:º
        - Under the hood it uses Linux Checkpoint/Restore in Userspace (CRIU).

  $ºCONSOLE 1                              CONSOLE 2º
  $º$ setsid java -XX:-UsePerfData \º    $º$ sudo criu dump -t $pid \  º ← stops and checkpoint
  $º    -XX:+UseSerialGC Scooby     º    $º   --shell-job -o dump.log  º   app

                                         $º$ sudo criu restore --shell-job \º ← Restore app
                                         $º   -d -vvv -o restore.log   º
- JUnit extension forºasserting JDK Flight Recorder eventsº
 ºemitted by an application identifying performance regressionsº
 º(e.g. increased latencies, reduced throughput).º

- JfrUnit supports assertions not on metrics like latency/throughput
  themselves, but on indirect metrics which may impact those.
  - memory allocation,
  - database IO
  - number of executed SQL statements
  - ...

- JfrUnit provides a means of identifying and analyzing such issues in
  a reliable, environment-independent way in standard JUnit tests,
  before they manifest as performance regressions in production.
Jrpip: Low code RMI
BºLet developers concentrate on logic vs network plumbing and TCP errors.º
- Jrpip implements remote interface method invocation, which allows different
  Java processes to interact with one another.
- designed to increase developer efficiency by providing useful features, such as
  the ability to deploy in any servlet container, interface
  implementation that doesn’t require RemoteException and automatic
  retries with once-execute semantics.
- efficient binary protocol that is streamed, reducing memory
  consumption and garbage collector issues.

- See also notes on gRPC:
Tablasco: JUnit table verification
- (By Goldman Sachs) JUnit rule that adds table verification to unit
  tests.ºFor software products that produce large amounts of data,ºthis
  tool can help to create automated tests that are both comprehensive
  and easy to maintain. Users only need to adapt their existing data to
  a table format that Tablasco understands. “Every test produces a
  color-coded HTML break report, which helps users quickly identify the
  issue,” said Barry, a vice president in the Technology Division.
  Furthermore, Tablasco features automated baseline management,
  allowing users to easily update the baseline file of a failing test.

- JUnit rule for comparing tables and Spark module for comparing large data sets.
SMTP: Sending mail
- Based on AWS guide @[]
  but applies to any standard SMTP e-mail server.

  - pom coordinates:

  import java.util.Properties;

  import javax.mail.Message;
  import javax.mail.Session;
  import javax.mail.Transport;
  import javax.mail.internet.InternetAddress;
  import javax.mail.internet.MimeMessage;

  public class SendMailThroughSMTP {

      static final String
          FROM     = "...", // ← in AWS this address must be verified
          FROMNAME = "...", // ← Human readable FROM
          TO       = "...", // ← in AWS if sending account is still in the
                            //   sandbox this address must be ºcase-sensitiveº verified.

          CONFIGSET = "ConfigSet", // ← Configuration Set name used for this message.
                                   //  If commented out remove header below.  º*1º
          BODY      = String.join( "\n",
              "˂h1˃header˂/h1˃", "html line2", "html line3", "..." ),
          SMTP_HOST = "",
          SMTP_USERNAME = "..." , // º*2º
          SMTP_PASSWORD = "...";
          // SMTP_HOST REF:

      static final int PORT = 587; // The port you will connect to on the Amazon SES SMTP endpoint.

      public static void main(String[] args) throws Exception {
          final Properties props = System.getProperties();
          props.put("mail.smtp.port"           , PORT  );
          props.put("mail.transport.protocol"  , "smtp");
          props.put("mail.smtp.starttls.enable", "true");
          props.put("mail.smtp.auth"           , "true");
          Session session = Session.getDefaultInstance(props); // ← represents mail session

          final MimeMessage
              msg = new MimeMessage(session);    // ← Build message
              msg.setFrom(new InternetAddress(FROM, FROMNAME));
              msg.setRecipient(Message.RecipientType.TO,
                     new InternetAddress(TO));
              msg.setSubject("...");
              msg.setContent(BODY, "text/html; charset=UTF-8");
              msg.setHeader(                     // Remove if not using a configuration set
                  "X-SES-CONFIGURATION-SET", CONFIGSET);

          final Transport transport =            // ← Create transport
              session.getTransport();

          try { // Send the message.
              transport.connect(SMTP_HOST, SMTP_USERNAME, SMTP_PASSWORD); // Connect to SMTP using username/pass
              transport.sendMessage(msg, msg.getAllRecipients());    // Send email.
          } catch (Exception ex) {
              // process and rethrow
          } finally { transport.close(); }
      }
  }

  º*1º: More info at

  º*2º: AWS note, SMTP credentials are different to AWS credentials.
        SMTP username credential is 20-chars (letters and numbers)
· JFleet is a Java library whichºpersists large collections of Java POJOsº
 ºin a database as fast as possible, using the best available techniqueº
 ºin each database provider,ºachieving this with alternate persistence
  methods from each JDBC driver implementation.

· Its goal is to store a large amount of information in a single table
  using available batch persistence techniques.

· despite being able to use JPA annotations to map Java objects to
  tables and columns,ºJFleet is not an ORM.º
JVM Anatomy
Java Class Library (JCL, rt.jar)
rt.jar contains the Java Class Library (JCL)

- The Java Class Library (JCL) is a set of dynamically loadable
  libraries that Java applications can call at run time. Because the
  Java Platform is not dependent on a specific operating system,
  applications cannot rely on any of the platform-native libraries.
  Instead, the Java Platform provides a comprehensive set of standard
  class libraries, containing the functions common to modern operating
  systems.

- Java Class Library (JCL) is almost entirely written in Java, except
  for the parts that need direct access to the hardware and operating
  system (such as for I/O or bitmap graphics). The classes that give
  access to these functions commonly use Java Native Interface wrappers
  to access operating system APIs.

- The Java Class Library (rt.jar) is located in the default bootstrap
  classpath[1] and does not have to appear in the classpath declared
  for the application. The runtime uses the bootstrap class loader to
  find the JCL.
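The bootstrap loading of the JCL can be observed from Java itself: JCL classes report a null class loader (the bootstrap loader), while application classes do not (class name is illustrative):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // JCL classes come from the bootstrap loader, which is reported as null:
        System.out.println(String.class.getClassLoader());             // null
        // Application classes come from the system/application class loader:
        System.out.println(LoaderDemo.class.getClassLoader() != null); // true
    }
}
```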

BºThe Java Module System (part of the Java 9 release) broke the º
Bºmonolithic "rt.jar" JAR file and modularized the JCL itself inº
Bºseveral modules with specified dependencies.º
Java Threading resources

- Java Thread, Concurrency and Multithreading Tutorial
- Java Threads and Concurrent Locks with Examples
- Java Thread Deadlock Example and Thread Dump Analysis using VisualVM
- Java Thread Starvation and Livelock with Examples
- Examining Volatile Keyword with Java Threads
- Java Threads Wait, Notify and NotifyAll Example
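The wait/notify pattern covered by the resources above can be sketched as a hand-rolled single-message handoff (names are illustrative):

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private String message;                    // guarded by lock

    void send(String msg) {
        synchronized (lock) {
            message = msg;
            lock.notifyAll();                  // wake every waiting thread
        }
    }

    String receive() throws InterruptedException {
        synchronized (lock) {
            while (message == null) {          // always wait in a loop:
                lock.wait();                   // guards against spurious wakeups
            }
            return message;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread producer = new Thread(() -> demo.send("ping"));
        producer.start();
        System.out.println(demo.receive());    // ping
        producer.join();
    }
}
```

In new code, java.util.concurrent types (BlockingQueue, CountDownLatch, ...) are usually preferable to raw wait/notify.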
Jimfs: in-memory file system
- Jimfs supports almost all the APIs under java.nio.file:
  - Create/delete/move/Copy  files/dirs.
  - Read/write files with FileChannel/SeekableByteChannel/InputStream/OutputStream/...
  - Symbolic links.
  - Hard links to regular files.
  - SecureDirectoryStream, for operations relative to an open directory.
  - Glob and regex path filtering with PathMatcher.
  - Watching for changes to a directory with a WatchService.
  - Built-in (file) attribute views that can be supported include
    "basic", "owner", "posix", "unix", "dos", "acl" and "user".
- Simple usage:

  // For a simple file system with Unix-style paths and behavior:
  final FileSystem fs = Jimfs.newFileSystem(Configuration.unix());
  final Path foo = fs.getPath("/foo");
  final Path hello = foo.resolve("hello.txt");
  Files.write(hello, ImmutableList.of("hello world"), StandardCharsets.UTF_8);

Java Poet

• Java API for generating .java source files.
• useful for:
  - transpiling: Custom language to java.
  - annotation processing
  - interacting with metadata files (database schemas, protocol formats,...).

• Avoid boilerplate while also keeping a ºsingle source of truthº.
Storing HttpSessions to Redis
AmadeusITGroup/HttpSessionReplacer: Store JEE Servlet HttpSessions in Redis
Dekorate k8s annotations
• ex:
     name = "hello-world-fwless-k8s",
     ports = @Port(name = "web", containerPort = 8080),
     expose = true,
     host = "",
     imagePullPolicy = ImagePullPolicy.Always

  Kubernetes Output:
  · target/classes/META-INF/dekorate/kubernetes.yml
    Use like:
  $º$ kubectl create ns demo                  º
  $º$ kubectl apply -f kubernetes.yml -n demo º
  · target/classes/META-INF/dekorate/kubernetes.json

• Integration with Jib to generate OCI images is also available.

Servlet servers Arch
embedded servers usually comprise two logical components:

│  LOGICAL COMPONENT                │  TOMCAT equivalent
│  a web server component           │  Coyote
│  listening for HTTP requests and  │
│  returning HTTP responses         │
│  an execution context to make     │  Catalina, based on Servlet API
│  a Java web application interact  │  (usually called the Servlet
│  with the web server.             │  Container)
Yourkit profiler
• YourKit: commercial (non-OSS) profiler with advanced features
• Tight integration with your IDE
• "Smart what if" allows you to evaluate performance gains of proposed optimizations
  without re-profiling the application.
• CPU call tree
• Flame graphs
• Database queries and web requests:
  - display slow SQL queries and web requests.
  (support for MongoDB, Cassandra, HBase,...)

• Memory profiling: object heap, traversal of the object graph.
  The profiler chooses the best way to show you the content of a HashMap, String, ...
  For each object you can see how much memory it retains, and what would happen if a
  particular reference in the object graph did not exist.
 ºThis saves time, and lets you estimate the effect of a memory leak fix without changing the code.º

• 40+ comprehensive inspections are waiting to make your code faster and more efficient.

• Profiler knows a lot about typical issues in Java applications and automatically finds them.

• Report inefficient collections and I/O operations.

• Find/Resolve thread synchronization issues.
  It is possible to combine thread states with HTTP requests and SQL queries to get
  the full picture of how requests are processed by your applications.

• Exception profiling: Massive exception throwing is a common but often hidden
  performance problem.

•ºDeobfuscate the code on the fly restoring original class, method and field names of º
 ºapplications obfuscated with ProGuard, yGuard, Zelix KlassMaster, Allatori, and other º
 ºpopular Java obfuscators.º

• control over profiling overhead, making even production profiling feasible.

• Extensible API to create custom probes.

• Command line support. (UI free).

• free licenses for non-commercial open source projects.
  Special offers for educational and scientific organizations.
Gradle Summary
•º# Gradle Wrapper #º
REF: @[]
  - recommended way to execute any Gradle build
  - invokes Gradle with a declared version (vs. a randomly installed one in
    the OS), making builds robust and reproducible.

  • Workflow:
  - set up a new Gradle project
  - add Wrapper to new project
    (a gradle runtime must be installed)
    $ gradle wrapper \
      --gradle-version 5.1 \                      ← optional
      --distribution-type bin \                   ← optional
      --gradle-distribution-url ... \             ← optional
      --gradle-distribution-sha256-sum ...        ← optional

    (SHA256 hash sum used to verify the downloaded Gradle distribution)

    → Task :wrapper
    → 1 actionable task: 1 executed
    ├── build.gradle
    ├── settings.gradle
    ├──ºgradle                           º←  generated dir. to be added to git
    │  º└── wrapper                      º
    │  º    ├── gradle-wrapper.jar       º ← code for downloading the distro
    │  º    └── gradle-wrapper.propertiesº
    ├──ºgradlew    º   ← once generated, use it like $ ./gradlew build
    gradle/wrapper/  is generated
    with the information about the Gradle distribution:
     - server hosting the Gradle dist. Ex:
     - type of Gradle dist.
       (default to -bin dist with only runtime -no sample code,docs,...)
     - The Gradle version used for executing the build.
       (default to local installed one)

  - Check the generated wrapper files into "git",
    including the (small) jar files.
  - run a project with provided Wrapper
  - upgrade the Wrapper to new Gradle version when desired.

  - ºCustomizing the wrapperº
    - the built-in wrapper task exposes numerous options
      to bend the runtime behavior to your needs.
    build.tasks.wrapper {
      distributionType = Wrapper.DistributionType.ALL
    }
  - HTTP Basic Authentication (RºWARNº: use only with TLS connections)
    alt 1: ENV.VARS:
    alt 2: gradle/wrapper/

  - ºVerifying downloadº

• Multi-module Deployer:
  (Java Example project available in github)
  - library built to speed up deployment of microservice based applications.
  - build and run each application module.
  - configure deployment dependencies between modules
    by just creating and running a simple application.

  └ Installation
    1) Add to your build.gradle the following function:
     def downloadLibFromUrl(String libSaveDir, String libName, String libUrl) {
         def folder = new File(libSaveDir)
         if (!folder.exists()) {
             folder.mkdirs()
         }
         def file = new File("$libSaveDir/$libName")
         if (!file.exists()) {
             ant.get(src: libUrl, dest: file)
         }
         getDependencies().add('compile', fileTree(dir: libSaveDir, include: libName))
     }

    2) Add the following code to your dependencies declaration:
     dependencies {
         /* ... */
         def libSaveDir = "${System.properties['user.home']}/.gradle/caches/modules-2/files-2.1"
         def version = '1.1.1'
         def libName = "multi-module-deployer-${version}.jar"
         def url = "$version/$libName"
         downloadLibFromUrl(libSaveDir, libName, url)

  └ Usage example

    import multi.module.deployer.MultiModuleDeployer;
    import multi.module.deployer.moduleconfig.ModuleConfig;
    import multi.module.deployer.moduleconfig.ModuleConfigFactory;

    public class App {
      public static void main(String[] args) {
          MultiModuleDeployer multiModuleDeployer = new MultiModuleDeployer();
          // commands to run the first module
          String   linuxCmd = "linux commands to deploy first module";
          String windowsCmd = "windows commands to deploy first module";
          ModuleConfig firstModuleConfig =
            ModuleConfigFactory.httpModuleConfig(linuxCmd, windowsCmd, 8080, "localhost", "/api/...");
          // adds the first configuration to the deployment list

          // commands to run the second module
          linuxCmd = "linux commands to deploy second module";
          windowsCmd = "windows commands to deploy second module";
          ModuleConfig secondModuleConfig = ModuleConfigFactory.httpModuleConfig(linuxCmd, windowsCmd, 3000, "localhost", "/api/...");

          // adds the second configuration to the deployment list
          // it will be started only after the first one is "ended"

          // deploys the modules

• What's New
- Gradle v6:
LMAX Disruptor: High Perf Inter-Thread Messaging Library

See also:

LMAX Exchange Getting Up To 50% Improvement in Latency From Azul's Zing JVM
Interesting points about GC tuning.
Tribe: reliable multicast
REF: @[]
- Unlike JGroups, Tribe only targets reliable multicast
  (no probabilistic delivery) and is optimized for cluster
  communications.
JGroups multicast @[]
- toolkit for reliable multicast communication.
- point-to-point FIFO communication channels (basically TCP)
- Targets high-performance cluster environments.
- @[]
  versatile modular unikernel designed to run unmodified Linux
  applications securely on micro-VMs in the cloud. Built from the ground up for
  effortless deployment and management of micro-services and serverless apps,
  with superior performance. (Includes CRaSH shell)