You may also be interested in the Web Technologies and Cryptography maps.
External Links
- Lang & VM specs
- AdoptOpenJDK prebuilt binaries
- @[https://wiki.openjdk.java.net/]
- @[https://openjdk.java.net/]

- (Active) Java JVM List
- Eclipse Tools for [QA]
- Excellent Java blog (Spanish)
- Java Enhancement Proposals (JEPs)
- Douglas Craig Schmidt's Java Lessons:
- Awesome Java

- @[https://java-source.net/] Collection of production-ready software developed in Java,
  from DDBBs, caches, servers, ...

Bibliography
- Effective Java, 3rd Edition, Joshua Bloch.
  ISBN-10: 0134685997, ISBN-13: 978-0134685991
- Java Performance: The Definitive Guide, Scott Oaks.
  ISBN-10: 1449358454, ISBN-13: 978-1449358457
TOP Mistakes/Classes/FAQs/...
(Necessarily incomplete but still quite pertinent list of core developers and companies)
- James Arthur Gosling: founder and lead designer of the Java programming language.

- Joshua J. Bloch:
  - Author of the book "Effective Java" (a must read)
    and co-author of two other books:
    - Java Puzzlers (2005)
    - Java Concurrency in Practice (2006)
  - Led the design and implementation of numerous
    Java platform features, including the
    Java Collections Framework, the java.math package,
    and the assert mechanism.

- Julien Viet:
  Core developer of Vert.x, CRaSH (http://www.crashub.org/),
  and many other interesting Java projects.

- Ben Evans:
  - jClarity co-founder.
  - Java Champion, author, speaker, consultant.
  - Voting member of Java's governing body (the JCP Executive Committee).
  - Author of 5 books:
    - “The Well-Grounded Java Developer”
    - new editions of “Java in a Nutshell”
    - “Java: The Legend” and “Optimizing Java”
  - Track lead for Java / JVM at InfoQ.
  - From one of his talk abstracts:
    "... I will explain how we might start to implement a JVM from scratch... then
    we will show how the Rust programming language provides a good alternative
    implementation language for our simple JVM. We will showcase some basic Rust
    language features and show how they can be used to provide a version of our
    JVM that is much cleaner and easier to understand, even if you've never
    seen Rust code before!"

- Emmanuel Bernard: Distinguished Engineer and Chief Architect Data at
  Red Hat (middleware). His work is Open Source. He is most well known for his
  contributions and lead of the Hibernate projects as well as his contribution
  to Java standards. His most recent endeavour is Quarkus (A Kubernetes Native
  Java stack tailored for GraalVM and OpenJDK HotSpot, crafted from the best of
  breed Java libraries and standards).

- Chad Arimura, vice president of Java Developer Relations at Oracle.

- https://community.oracle.com/community/groundbreakers/java/java-champions
- https://blogs.oracle.com/java/new-java-champions-in-2017
What's new
v.15 2020-09-15

- BºRECORDS!!!: non-verbose immutable classes.º
    Alt 1: short form
   ºrecord Personº(String name, int age) { }
    Alt 2: validating (compact) constructor
   ºrecord Personº(String name, int age) {
      Person {           ←  (optional) compact constructor: validate|normalize
        if (age ˂ 0)        params; fields are assigned implicitly afterwards
          throw new IllegalArgumentException("Too young");
      }
    }

    var john = new ºPersonº("john", 76);

    - records can NOT declare additional (internally computed) private or
      public instance fields.
    - can NOT extend other classes.
    - are ALWAYS final (cannot be extended).
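The record semantics above can be sketched as a small runnable example (hypothetical Person record; requires Java 16+, where records left preview):

```java
// Runnable sketch of record semantics (hypothetical Person record).
public class RecordDemo {
    record Person(String name, int age) {
        Person {                 // compact constructor: validate only,
            if (age < 0)         // fields are assigned implicitly afterwards
                throw new IllegalArgumentException("Too young");
        }
    }

    public static void main(String[] args) {
        var john = new Person("john", 76);
        // accessors, equals/hashCode/toString are generated automatically:
        System.out.println(john.name() + " " + john.age());       // john 76
        System.out.println(john.equals(new Person("john", 76)));  // true
    }
}
```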

- production ready ZGC low-latency garbage collector. 
  "...Oracle expects ZGC to be quite impactful for a multitude of workloads,
   providing a strong garbage collection option for developers..."

- Text blocks (JEP 378): make it easy to express strings spanning several lines
  (templates, ...)
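A minimal text-block sketch (requires Java 15+); the whitespace common to all lines and the closing delimiter is stripped as "incidental indentation":

```java
// Minimal text-block sketch (Java 15+).
public class TextBlockDemo {
    public static void main(String[] args) {
        String html = """
            <html>
              <body>Hello</body>
            </html>
            """;
        System.out.print(html);  // indentation left of the closing """ is stripped
    }
}
```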

- JEP 360: Sealed Classes (Preview)
  Avoid extension of classes not designed to be extended
  (control how a class hierarchy is used by third parties).
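A minimal sketch of the idea, written in the finalized Java 17 syntax (equivalent to the JEP 360 preview form); the Shape/Circle/Square hierarchy is a hypothetical example, not from the JEP text:

```java
// Sketch of a sealed hierarchy: only the permitted types may implement Shape.
public class SealedDemo {
    sealed interface Shape permits Circle, Square {}
    record Circle(double r)    implements Shape {}   // records are implicitly final
    record Square(double side) implements Shape {}
    // class Triangle implements Shape {}  // ← would NOT compile: not permitted

    static double area(Shape s) {
        if (s instanceof Circle c) return Math.PI * c.r() * c.r();
        if (s instanceof Square q) return q.side() * q.side();
        throw new IllegalStateException("unreachable: hierarchy is sealed");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3)));  // 9.0
    }
}
```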

- JEP 383: Foreign-Memory Access API (Preview)
  - access "foreign" (outside the Java heap) memory.
    Part of Project Panama, aiming at better interop with native (C/assembler) code.

v.14 2020-03-??
 └ More container awareness...
   - NUMA container support added to hotspot (JDK-8198715)
   - Add Container MBean to JMX (JDK-8199944)

 └BºRecord-types in Java 14:º
   Records aim to enhance the language's ability to model 
 Bº"plain data" aggregates with less ceremony.º

 └ Shenandoah GC
   "Shenandoah GC in JDK 14" series, by Roman Kennke (March 2020):
   the development of the Shenandoah Garbage Collector (GC) in JDK 14
   has seen significant improvements:
   - Self-fixing barriers: aim to reduce local latencies spent in
     barrier mid- and slow paths.
   - Concurrent root processing and concurrent class unloading: aim to
     reduce GC pause time by moving GC work from the pause to a
     concurrent phase.

v.12/13
@[https://www.infoq.com/news/2019/06/java13-feature-freeze/]
@[https://developers.redhat.com/blog/2019/06/27/shenandoah-gc-in-jdk-13-part-1-load-reference-barriers/]
@[https://developers.redhat.com/blog/2019/06/28/shenandoah-gc-in-jdk-13-part-2-eliminating-the-forward-pointer-word/]
@[https://developers.redhat.com/blog/2019/07/01/shenandoah-gc-in-jdk-13-part-3-architectures-and-operating-systems/]
└ More container awareness...
  - Add container support to the jhsdb command (JDK-8205992)
  - Flight Recorder improvements for containers (JDK-8203359)
  - Improve container support when the Join Controllers option is used (JDK-8217766)
  - Improve systemd slice memory limit support (JDK-8217338)
  - JFR jdk.CPUInformation event reports incorrect info when running in a
    Docker container (JDK-8219999)
v.11 (LTS) 2018/09
@[https://www.journaldev.com/24601/java-11-features]
@[https://www.infoq.com/news/2018/09/java11-released]
- More container awareness...
  - Remove -XX:+UnlockExperimentalVMOptions,
    -XX:+UseCGroupMemoryLimitForHeap (JDK-8194086)
  - jcmd -l and jps commands do not list JVMs in Docker containers (JDK-8193710)
  - Container metrics: -XshowSettings:system (JDK-8204107)
  - Update CPU count algorithm when both cpu shares and quotas are in use
    (JDK-8197867): -XX:+PreferContainerQuotaForCPUCount
New major features:
- Launch single-file source-code programs (JEP 330).
  The next command compiles and executes in one step:
  $ java someFile.java
- New String methods:
  - isBlank()     : true for empty or whitespace-only strings.
  - lines()       : returns a Stream˂String˃ of the lines of the string.
                    System.out.println(
                      "JD\nJD\nJD".lines().collect(Collectors.toList()) );
  - strip()       : similar to trim() but Unicode-aware.
                    Also stripLeading(), stripTrailing().
  - repeat(int n) : repeats the string n times.
- Local-Variable Syntax for Lambda Parameters (JEP 323)
  (var s1, var s2) -˃ s1 + s2
  - While it is possible to just skip the type in a lambda, ºvarº becomes
    necessary for annotations like @Nullable.
- Nest-Based Access Control (JEP 181): fixes some issues when using
  (discouraged) reflection.
- Dynamic Class-File Constants (JEP 309)
  - The class-file format now supports a new constant-pool form,
    ºCONSTANT_Dynamicº, reducing the cost and disruption of developing
    new forms of materializable class-file constants.
- Epsilon: A No-Op Garbage Collector (JEP 318):
  - Experimental.
  - Unlike the other JVM GCs, which are responsible for both allocating
    memory and releasing it, Epsilon only allocates memory.
    Useful for:
    -ºExtremely short-lived jobsº
    - Performance testing
    - Memory pressure testing
    - VM interface testing
    - Last-drop latency improvements
    - Last-drop throughput improvements
- Remove the JavaEE and CORBA modules (JEP 320):
  java.xml.ws, java.xml.bind, java.activation, java.xml.ws.annotation,
  java.corba, java.transaction, java.se.ee, jdk.xml.ws, jdk.xml.bind
  RºWARNº: the EE modules contain the support for JAXB and SOAP, still in
  relatively widespread use.
  - Check carefully whether build scripts need to be modified.
- Flight Recorder (JEP 328)
  - Profiling tool gathering diagnostics and profiling data.
  - Negligible performance overhead (˂1%): ºcan be used in productionº.
- HTTP Client (JEP 321)
  - HTTP/1.1, ºHTTP/2 and WebSocketsº.
  - Designed to improve the overall performance of sending requests by a
    client and receiving responses from the server.
  - TLS 1.3.
- Convenient reading/writing Strings to/from files:
  Path path = Filesº.writeStringº(
      Files.createTempFile("test", ".txt"),
      "This was posted on JD");
  System.out.println(path);
  String s = Filesº.readStringº(path);
  System.out.println(s);    // This was posted on JD
- ChaCha20, Poly1305 crypto (JEP 329)
  - implemented in the SunJCE provider.
- Improve (String and array) Aarch64 processor intrinsics (JEP 315)
  - also implements new intrinsics for the (java.lang.Math) sin, cos
    and log functions.
- ZGC: Scalable Low-Latency Garbage Collector (JEP 333)
  - Experimental.
  - Sub-10ms pause times, less than 15% perf. penalty.
- Deprecate Nashorn JS Engine (JEP 335)
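The new Java 11 String methods above can be exercised in a small runnable sketch (requires Java 11+):

```java
import java.util.List;
import java.util.stream.Collectors;

// Small sketch exercising the Java 11 String additions.
public class Java11Strings {
    public static void main(String[] args) {
        System.out.println("  ".isBlank());          // true
        List<String> lines = "JD\nJD\nJD".lines()    // Stream<String> of lines
                                         .collect(Collectors.toList());
        System.out.println(lines);                   // [JD, JD, JD]
        System.out.println(" hi ".strip());          // "hi" (Unicode-aware trim)
        System.out.println("ab".repeat(3));          // ababab
    }
}
```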
v.10 (2018/03)
- More container awareness...
  - Improve heap memory allocations (JDK-8196595):
    - -XX:InitialRAMPercentage, -XX:MaxRAMPercentage and
      -XX:MinRAMPercentage
      (-XX:InitialRAMFraction, -XX:MaxRAMFraction and
       -XX:MinRAMFraction are Rºdeprecatedº)
  - Total number of CPUs available to the Java process calculated from
    --cpus, --cpu-shares, --cpu-quota (JDK-8146115)
    - Use -XX:-UseContainerSupport to return to the old behaviour.
    - -XX:ActiveProcessorCount: # of processors the JVM will use internally.
  - Attach in Linux became relative to /proc/pid/root and namespace-aware
    (jcmd, jstack, ...)
  - Read also: https://aboullaite.me/docker-java-10/
    JVMs before 10 had been implemented before cgroups, hence were not
    optimized for executing inside a container.
- Application Class-Data Sharing (JEP ???)
  - Extends existing Class-Data Sharing ("CDS") to allow application
    classes to be placed in the shared archive in order to improve
    startup and footprint.
- Parallel Full GC for G1
  - Improves G1 worst-case latencies.
- Garbage Collector Interface
  - Improves source-code isolation of the different GCs.
- Consolidate JDK Forest into a Single Repository
- Local-Variable Type Inference
  - Declarations of local variables with initializers.
  - Introduces ºvarº.
- Remove Native-Header Generator Tool (javah),
  superseded by superior functionality in javac.
- Thread-Local Handshakes:
  - Allows executing a callback on threads without performing a global
    VM safepoint. Makes it both possible and cheap to stop individual
    threads and not just all threads or none.
- Time-Based Release Versioning
- Root Certificates: provides a default set of root CAs in the JDK.
- Heap Allocation on Alternative Memory Devices:
  - Enables the HotSpot VM to allocate the Java object heap on an
    alternative memory device, such as an NV-DIMM, specified by the user.
- Experimental Java-Based JIT Compiler (Graal):
  - Linux/x64 platform only.
- Additional Unicode Language-Tag Extensions
- Removed Features and Options:
v.9 (2017/09)
└ -XX:ParallelGCThreads and -XX:CICompilerCount are set based on the
  container's CPU limits (can be overridden).
  - Calculated from --cpuset-cpus
└ Memory configuration for containers:
  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap
  - set -XX:MaxRAMFraction to 2 (default is 4)
- Java Platform Module System:
  - Based on Project Jigsaw.
  - Divides the JDK into a set of modules for combining at run,
    compile, or build time.
  - Enables understanding of dependencies across modules.
  - Allows developers to more easily assemble and maintain
    sophisticated applications.
  - Allows scaling down to smaller devices.
  - Improves security and performance.
  - Aspects include:
    - application packaging
    - JDK modularization
    - reorganizing source code into modules.
  - The build system is enhanced to compile modules and enforce module
    boundaries at build time.
    (Java 9 allows illegal reflective access to help migration)
- Reactive Streams:
  (https://github.com/reactive-streams/reactive-streams-jvm/blob/v1.0.1/README.md#specification)
  A small spec, also adopted in Java 9, that defines the interaction
  between asynchronous components with back pressure.
  For example a data repository, acting as Publisher, can produce data
  that an HTTP server, acting as Subscriber, can then write to the
  response. The main purpose of Reactive Streams is to allow the
  subscriber to control how fast or how slow the publisher will produce
  data.
- Ahead-of-time (AoT) compilation (experimental):
  - Improves startup time, with limited impact on peak performance.
- ºREPL (read-eval-print loop)º
  - jShell: interactively evaluates statements "a la script".
    - tab completion
    - automatic addition of needed terminal semicolons.
  - jShell API for IDE integration.
- Streams API enhancements:
  - The Java 8 Stream API allows processing data declaratively while
    leveraging multicore architectures.
  - Java 9 adds methods to conditionally take and drop items from a
    Stream, iterate over Stream elements, and create a stream from a
    nullable value, while expanding the set of Java SE APIs that can
    serve as Stream sources.
- Code cache can be divided in Java 9:
  - The code cache can now be divided into segments to improve
    performance and allow extensions such as fine-grained locking,
    resulting in improved sweep times.
- DTLS (Datagram Transport Layer Security) security API:
  - Prevents eavesdropping, tampering, and message forgery in
    client/server communications.
- Java 9 deprecates and removes:
  - Applet API and appletviewer (alternative: Java Web Start)
  - Concurrent Mark Sweep (CMS) GC.
  - JVM TI (Tool Interface) hprof (Heap Profiling) agent, superseded
    in the JVM.
  - jhat tool, obsoleted by superior heap visualizers and analyzers.
JAVA 8
└ 8u131: first version to support containers.
  RºWARNº: Do not use any version below that.
(TODO)
@[https://www.oracle.com/technetwork/java/javase/8-whats-new-2157071.html]
@[https://www.qwant.com/?q=what+new+java+8]
Inside the JVM

- JVM Anatomy Park:
JVM Implementations

In practical terms, there is only one set of source code for the JDK. 
The source code is hosted in Mercurial at OpenJDK.

Anyone can take that source code, produce a build and publish it on a 
URL. But there is a distinct certification process that should be 
used to ensure the build is valid.

Certification is run by the Java Community Process, which provides a 
Technology Compatibility Kit (TCK, sometimes referred to as the JCK). 
If an organization produces an OpenJDK build that passes the TCK then 
that build can be described as "Java SE compatible".

Note that the build cannot be referred to as "Java SE" without the 
vendor getting a commercial license from Oracle. For example, builds 
from AdoptOpenJDK that pass the TCK are not "Java SE", but "Java SE 
compatible" or "compatible with the Java SE specification". Note also 
that certification is currently on a trust basis - the results are 
not submitted to the JCP/Oracle for checking and cannot be 
independently verified.

To summarise, the OpenJDK + Vendor process turns one sourcebase into 
many different builds. 

- Oracle Java
- OpenJ9      (Eclipse "IBM")
  └ https://en.wikipedia.org/wiki/OpenJ9
  └ Pre-built binaries available at AdoptOpenJDK
  └ Compared to Oracle's HotSpot VM, it touts higher
    start-up performance and lower memory consumption
    at a similar overall throughput.
  └ JIT with all optimization levels.
- OpenJDK
- GraalVM
- Bellsoft Liberica:
  - Free, TCK-verified OpenJDK distribution for x86, ARM32 and ARM64.
- Azul Systems
- SapMachine (SAP)
  JDK for Java 10 and later under the GPL+CE license.
  They also have a commercial closed-source JVM.
- Amazon Corretto:
  zero-cost build of OpenJDK with long-term support that passes the 
  TCK. It is under the standard GPL+CE license of all OpenJDK builds. 
  Amazon will be adding their own patches and running Corretto on AWS

BºJIT compiler optimization levels:º
  - cold
  - warm
  - hot
  - very hot (with profiling)
  - scorching.
  The hotter the optimization level, the better the 
  expected performance, but the higher the cost in terms of 
  CPU and memory.


         │    STACK ("SMALL")          │ HEAP  ("HUGE")
         │ private to each Thread      │ Shared by Threads
Contains │ - references to heap objects│ - objects
         │ - value types               │ - instance fields
         │ - formal method params      │ - static fields
         │ - exception handler params  │ - array elements

* 1: Ref(erence) types on the stack point to real object in HEAP memory.

Reference Types regarding how the object on the heap is eligible for garbage collection
│ STRONG  │ - Most popular.
│         │ - The object on the heap it is not garbage collected
│         │   while there is a strong reference pointing to it, or if it is
│         │   strongly reachable through a chain of strong references.
│ WEAK    │ - most likely to not survive after the next garbage collection process.
│         │ - Is created like
│         │    WeakReference˂StringBuilder˃ reference =
│         │     = new WeakReference˂˃(new StringBuilder());
│         │ - ºEx.use case: caching:º
│         │   We let the GC remove the object pointed to by the weak reference,
│         │   after which a null will be returned
│         │   See JDK implementation at
│         │   @[https://docs.oracle.com/javase/7/docs/api/java/util/WeakHashMap.html]
│ SOFT    │ - used for more memory-sensitive scenarios
│         │ - Will be garbage collected only when the application is running low on memory.
│         │ - *Java guarantees that all soft referenced objects
│         │   are cleaned up before throwing OutOfMemoryError*
│         │ - is created as follows:
│         │   SoftReference˂StringBuilder˃ reference = new SoftReference˂˃(new StringBuilder());
│ PHANTOM │ - Used to schedule post-mortem cleanup actions, since we know for
│         │   sure that objects are no longer alive.
│         │ - Used only with a reference queue, since the .get() method of
│         │   such references will always return null.
│         │ - ºThese types of references are considered preferable to finalizersº
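A short sketch of the WEAK case above. Note that System.gc() is only a hint, so the value after the collection request is printed rather than asserted:

```java
import java.lang.ref.WeakReference;

// The referent survives only while some strong reference exists.
public class WeakRefDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("cached-value");
        WeakReference<StringBuilder> ref = new WeakReference<>(sb);
        System.out.println(ref.get() != null);   // true: strong ref still alive
        sb = null;                               // drop the last strong reference
        System.gc();                             // hint only, collection NOT guaranteed
        System.out.println("after gc: " + ref.get());  // usually null by now
    }
}
```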
Force string pool reuse
- Strings are immutable.
- Stored on the heap
- Java manages a string pool in memory,
  reusing strings whenever possible.

String string01 = "297",                                 string01 == string02 : true
       string02 = "297",                                 string01 == string03 : Rºfalseº *1
       string03 = new Integer(297).toString(),           string01 == string04 : true   *2
       string04 = new Integer(297).toString()º.intern()º;

*1: RºPool reuse does not work for dynamically created stringsº
*2: If we consider that the computed String will be used quite often,
    we can force the JVM to add it to the string pool by appending
    the º.intern()º method to the computed string.
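A runnable sketch of the pool-reuse behaviour above; String.valueOf plays the role of the "dynamically created" string:

```java
// Pool reuse: literals are pooled, dynamic strings are not unless interned.
public class InternDemo {
    public static void main(String[] args) {
        String a = "297";                        // placed in the string pool
        String b = String.valueOf(297);          // new object on the heap
        String c = String.valueOf(297).intern(); // forced into the pool
        System.out.println(a == b);  // false: dynamic strings are not pooled
        System.out.println(a == c);  // true : intern() returns the pooled instance
    }
}
```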
JVM analyzes the variables from the stack and "marks" all the objects that need to be kept alive.
Then, all the unused objects are cleaned up.

The more garbage there is, and the fewer objects that are marked alive, the faster the process is.

To optimize even more, heap memory actually consists of multiple parts (Java 8+):

  │ HEAP     │
  │ SPACES   │
  │ Eden     │ * objects are placed here upon creation.
  │          │ * "small" ─→ gets full quite fast.
  │          │ * GC runs on the Eden space and marks objects as alive
  │ S0       │ * Eden Objects surviving 1st GC are moved here
  │          │
  │ S1       │ * Eden Objects surviving 2nd GC are moved here
  │          │ * S0   Objects surviving     GC are moved here
  │ Old      │ * Object survives for "N" rounds of GC (N depends on
  │          │   implementation), most likely that it will survive
  │          │   forever, and get moved here
  │          │ * Bigger than Eden and S0,S1. GC doesn't run so often
  │ Metaspace│ * metadata about loaded classes
  │          │   (PermGen Before Java 8)
  │ String   │
  │   pool   │
GC Types
- default GC type is based on the underlying hardware
- programmer can choose which one should be used

   GC TYPE     | Description  / Use-Cases
|Serial GC     | - Single thread collector.
|              | - ºHalt all app threads while executingº
|              | - Mostly applies to ºsmall apps with small data usageº
|              | - Can be enabled through : Oº-XX:+UseSerialGCº
|Parallel GC   | - Multiple threads used for GC
|              | - ºHalt all app threads while executingº
|              | - Also known as throughput collector
|              | - Can be enabled through : Oº-XX:+UseParallelGCº
|Mostly        | - works concurrent to the application, "mostly" not halting threads
|Concurrent GC | - "mostly": There is a period of time for which the threads are paused.
|              |    Still, the pause is kept as short as possible to achieve the best GC performance.
|              | - 2 types of mostly concurrent GCs:
|              |   * Garbage First - high throughput with a reasonable application pause time.
|              |                   - Enabled with the option: Oº-XX:+UseG1GCº
|              |   * Concurrent Mark Sweep - app pause is kept to a minimum. RºDeprecated since Java 9º
|              |                   - Enabled with the option: Oº-XX:+UseConcMarkSweepGCº

Optimization Tips
- To minimize the memory footprint, limit the scope of the variables as much as possible.

- Explicitly set obsolete references to null, making them eligible for GC.

- Avoid finalizers. They slow down the process and they do not guarantee anything.
  Prefer phantom references for cleanup work.

- Do not use strong references where weak or soft references apply.
 ºThe most common memory pitfalls are caching scenarios, when dataº
 ºis held in memory even if it might not be needed.º

- Explicitly specify heap size for the JVM when running the application:
  -  allocate a reasonable initial and maximum amount of memory for the heap.
   OºInitial heap size -Xms512m º – set initial heap     size to  512 megabytes
   OºMaximum heap size -Xmx1024mº – set maximum heap     size to 1024 megabytes
   OºThread stack size -Xss128m º – set thread stack     size to  128 megabytes
   OºYoung genera.size -Xmn256m º – set young generation size to  256 megabytes

REF: @[https://dzone.com/articles/heap-memory-in-java-performance-testing?utm_source=www.oficina24x7.com]
    - Initial Heap Size: -Xms: ˃= 1/64th of physical memory || reasonable minimum.
    - Maximum Heap Size: -XmX: ˂= 1/4 th of physical memory || 1GB.
                  - Set -Xms equal to -Xmx to prevent pauses caused by heap expansion
                  ☞BºSetting Xms/Xmx increase GC predictabilityº.
    JVM settings are recommended for:
    -server               -server                   -server
    -Xms24G -Xmx24G        -Xms4G -Xmx4G            -Xms32G -Xmx32G
                      -XX:MaxGCPauseMillis=200     ← soft goal (JVM) best effort
                      -XX:ParallelGCThreads=20     ← value depends on hosting hardware
                      -XX:ConcGCThreads=5          ← value depends on hosting hardware
                      -XX:InitiatingHeapOccupancyPercent=70 ← Use 0 to force constant 
                                                              GC cycles
    Rº There are 600+ arguments that you can pass to JVM to fine-tune GC and memory º
    Rº If you include other aspects, the number of JVM arguments will easily cross  º
    Rº 1000+. º
       (Or why Data Scientists end up using Python)

- If the app crashes with OutOfMemoryError, extra info about the memory
  leak can be obtained through Oº-XX:+HeapDumpOnOutOfMemoryErrorº,
  which creates a heap dump file.

- Use Oº-verbose:gcº to get the garbage collection output.

- Eclipse Memory Analyzer Manual:
˂˂AutoCloseable˃˃ (1.7+)
- The Java garbage collector cannot automatically clean any
  resource apart from memory. All resources related to
  I/O (virtual/physical devices) must be closed programmatically,
  for example sockets, HTTP connections, database connections, ...
  since neither the compiler nor the runtime can take control of
  external (non-controlled) devices/resources.

  Java 1.7+ includes the interface java.lang.AutoCloseable to simplify
  resource cleanup.

  When a class representing an external resource implements this
  interface and is ºused inside a try-with-resourcesº, its close()
  method will be invoked automatically (the compiler adds
  the required code).

  Most core Java I/O classes already implement this interface.

  public class MyClassWithExternalResources
  implements ºjava.lang.AutoCloseableº, ... {
        private final MyExternalEventListener listener;
        private final MyIODevice device;
        private final MyHTTPConnection connection;
       ºpublic void close()º{
            listener  .close();
            device    .close();
            connection.close();
        }
  }

  public class SomeLongRunningClass {
    void useManyResourcesManyTimes(String path)  {
      for (int repeat=0; repeat˂100; repeat++) {
       ºtry (MyClassWithExternalResources i =º
           º     new MyClassWithExternalResources(...)) {º
          // ... use i ...
       º} catch( ... ) {º
       º}º
       º// At this point all resources have been closed. º
       º// If a runtime exception exits the function the º
       º// resource is also closed.                      º
      }
    }
  }
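The mechanism can be sketched with a self-contained (hypothetical) Resource class; note the reverse close order the compiler generates:

```java
import java.util.ArrayList;
import java.util.List;

// try-with-resources closes resources automatically, in REVERSE
// declaration order (hypothetical Resource class for illustration).
public class TryWithResourcesDemo {
    static final List<String> LOG = new ArrayList<>();

    static class Resource implements AutoCloseable {
        Resource(String name) { this.name = name; LOG.add("open " + name); }
        @Override public void close() { LOG.add("close " + name); }
        private final String name;
    }

    public static void main(String[] args) {
        try (Resource a = new Resource("a");
             Resource b = new Resource("b")) {
            LOG.add("work");
        }   // ← compiler inserts b.close(); a.close(); here
        System.out.println(LOG);  // [open a, open b, work, close b, close a]
    }
}
```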
  final String formattedNumber = String.format("%4d", 100);
  See also: [[Format String Checker?]]

  final String[] args = ...
  final String s1 = String.join(",", List.of(args)); // ← alt 1: String array to CSV
  final String s2 = String.join(",", args);          // ← alt 2: String array to CSV

Bºjava.util.StringJoinerº (1.8+) Concatenate Strings
- Ex:
  "[George:Sally:Fred]" may be constructed as follows:
  final StringJoiner sj = new StringJoiner(
                             ":" /*delimiter*/,
                             "[" /*prefix*/,
                             "]" /*suffix*/);
  sj.add("George").add("Sally").add("Fred");
  String desiredString = sj.toString();


  List˂Integer˃ numbers = Arrays.asList(1, 2, 3, 4);
  String commaSeparatedNumbers = numbers.stream()
      .map(i -˃ i.toString())
      .collect(ºCollectors.joining(", ")º);
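Both joining approaches above, as one runnable sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.StringJoiner;
import java.util.stream.Collectors;

public class JoinDemo {
    public static void main(String[] args) {
        StringJoiner sj = new StringJoiner(":", "[", "]");
        sj.add("George").add("Sally").add("Fred");    // add elements explicitly
        System.out.println(sj);                       // [George:Sally:Fred]

        List<Integer> numbers = Arrays.asList(1, 2, 3, 4);
        String csv = numbers.stream()
                            .map(String::valueOf)
                            .collect(Collectors.joining(", "));
        System.out.println(csv);                      // 1, 2, 3, 4
    }
}
```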
Reading file
BºReading as lines of textº
  final File input = new File("input.txt");
  final String result =
        Files.toString(input, Charsets.UTF_8);   // ← Alt 1.(Guava) Read to String
                                                     RºWARN: Only for small sizesº

  final File input = new File("input.txt");
  final List result =
       Files.readLines(input, Charsets.UTF_8);   // ← Alt 2.(Guava) Read to List
             ^^^^^^^^^                              RºWARN: Only for small sizesº
             readFirstLine() can be useful sometimes

  final File input = new File("input.txt");
  final CharSource source =
      Files.asCharSource(input, Charsets.UTF_8); // ← Alt 3.(Guava) Use CharSource
  final String result = source.read();           // ← RºWARN: Only for small sizesº

  final File input1 = new File("input1.txt"),
             input2 = new File("input2.txt");
  final CharSource
      source1 = Files.asCharSource(input1, Charsets.UTF_8),
      source2 = Files.asCharSource(input2, Charsets.UTF_8),
      source  = 
         CharSource.concat(source1, source2);   // ← Alt 3.2(Guava) Concat CharSources
  final String result = source.read();

  final FileReader reader = new FileReader("input.txt");
  final String result =
        CharStreams.toString(reader);          // ← Alt 4. Reader + CharStreams
  reader.close();                              // ← RºWARN:º Don't forget to close

BºRead file as bytesº
  final File file = new File("input.raw");
  final ByteSource source                      // ← Alt 1: (Guava) Use ByteSource
        = Files.asByteSource(file).              
          .slice(20 /*initial offset*/, 100 /*len */);
  final byte[] result = source.read();

  FileInputStream reader = 
     new FileInputStream("input.raw");        // ←   Using FileInputStream
  byte[] result =
     ByteStreams.toByteArray(reader);         // ← + ByteStreams

  final URL url =
       Resources.getResource("test.txt");     // ← Read Resource in classpath
  final String resource =
       Resources.toString(url, Charsets.UTF_8);

Reading big files
 ºtry (º
    final FileInputStream inputStream = new FileInputStream(path);
    final Scanner sc =                       ← Alt 1. Use Scanner to read
        new Scanner(inputStream, "UTF-8");          line-by-line
 º) {º
    while (sc.hasNextLine()) {
      final String line = sc.nextLine();
      // ... do any processing ...
    }
    if (sc.ioException() != null) {          ← Scanner captures IOExceptions:
      // handle error                          it's good to have a look
    }
 º} finally {º
    ...
 º}º

  final LineIterator it =                    ← Alt 2. From Apache Commons IO
      FileUtils.lineIterator(theFile, "UTF-8");
  try {
    while (it.hasNext()) {
      String line = it.nextLine();           ← Read line-by-line
      // ...
    }
  } finally {
    LineIterator.closeQuietly(it);           ← Close resources
  }
- JDK 1.8+
- "deprecates" java.util.(Date|Calendar|TimeZone)
- All the classes are IMMUTABLE and THREAD-SAFE
Oºimport java.time.Duration;º
Oºimport java.time.Instant;º
Oºimport java.time.ZonedDateTime;º
Oºimport java.time.ZoneId;º
Oºimport java.util.concurrent.TimeUnit;º
OºInstantºBºtimestampº = OºInstantº.now();                 // Create from system clock
Bºtimestampº = Bºtimestampº.plus(Duration.ofSeconds(10));  // Add 10 seconds (Instants are
                                                           // immutable: plus() returns a NEW one)

  │OºInstantº to String                 │ OºInstantº from String
  │(format with time-zone)              │ (parse string)
  │OºZonedDateTimeº zdt1 =              │
  │     OºZonedDateTimeº.of             │ String sExpiresAt="2013-05-30T23:38:23.085Z";
  │       (                             │ OºZonedDateTimeºzdt2 = OºZonedDateTimeº.parse(sExpiresAt);
  │         2017, 6, 30           ,     │
  │         1, 2, 3               ,     │ OºInstantºi1 = OºInstantº.from(zdt1),
  │         (int) TimeUnit.             │           i2 = OºInstantº.from(zdt2);
  │               MILLISECONDS.         │
  │               toNanos(100),         │
  │         ZoneId.of("Europe/Paris")   │
  │       );          ^^^               │
  │     Ex: "Z","-02:00","Asia/Tokyo",..│
  │String s1 = zdt1.toString();         │
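The two columns above, condensed into a runnable sketch; the 100 ms component is written directly as nanoseconds instead of going through TimeUnit:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Instant ↔ String via ZonedDateTime (format with zone / parse).
public class TimeDemo {
    public static void main(String[] args) {
        ZonedDateTime zdt1 = ZonedDateTime.of(
            2017, 6, 30,           // date
            1, 2, 3,               // time
            100_000_000,           // 100 ms expressed in nanos
            ZoneId.of("Europe/Paris"));
        String s1 = zdt1.toString();                          // Instant → String

        ZonedDateTime zdt2 =
            ZonedDateTime.parse("2013-05-30T23:38:23.085Z");  // String → Instant

        Instant i1 = Instant.from(zdt1),
                i2 = Instant.from(zdt2);
        System.out.println(s1);
        System.out.println(i2);    // 2013-05-30T23:38:23.085Z
    }
}
```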

date  (none)           DateFormat.getDateInstance(DateFormat.DEFAULT, getLocale())
      short            DateFormat.getDateInstance(DateFormat.SHORT, getLocale())
      medium           DateFormat.getDateInstance(DateFormat.DEFAULT, getLocale())
      long             DateFormat.getDateInstance(DateFormat.LONG, getLocale())
      full             DateFormat.getDateInstance(DateFormat.FULL, getLocale())
      SubformatPattern new SimpleDateFormat(subformatPattern, getLocale())

time  (none)           DateFormat.getTimeInstance(DateFormat.DEFAULT, getLocale())
      short            DateFormat.getTimeInstance(DateFormat.SHORT, getLocale())
      medium           DateFormat.getTimeInstance(DateFormat.DEFAULT, getLocale())
      long             DateFormat.getTimeInstance(DateFormat.LONG, getLocale())
      full             DateFormat.getTimeInstance(DateFormat.FULL, getLocale())
      SubformatPattern new SimpleDateFormat(subformatPattern, getLocale())

ºCompatibility with Java ˂=1.7º
- (java.util.) Date, Calendar and TimeZone
  "buggy" classes/subclasses were used.
  - Calendar class was NOT type safe
  - Mutable non-threadsafe classes
  - Favored programming errors
    (unusual numbering of months,..)

- Next compatibility conversion methods were added in 1.8:
  - Calendar.toInstant()
  - GregorianCalendar.toZonedDateTime()
  - GregorianCalendar.from(ZonedDateTime) (uses the default locale)
  - Date.from(Instant)
  - Date.toInstant()
  - TimeZone.toZoneId()
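A sketch exercising some of the conversion bridges above; GregorianCalendar is instantiated directly so toZonedDateTime() needs no cast:

```java
import java.time.Instant;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.GregorianCalendar;

// Round-tripping between the legacy (1.7) and java.time (1.8+) types.
public class LegacyConversionDemo {
    public static void main(String[] args) {
        GregorianCalendar cal = new GregorianCalendar();
        Instant i       = cal.toInstant();        // Calendar        -> Instant
        ZonedDateTime z = cal.toZonedDateTime();  // GregorianCal.   -> ZonedDateTime
        Date d          = Date.from(i);           // Instant         -> Date
        Instant back    = d.toInstant();          // Date            -> Instant
        System.out.println(i.equals(back));          // true (millisecond precision)
        System.out.println(z.toInstant().equals(i)); // true: same point in time
    }
}
```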

ºjava.time. Package summaryº
Clock              A clock providing access to the current instant, date and
                   time using a time-zone.
Duration           A time-based amount of time, such as '34.5 seconds'.
Instant            An instantaneous point on the time-line.
LocalDate          A date without a time-zone in the ISO-8601 calendar system,
                   such as 2007-12-03.
LocalDateTime      A date-time without a time-zone in the ISO-8601 calendar
                   system, such as 2007-12-03T10:15:30.
LocalTime          A time without a time-zone in the ISO-8601 calendar system,
                   such as 10:15:30.
MonthDay           A month-day in the ISO-8601 calendar system, such as --12-03.
OffsetDateTime     A date-time with an offset from UTC/Greenwich in the ISO-8601
                   calendar system, such as 2007-12-03T10:15:30+01:00.
OffsetTime         A time with an offset from UTC/Greenwich in the ISO-8601
                   calendar system, such as 10:15:30+01:00.
Period             A date-based amount of time in the ISO-8601 calendar system,
                    such as '2 years, 3 months and 4 days'.
Year               A year in the ISO-8601 calendar system, such as 2007.
YearMonth          A year-month in the ISO-8601 calendar system, such as 2007-12
ZonedDateTime      A date-time with a time-zone in the ISO-8601 calendar system,
                   such as 2007-12-03T10:15:30+01:00 Europe/Paris.
ZoneId             A time-zone ID, such as Europe/Paris.
ZoneOffset         A time-zone offset from Greenwich/UTC, such as +02:00.

Enum               Description
DayOfWeek          A day-of-week, such as 'Tuesday'.
Month              A month-of-year, such as 'July'.

Exception          Description
DateTimeException  Exception used to indicate a problem while calculating a date-time.
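A short tour of the classes summarized above, using the example values from the table (class name TimeApiTour is illustrative):

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.Period;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class TimeApiTour {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2007, 12, 3);           // 2007-12-03
        LocalDateTime dt = date.atTime(10, 15, 30);           // 2007-12-03T10:15:30
        ZonedDateTime paris =
            dt.atZone(ZoneId.of("Europe/Paris"));             // adds zone + offset
        Duration half = Duration.ofSeconds(34, 500_000_000L); // '34.5 seconds'
        Period p = Period.of(2, 3, 4);                        // '2 years, 3 months and 4 days'

        System.out.println(paris);
        System.out.println(date.plus(p));                     // 2010-03-07
        System.out.println(half.toMillis());                  // 34500
    }
}
```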

Java 9
- A number of parsing and formatting changes have been incorporated in Java 9 to
bring the functionality closer to Unicode Locale Data Markup Language (LDML).
These changes have been supervised by Stephen Colebourne, creator of the popular
 date-time library JodaTime, precursor of the new java.time component in Java 8.
Abiding by the Unicode standard will provide better interoperability with other
non-Java systems.

- LDML is the language used by the Unicode Common Locale Data Repository (CLDR),
  a project of the Unicode Consortium to gather and store locale data from
  different parts of the world, enabling application developers to better adapt
  their programs to different cultures. Among other things, LDML deals with dates,
  times, and timezones, and more particularly with date formatting and parsing.
  The following is an extract of new features coming in Java 9 that bring java.time
  closer to the LDML specification:

  - JDK-8148947, DateTimeFormatter pattern letter ‘g’: the letter ‘g’, as
    specified in LDML, indicates a “Modified Julian day”; this is different from a
    normal Julian day in the sense that a) it depends on local time, rather than GMT,
    and b) it demarcates days at midnight, as opposed to noon.
  - JDK-8155823, Add date-time patterns 'v' and 'vvvv’: ‘v’ and ‘vvvv’ are LDML
    formats to indicate “generic non-location format”, e.g. “Pacific Time”, as
    opposed to the “generic location format” which specifies a city, like
    “Los Angeles Time”.
  - JDK-8148949, DateTimeFormatter pattern letters ‘A’, ’n’, ’N’: although LDML
    doesn’t specify formats ’n’ and ’N’, it does specify ‘A’, but the current
    behaviour in Java doesn’t match that of the spec. ‘A’ is meant to represent the
    total number of milliseconds elapsed in the day, with variable width, but
    currently Java treats this as fixed width: if ‘AA’ is specified as a pattern, it
    will fail to parse any value that is further than 99 milliseconds in the day.
    ’n’ and ’N’ are just Java extensions to the standard to represent nanoseconds
    within the second, and nanoseconds within the day, respectively.
  - JDK-8079628, java.time DateTimeFormatter containing "DD" fails on three-digit
    day-of-year value: similar to the previous problem, but with ‘D’ representing
    days within a year. If one specifies “DD” as a pattern, it will fail to parse
    “123” as the 123rd day of the year.
- As previously mentioned, a better alignment with the LDML will ease
  interoperability across systems, since there are multiple technologies that
  have adopted the LDML to some degree. Microsoft .NET uses LDML for general
  interchange of locale data, and there are packages available for Node.js
  and Ruby, just to mention a few.
ºTimeUnit (java.util.concurrent, JDK 1.5+)º
- Represents time durations at a given unit of granularity and
  provides utility methods to convert across units, and to perform
  timing and delay operations in these units.

  void     sleep(long timeout)
  void timedJoin(Thread thread, long timeout)
  void timedWait(Object obj, long timeout)
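A minimal sketch of cross-unit conversion and unit-relative sleeping with TimeUnit:

```java
import java.util.concurrent.TimeUnit;

public class TimeUnitDemo {
    public static void main(String[] args) throws InterruptedException {
        long ms  = TimeUnit.SECONDS.toMillis(2);          // cross-unit conversion: 2000
        long sec = TimeUnit.MILLISECONDS.toSeconds(3500); // truncates towards zero: 3

        long t0 = System.nanoTime();
        TimeUnit.MILLISECONDS.sleep(10);                  // Thread.sleep in this unit
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - t0);

        System.out.println(ms + " " + sec + " slept~" + elapsedMs + "ms");
    }
}
```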
ºpredefined annotation types in java.lang:º
- @Deprecated
- @Override
- @SuppressWarnings
- @SafeVarargs (1.7+)  applied to a method/constructor,
                       asserts that the code does not perform
                       potentially unsafe operations
                       on its varargs parameter,
                       suppressing related warnings.

ºAnnotation types are a form of interfaceº
DECLARATION(interface is preceded by the @ sign) │ USAGE
  @Documented                                    │
  @interface ClassPreamble {                     │   @ClassPreamble (
     String   author        ()              ;    │      author         = "John Doe"      ,
     String   date          ()              ;    │      date           = "3/17/2002"     ,
     int      currentRev    () default 1    ;    │      currentRev     = 6               ,
     String   lastModified  () default "N/A";    │      lastModified   = "4/12/2004"     ,
     String   lastModifiedBy() default "N/A";    │      lastModifiedBy = "Jane Doe"      ,
     String[] reviewers     ()              ;    │      reviewers      = {"Alice", "Bob"}
  }                                              │   )
                                                 │ public class Generation3List extends Generation2List {
                                                 │     // ...
                                                 │ }
new @Interned MyObject();              ← Class instance creation expression

myString = (@NonNull String) str;      ← Type cast (1.8+)

class UnmodifiableList˂T˃ implements   ← implements clause
      @Readonly List˂@Readonly T˃
      { ... }

void monitorTemperature() throws       ← throws exception declaration
@Critical TemperatureException { ... }

@SuppressWarnings(value = "unchecked") ← Predefined standard annotations
void myMethod() { ... }
@SuppressWarnings({"unchecked", "deprecation"})
void myMethod() { ... }
ºMeta-annotationsº (annotations applying to other annotations)

º@Retentionº
RetentionPolicy.SOURCE : retained only in source     (discarded by the compiler)
RetentionPolicy.CLASS  : retained in the class file  (default; ignored by the JVM)
RetentionPolicy.RUNTIME: retained by the JVM, can be queried at Runtime
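A minimal sketch of RUNTIME retention queried via reflection (annotation and class names are illustrative):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)      // RUNTIME → queryable via reflection
@interface Audited {
    String by() default "unknown";
}

@Audited(by = "Jane Doe")
class AuditedService {}

public class RetentionDemo {
    public static void main(String[] args) {
        // With SOURCE or CLASS retention this lookup would return null:
        Audited a = AuditedService.class.getAnnotation(Audited.class);
        System.out.println(a.by());      // Jane Doe
    }
}
```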

º@Documentedº                     º@Repeatableº
- indicates that whenever the     - (1.8+)
  specified annotation is used    - targeted annotation can be applied
  those elements should be          more than once to the same
  documented using the Javadoc      declaration or type use.
  tool. (By default, annotations    Ex:
  are not included in Javadoc.)     @Author(name = "Jane Doe")
                                    @Author(name = "John Smith")
                                    class MyClass { ... }

º@Targetº                          º@Inheritedº
º(field,type,class..)º             - targeted annotation type can be inherited
- restrict targeted java-language    from the super class. (false by default.)
  elements where the annotation      When the user queries the annotation type
  can be applied:                    and the class has no annotation for this
  - ElementType.ANNOTATION_TYPE      type, the class's superclass is queried for
  - ElementType.CONSTRUCTOR          the annotation type.
  - ElementType.FIELD
  - ElementType.LOCAL_VARIABLE
  - ElementType.METHOD
  - ElementType.PACKAGE
  - ElementType.PARAMETER
  - ElementType.TYPE (1.8+)
SLF4J Logging
Simple Logging Facade for Java: an abstraction over various logging
frameworks (e.g. java.util.logging, logback, log4j) allowing the end
user to plug in the desired logging framework at deployment time.

  ˂?xml version="1.0" encoding="UTF-8"?˃
    ˂root level="ALL"˃                                  ← Apply to all packages/levels
      ˂!-- ˂jmxConfigurator /˃ --˃
      ˂appender Bºname="APPENDER_FILE"ºclass="ch.qos.logback.core.rolling.RollingFileAppender"˃
          ˂rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy"˃
              ˂!-- daily rollover --˃
              ˂!-- keep 2 days' worth of history capped at 1MB total size --˃
            ..(see encoder for APPENDER_STDOUT ..)

      ˂appender Gºname="APPENDER_STDOUT"ºclass="ch.qos.logback.core.ConsoleAppender"˃
          ˂pattern˃%d{HH:mm:ss.SSS} | %-5level | %thread | %logger{1} |
    %m%n%rEx{full,                                      ←☞ filter "Noise" in stack trace. ºREF 1º
              java.lang.reflect.Method,                 ← remove Java reflection
              org.apache.catalina,                      ← remove catalina engine
              org.springframework.aop,                  ← remove "almost" whole Spring framework
              org.springframework.security,             ←
              org.springframework.transaction,          ←
              org.springframework.web,                  ←
              net.sf.cglib,                             ← remove CGLIB classes.
              ByCGLIB                                   ←

    ˂root level="WARN"˃                                 ← Apply to all packages/WARN+ logs
        ˂appender-ref Bºref="APPENDER_FILE"  º/˃
        ˂appender-ref Gºref="APPENDER_STDOUT"º/˃
    ˂logger name="my.company."          level="INFO" /˃ ← Detail level for packages
    ˂logger name="my.company.package01" level="DEBUG"/˃
    ˂logger name="org.eclipse.jetty"    level="WARN" /˃
  ºREF 1º: @[https://www.nurkiewicz.com/2011/09/logging-exceptions-root-cause-first.html]


    ˂artifactId˃logback-classic˂/artifactId˃            ← add Bºlogbackº binding
  Rº˂exclusions˃º
      ˂exclusion˃
        ˂groupId˃org.slf4j˂/groupId˃                    ← Avoid error at next start-up:
        ˂artifactId˃slf4j-jdk14˂/artifactId˃               "SLF4J: Class path contains multiple SLF4J bindings."
      ˂/exclusion˃                                         "   slf4j-jdk14-1.7.21.jar!...StaticLoggerBinder.class"
  Rº˂/exclusions˃º                                         "logback-classic-1.1.7.jar!...StaticLoggerBinder.class"

BºExample Usageº
  import org.slf4j.Logger;
  import org.slf4j.LoggerFactory;
    class MyClass {
      private static final Logger log =
          LoggerFactory.getLogger(MyClass.class);
      void doWork(Object param1, Object param2) {
        if (log.isDebugEnabled()) {  ←···················· arguments are evaluated before calling
            log.debug("Lorem ipsum... {} {}",  ←·········· log.debug; the if(log.isDebugEnabled())
                      param1, param2);                     "wrapper" skips that unnecessary work when
        }                                                  DEBUG is off (savings can be "huge" if
        ...                                                log.debug is inside a loop).

MDC: "Better Way of Logging"
@[https://dzone.com/articles/mdc-better-way-of-logging-1]
RºPROBLEMº: How do we relate logs together when they originate from a
  (single user, single data-flow) but are processed by different threads,
  HTTP requests (think of Single Page Apps), or components?
GºSOLUTIONº: Use the mapped diagnostic context (MDC).
BºMapped Diagnostic Contextº:
- Built into the logging framework; supported by log4j, log4j2,
  and SLF4J/logback.
- Allows capturing custom ºkey/valueº diagnostic data, accessible to the
  appender when the log message is actually written.
- The MDC structure is ºinternally attachedº to ºthe executing threadº,
  in the same way a ThreadLocal variable would be.
BºMDC How To:º
- At the start of the thread, fill the MDC with custom information
  (the MDC API also allows removing info later on if it no longer applies).
- Log the message.
- MDC ºSummarized APIº:
  public class MDC {
    publicºstaticºvoid   put   (String key, String val); // ← Add to ºcurrent Threadº
    publicºstaticºString get   (String key);             //   ºContext Mapº
    publicºstaticºvoid   remove(String key);
    publicºstaticºvoid   clear();                        // ← Clear all entries
  }
  NOTE: child threads do NOT automatically inherit a copy of the
        current diagnostic context.
- Best pattern for microservices:
  //ºSTEP 1:ºOverride the Qºinterceptor layerº
  //         ("""single place where call execution passes through""")
  public class ServiceInterceptor extends QºHandlerInterceptorAdapterº {
    private final static ºLogger LOGGERº =
        Logger.getLogger(ServiceInterceptor.class);

    Qº@Overrideº
    public boolean preHandle(
        HttpServletRequest request,
        HttpServletResponse response,
        Object object) throws Exception {
      MDC.put("userId"   , request.getHeader("UserId"   ));
      MDC.put("sessionId", request.getHeader("SessionId"));
      MDC.put("requestId", request.getHeader("RequestId"));
      return true;
    }
  }
  //ºSTEP 2:ºChange the log appender pattern to retrieve the
  //         variables stored in the MDC.
  ˂appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender"˃
    ˂layout˃
      ˂Pattern˃%X{userId} %X{sessionId} %X{requestId} - %m%n˂/Pattern˃
    ˂/layout˃
  ˂/appender˃

  Log output will look something like:
  17:53:25,496 http─8080─20 INFO Service1.execute(77)  U1001 ┌ sessId01 ┌  reqId_1_1  req service 1
  17:53:25,497 http─8080─26 INFO Service1.execute(77)  U1002 │ sessId02 ┐ │ reqId_2_1┐ req service 1
  17:53:25,550 http─8080─26 INFO Service1.execute(112) U1002 │ sessId02 ┤ │ reqId_2_1┤ Req data
  17:53:25,555 http─8080─20 INFO Service1.execute(112) U1001 ├ sessId01 │ ├ reqId_1_1│ Req data
  17:53:25,617 http─8080─27 INFO Service2.execute(50)  U1001 ├ sessId01 │ ┌│ reqId_1_2│ req service 2
  17:53:25,615 http─8080─27 INFO Service2.execute(89)  U1001 ├ sessId01 │ │├ reqId_1_1│ req data
  17:53:25,637 http─8080─29 INFO Service2.execute(50)  U1002 │ sessId02 ┤ ││ reqId_2_2│┐ req service 2
  17:53:25,665 http─8080─29 INFO Service2.execute(89)  U1002 │ sessId02 ┤ ││ reqId_2_1┤│ req data
  17:53:25,568 http─8080─20 INFO Service1.execute(120) U1001 ├ sessId01 │ └│ reqId_1_2││ req OK
  17:53:25,584 http─8080─26 INFO Service1.execute(120) U1002 │ sessId02 ┤  │ reqId_2_1┘│ req OK
  17:53:25,701 http─8080─27 INFO Service2.execute(113) U1001 ├ sessId01 │  └ reqId_1_1 │ req OK
  ...          ...               ...                   ...   : ...      :    ...       : ...
  17:53:25,710 http─8080─29 INFO Service2.execute(113) U1002   sessId02 ┘    reqId_2_2 ┘ req OK
  ...
ºLambda expression syntax (1.8+)º
(parameters) -˃ expression
(parameters) -˃ { statements; }

// takes a Long, returns a String
Function˂Long, String˃ f = (l) -˃ l.toString();

// takes nothing, gives you Thread
Supplier˂Thread˃ s = Thread::currentThread;

//  takes a string as the parameter
Consumer˂String˃ c = System.out::println;

// use lambdas in streams
new ArrayList˂String˃().stream()....

// peek: Debug streams without changes
peek ( e -˃ System.out.println(e)). ...

// map: Convert every element into something
map ( e -˃ e.hashCode())...

// filter: keep only matching elements
filter ( hc -˃ (hc % 2) == 0) ...

// collect all values from the stream
collect ( Collectors.toList()) ...
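The pieces above combine into a pipeline like this (a minimal sketch; sample data is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamPipeline {
    public static void main(String[] args) {
        List<Integer> evenLengths = Arrays.asList("a", "bb", "ccc", "dddd")
            .stream()
            .peek(s -> System.out.println("saw " + s))   // debug, stream unchanged
            .map(String::length)                         // convert each element
            .filter(len -> len % 2 == 0)                 // keep matching elements
            .collect(Collectors.toList());               // terminal op: gather results

        System.out.println(evenLengths);                 // [2, 4]
    }
}
```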
ºjava.util.function packageº
- JDK 1.8+
- Incomplete but good enough to cover the "shape" of many lambda expressions
  and method references representing abstract concepts like functions,
  actions, or predicates.
- The @FunctionalInterface annotation is used to capture design intent (it is not required by the compiler).
- In documenting functional interfaces, or referring to variables typed as
  functional interfaces, it is common to refer directly to those abstract concepts,
  for example using "this function" instead of "the function represented by this object".
- Each functional interface has a single abstract method, called the functional method for that
  functional interface, to which the lambda expression's parameter and return types are matched or adapted.
- Functional interfaces can provide a target type in multiple contexts, such as assignment context, method invocation,
  or cast context:
  |Predicate˂String˃ p = String::isEmpty;           // Assignment context
  |stream.filter(e -˃ e.getSize() ˃ 10)...          // Method invocation context
  |stream.map((ToIntFunction) e -˃ e.getSize())...  // Cast context

Functional interfaces defined in 1.8
           Interface Summary                │           Interface Description
                  BiConsumer‹T,U›           │op. accepting two input arguments and returning no result
  (|Double|Int|Long)Consumer‹T›             │op. accepting a single (Object|double|int|long) input argument and returning no result
Obj(Double|Int|Long)Consumer‹T›             │op. accepting an object-valued and a (double|int|long)-valued argument, and returning no result
        (|Double|Long|Int)Function‹(T,)R›   │func. that accepts a (T|double|long|int) argument and produces a result
       (|Double|Long)ToIntFunction          │func. that accepts a (T|double|long) argument and produces an int-valued result
(ToDouble|ToLong|ToInt|)BiFunction‹(T,)U,R› │func. that accepts two arguments and produces a (T|double|long|int) result
           To(Double|Long)Function‹T›       │func. that produces a (double|long)-valued result
(Int|Long|Double)To(Int|Long|Double)Function│func. that accepts an (int|long|double) argument and produces an (int|long|double) result
 (|Int|Long|Double)UnaryOperator‹T›         │op. on a single (T|int|long|double) operand that produces a result of the same type
(Double|Long|Int|)BinaryOperator‹T›         │op. upon two (T|int|long|double) operands producing a result of the same type
                BiPredicate‹T,U›            │predicate (boolean-valued function) of two arguments
(|Int|Long|Double)Predicate‹T›              │predicate (boolean-valued function) of one (T|int|long|double) argument
(|Boolean|Int|Long|Double)Supplier(‹T›)     │supplier of (T|boolean|int|long|double) results
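A minimal sketch instantiating a few of the shapes from the table (names are illustrative):

```java
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.IntBinaryOperator;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalShapes {
    public static void main(String[] args) {
        BiFunction<Integer, Integer, String> sum = (a, b) -> Integer.toString(a + b);
        IntBinaryOperator max = Math::max;        // primitive specialization: no boxing
        Predicate<String> isEmpty = String::isEmpty;
        Supplier<String> greet = () -> "hello";
        Consumer<String> print = System.out::println;

        System.out.println(sum.apply(2, 3));      // 5
        System.out.println(max.applyAsInt(7, 4)); // 7
        System.out.println(isEmpty.test(""));     // true
        print.accept(greet.get());                // hello
    }
}
```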
Collection Decision Tree
                                  │  Allows  │
                    ┌─── YES ─────┤Duplicates├──  NO  ───────┐
                    │   List to   └──────────┘  Set to       │
                    │  be selected              be selected  │
                    │                                        v
                    v                                    ┌───────────┐☜ order established at
        ┌─────────────────────┐                          │ Maintains │  write time
        │  Unknown number     │                          │ºINSERTIONº│
   ┌─NO─┤of elements will be  ├YES─┐           ┌───YES───┤  ºORDERº? ├──NO──┐  order requested
   │    │added and/or index   │    │           │         └───────────┘      │  at read time
   │    │based search will not│    │           v                            ↓  ☟
   │    │be frequent?         │    │     QºLinkedHashSetº           ┌────────────┐
   │    └─────────────────────┘    │                                │ Maintains  │
   v                               v                           ┌─NO─┤ºREAD ORDERº├YES┐
BºArrayListº           BºLinkedListº                           │    │(alpha,...)?│   │
                                                               │    └────────────┘   │
                                                               │                     │
                                                               v                     v
                                                          QºHashSetº           QºTreeSetº
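The ordering branches of the tree can be seen directly (sample data is illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class SetOrdering {
    public static void main(String[] args) {
        List<String> in = Arrays.asList("pear", "apple", "mango");

        Set<String> insertion = new LinkedHashSet<>(in); // INSERTION order (write time)
        Set<String> sorted    = new TreeSet<>(in);       // READ order (sorted)
        Set<String> unordered = new HashSet<>(in);       // no order guarantee

        System.out.println(insertion); // [pear, apple, mango]
        System.out.println(sorted);    // [apple, mango, pear]
        System.out.println(unordered); // implementation-dependent order
    }
}
```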

Standard Rºnon-concurrentº SDK:
       │                                IMPLEMENTATIONS
       │ Hash Table        │ Resizable Array   │Balanced Tree │ Linked List │ HashTable+LinkedList
       │                   │                   │              │             │
│˂Set˃ │ HashSet           │                   │  TreeSet     │             │ LinkedHashSet
│      │                   │                   │              │             │
│˂List˃│                   │ ArrayList         │              │ LinkedList  │
│      │                   │ Vector            │              │             │
│˂Map˃ │ HashMap,Hashtable │                   │  TreeMap     │             │ LinkedHashMap

│Collection       │ Thread-safe                ┃          YOUR DATA              ┃           OPERATIONS    ALLOWED       │
│                 │ alternative                ┃─────────────────────────────────┃───────────────────────────────────────┤
│class            │                            ┃Individu│Key-val.│Duplica│Primite┃ Iteration Order │Fast │ Random Access │
│                 │                            ┃elements│  pairs │element│support┃FIFO │Sorted│LIFO│'has'│By  │By   │By  │
│                 │                            ┃        │        │support│       ┃     │      │    │check│Key │Val  │Idx │
│HashMap          │ ConcurrentHashMap          ┃        │YES     │       │       ┃     │      │    │YES  │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│SortedMap        │ ?                          ┃        │YES     │       │       ┃     │      │    │?    │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│NavigableMap @1  │ ?                          ┃        │YES     │       │       ┃     │      │    │?    │ YES│     │    │
│                 │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│HashBiMap(Guava) │ Maps.synchronizedBiMap     ┃        │YES     │       │       ┃     │      │    │YES  │ YES│YES  │    │
│                 │ (new HashBiMap())          ┃        │        │       │       ┃     │      │    │     │    │     │    │
│ArrayListMultimap│ Maps.synchronizedMultiMap  ┃        │YES     │YES    │       ┃     │      │    │YES  │ YES│     │    │
│   (Guava)       │ (new ArrayListMultimap())  ┃        │        │       │       ┃     │      │    │     │    │     │    │
│LinkedHashMap    │ Collections.synchronizedMap┃        │YES     │       │       ┃YES  │      │    │YES  │ YES│     │    │
│                 │ (new LinkedHashMap())      ┃        │        │       │       ┃     │      │    │     │    │     │    │
│TreeMap          │ ConcurrentSkipListMap      ┃        │YES     │       │       ┃     │YES   │    │YES  │ YES│     │    │
│Int2IntMap       │                            ┃        │YES     │       │YES    ┃     │      │    │YES  │ YES│     │YES │
│(Fastutil)       │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│ArrayList        │ CopyOnWriteArrayList       ┃YES     │        │YES    │       ┃YES  │      │YES │     │    │     │YES │
│HashSet          │ Collections.newSetFromMap  ┃YES     │        │       │       ┃     │      │    │YES  │    │YES  │    │
│                 │ (new ConcurrentHashMap())  ┃        │        │       │       ┃     │      │    │     │    │     │    │
│IntArrayList     │                            ┃YES     │        │YES    │YES    ┃YES  │      │YES │     │    │     │YES │
│(Fastutil)       │                            ┃        │        │       │       ┃     │      │    │     │    │     │    │
│PriorityQueue    │ PriorityBlockingQueue      ┃YES     │        │YES    │       ┃     │YES   │    │     │    │     │    │
│ArrayDeque       │ ArrayBlockingQueue         ┃YES     │        │YES    │       ┃YES  │      │YES │     │    │     │    │

 Collection class │  Random access by idx/key │ Search/Contains │ Insert
 ArrayList        │  O(1)                     │ O(n)            │ O(n)
 HashSet          │  O(1)                     │ O(1)            │ O(1)
 HashMap          │  O(1)                     │ O(1)            │ O(1)
 TreeMap          │  O(log(n))                │ O(log(n))       │ O(log(n))

@1 NavigableMap: SortedMap with additional methods for finding entries
                 by their ordered position in the key set.
                 So potentially this can remove the need for iterating
                 in the first place - you might be able to find the
                 specific entry you are after using the higherEntry,
                 lowerEntry, ceilingEntry, or floorEntry methods. The
                 descendingMap method even gives you an explicit method
                 of reversing the traversal order.
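The positional lookups described above, sketched with a TreeMap (sample data is illustrative):

```java
import java.util.NavigableMap;
import java.util.TreeMap;

public class NavigableMapDemo {
    public static void main(String[] args) {
        NavigableMap<Integer, String> m = new TreeMap<>();
        m.put(10, "ten");
        m.put(20, "twenty");
        m.put(30, "thirty");

        System.out.println(m.floorEntry(25));   // 20=twenty  (greatest key <= 25)
        System.out.println(m.ceilingEntry(25)); // 30=thirty  (least key    >= 25)
        System.out.println(m.higherEntry(20));  // 30=thirty  (strictly greater key)
        System.out.println(m.descendingMap());  // {30=thirty, 20=twenty, 10=ten}
    }
}
```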

Graph Structures (Guava com.google.common.graph)

Interface               Description
Graph˂N˃                An interface for graph-structured data, whose edges are
                        anonymous entities with no identity or information of their own.
MutableGraph˂N˃         A subinterface of Graph which adds mutation methods.
MutableNetwork˂N,E˃     A subinterface of Network which adds mutation methods.
MutableValueGraph˂N,V˃  A subinterface of ValueGraph which adds mutation methods.
Network˂N,E˃            An interface for graph-structured data, whose edges are unique objects.
ValueGraph˂N,V˃         An interface for graph-structured data, whose edges have associated
                        non-unique values.
final List˂String˃ myList =                  ← 'final' forbids re-assigning the list,
      Arrays.asList("one", "two", "three");     but its content is still mutable

final List˂String˃ myImmutableList =         ← immutable (unmodifiable) view, thread safe
      Collections.unmodifiableList(myList);

final Map˂String,String˃ myMap =             ← Mutable map (even if 'final' used)
      new HashMap˂String,String˃();
myMap.put("k1", "v1");
myMap.put("k2", "v2");

final Map˂String,String˃ myImmutableMap =    ← immutable (unmodifiable) view of map
      Collections.unmodifiableMap(myMap);

final HashMap˂String,String˃ data =          ← Java 7+ ("double-brace" init: creates an
      new HashMap˂String,String˃() {{           anonymous subclass; use sparingly)
          put("k1", "v1");
      }};

final Map˂String, String˃ test =             ← Java 9+
      Map.of("k1", "v1", "k2", "v2");        ← alternating key,value args (up to 10 pairs)

final Map˂String, String˃ test2 =            ← Java 9+ (static import java.util.Map.entry)
      Map.ofEntries( entry("k1", "v1"),...);    no pair-count limit

final Map˂String, String˃ test =             ← Guava ImmutableMap
      ImmutableMap.of("k1","v1", ...);         works only with up to 5 key/value pairs

final Map˂String, String˃ test =             ← Guava ImmutableMap alternative
      ImmutableMap.˂String, String˃builder()    (no pair-count limit)
      .put("k1", "v1")
      .put("k2", "v2")
      .build();
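One subtlety worth showing: Collections.unmodifiableMap returns a read-only VIEW, not a copy, so changes to the backing map remain visible through it (a minimal sketch):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class UnmodifiableView {
    public static void main(String[] args) {
        Map<String, String> backing = new HashMap<>();
        backing.put("k1", "v1");
        Map<String, String> view = Collections.unmodifiableMap(backing);

        try {
            view.put("k2", "v2");            // writes through the view are rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("view is read-only");
        }

        backing.put("k2", "v2");             // ...but it is a view, not a copy:
        System.out.println(view.get("k2"));  // changes to the backing map show through
    }
}
```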
ºfor collection walkº
 │for (
 │     int idx = 0;
 │     idx ˂ºcollectionºOº.size()º;
 │     idx++) {
 │  //RºWARN:º
 │  //Rº  - Very slow for LinkedLists (O(n) per get)º
 │  //  - Fast for ArrayList and other random-access implementations
 │  type element =ºcollectionºOº.get(idx);º
 │}

 │for (                                                │ for (
 │     Iterator iterator =ºcollectionºOº.iterator()º;  │      iterable_type iterable_element Oº:collectionº
 │     Oºiterator.hasNext();º) {                       │     ) {
 │  //ºBest option when removing/modifying elementsº/  │   //ºBest option when NOT remov./modify. elementsº
 │  type type = (type) Oºiterator.next()º;             │   ...
 │}                                                    │ }
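The "removing elements" case above hinges on Iterator.remove; removing through the collection itself while iterating fails fast (a minimal sketch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorRemove {
    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        for (Iterator<Integer> it = nums.iterator(); it.hasNext(); ) {
            if (it.next() % 2 == 0) {
                it.remove();   // safe; calling nums.remove(..) here would trigger
            }                  // ConcurrentModificationException on the next it.next()
        }
        System.out.println(nums); // [1, 3, 5]
    }
}
```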

ºSTREAMS (Java 8+)º
ºcollectionºOº.forEachº((Oºitº) -˃ { ... });
 Shorter alternative (method reference):
ºcollectionºOº.forEachº(System.out::println);

ºentrySet Java 5+º                         | ºUSING ITERATORSº                           | ºUSING FOR-EACHº
 Map˂String, String˃ map = ...             |  Iterator˂Map.Entry˂Integer, Integer˃˃ºitº= |  long i = 0;
 for (                                     |      map.entrySet()Oº.iterator()º;          |  for (
      Map.Entry˂String, String˃ Oºentryº : |  while (ºitºOº.hasNext()º) {                |       Map.Entry˂Integer, Integer˃ pair :
     ºmapºOº.entrySet()º)                  |      Map.Entry˂Integer, Integer˃ pair =     |         ºmyMapºOº.entrySet()º) {
 {                                         |       ºitºOº.next()º;                       |    log.debug( "{}:{}"      ,
   log.debug( "{}:{}"          ,           |    log.debug( "{}:{}"      ,                |             pair.getKey()  ,
            Oºentryº.getKey()  ,           |             pair.getKey()  ,                |             pair.getValue() );
            Oºentryº.getValue());          |             pair.getValue() );              |  }
 }                                         |  }

ºforEach (java 8+)º                        |ºiterating over keySetº          | ºfor + Map.Entryº
 final long[] i = {0};                     | long i = 0;                     |  for (
 ºmapºOº.forEachº((k, v) -˃ i[0] += k + v);| Iterator˂Integer˃ itr2 =        |     Iterator˂Map.Entry˂Integer, Integer˃˃
                                           |    mapOº.keySet().iterator()º;  |        entries = map.entrySet().iterator() ;
                                           | while (itr2.hasNext()) {        |     entries.hasNext();
                                           |     Integer key = itr2.next();  |     ) {
                                           |     i += key + map.get(key);    |   Map.Entry˂Integer, Integer˃
                                           | }                               |      entry = entries.next();
                                                                             |   log.debug( "{}:{}"      ,
                                                                             |           entry.getKey()  ,
                                                                             |           entry.getValue() );
                                                                             |  }
|ºStream API (1.8+)º              |ºStream API parallel (1.8+)º
| final long[] i = {0};           | final long[] i = {0};
| ºmapºOº.entrySet().stream()º    | ºmapºOº.entrySet().stream()º
|  .forEach(e -˃                  |   .parallel().forEach(e -˃
|    i[0] += e.getKey()           |     i[0] += e.getKey()
|          + e.getValue());       |           + e.getValue());
|                                 | RºWARN:º racy i[0] update; prefer
|                                 |   reduce/collect in parallel streams
java.util.Collections utility class
- Utility class with static methods that operate on or return collections

Collections.EMPTY_LIST // The empty (immutable) list
Collections.EMPTY_MAP  // The empty (immutable) map
Collections.EMPTY_SET  // The empty (immutable) set

Enumeration  Collections.emptyEnumeration()  // Returns an enumeration that has no elements.
Iterator     Collections.emptyIterator()     // Returns an iterator that has no elements.
List         Collections.emptyList()         // Returns the empty list (immutable).
ListIterator Collections.emptyListIterator() // Returns a list iterator that has no elements.
Map          Collections.emptyMap()          // Returns the empty map (immutable).
Set          Collections.emptySet()          // Returns the empty set (immutable).

boolean      Collections.addAll(Collection c, T... elements)           Adds all elements to collection 'c'
Queue        Collections.asLifoQueue(Deque deque)                      deque → LIFO Queue view
int          Collections.binarySearch(List list, T key)                Searches key into List using binary search.
int          Collections.binarySearch(List list, T key, Comparator c)  Searches key into List using binary search and a comparator.

Collection   Collections.checkedCollection(Collection c, Class type)   Returns a dynamically typesafe view of input collection/list/...
List         Collections.checkedList(List list, Class type)
Map          Collections.checkedMap
                         (Map m, Class keyType, Class valueType)
Set          Collections.checkedSet(Set s, Class type)
SortedMap    Collections.checkedSortedMap
                         (SortedMap m, Class keyType, Class valueType)

SortedSet    Collections.checkedSortedSet(SortedSet s, Class type)
void         Collections.copy(List dest, List src)                     Copies src list elements to dest list
boolean      Collections.disjoint(Collection c1, Collection c2)        true if c1/c2 have no elements in common.

Enumeration  Collections.enumeration(Collection c)                     Returns an enumeration over the specified collection.

void         Collections.fill(List list, T obj)                        Replaces all of the elements of the specified list with the specified element.
int          Collections.frequency(Collection c, Object o)             Returns the number of elements in the specified collection equal to the specified object.
int          Collections.indexOfSubList(List source, List target)      Returns the starting position of the first occurrence of the specified target list within the specified source list, or -1 if there is no such occurrence.
int          Collections.lastIndexOfSubList(List source, List target)  Returns the starting position of the last occurrence of the specified target list within the specified source list, or -1 if there is no such occurrence.
ArrayList    Collections.list(Enumeration e)                           Returns an array list containing the elements returned by the specified enumeration in the order they are returned by the enumeration.
T            Collections.max(Collection coll)                          Returns the maximum element of the given collection, according to the natural ordering of its elements.
T            Collections.max(Collection coll, Comparator comp)         Returns the maximum element of the given collection, according to the order induced by the specified comparator.
T            Collections.min(Collection coll)                          Returns the minimum element of the given collection, according to the natural ordering of its elements.
T            Collections.min(Collection coll, Comparator comp)         Returns the minimum element of the given collection, according to the order induced by the specified comparator.
List         Collections.nCopies(int n, T o)                           Returns an immutable list consisting of n copies of the specified object.
Set          Collections.newSetFromMap(Map map)                        Returns a set backed by the specified map.
boolean      Collections.replaceAll(List list, T oldVal, T newVal)     Replaces all occurrences of one specified value in a list with another.
void         Collections.reverse(List list)                            Reverses the order of the elements in the specified list.
Comparator   Collections.reverseOrder()                                Returns a comparator that imposes the reverse of the natural ordering on a collection of objects that implement the Comparable interface.
Comparator   Collections.reverseOrder(Comparator cmp)                  Returns a comparator that imposes the reverse ordering of the specified comparator.
void         Collections.rotate(List list, int distance)               Rotates the elements in the specified list by the specified distance.
void         Collections.shuffle(List list)                            Randomly permutes the specified list using a default source of randomness.
void         Collections.shuffle(List list, Random rnd)                Randomly permute the specified list using the specified source of randomness.
Set          Collections.singleton(T o)                                Returns an immutable set containing only the specified object.
List         Collections.singletonList(T o)                            Returns an immutable list containing only the specified object.
Map          Collections.singletonMap(K key, V value)                  Returns an immutable map, mapping only the specified key to the specified value.
void         Collections.sort(List list)                               Sorts the specified list into ascending order, according to the natural ordering of its elements.
void         Collections.sort(List list, Comparator c)                 Sorts the specified list according to the order induced by the specified comparator.
void         Collections.swap(List list, int i, int j)                 Swaps the elements at the specified positions in the specified list.
Collection   Collections.synchronizedCollection(Collection c)          Returns a synchronized (thread-safe) collection backed by the specified collection.
List         Collections.synchronizedList(List list)                   Returns a synchronized (thread-safe) list backed by the specified list.
Map          Collections.synchronizedMap(Map m)                        Returns a synchronized (thread-safe) map backed by the specified map.
Set          Collections.synchronizedSet(Set s)                        Returns a synchronized (thread-safe) set backed by the specified set.
SortedMap    Collections.synchronizedSortedMap(SortedMap m)            Returns a synchronized (thread-safe) sorted map backed by the specified sorted map.
SortedSet    Collections.synchronizedSortedSet(SortedSet s)            Returns a synchronized (thread-safe) sorted set backed by the specified sorted set.
Collection   Collections.unmodifiableCollection(Collection c)          Returns an unmodifiable view of the specified collection.
List         Collections.unmodifiableList(List list)                   Returns an unmodifiable view of the specified list.
Map          Collections.unmodifiableMap(Map m)                        Returns an unmodifiable view of the specified map.
Set          Collections.unmodifiableSet(Set s)                        Returns an unmodifiable view of the specified set.
SortedMap    Collections.unmodifiableSortedMap(SortedMap m)            Returns an unmodifiable view of the specified sorted map.
SortedSet    Collections.unmodifiableSortedSet(SortedSet s)            Returns an unmodifiable view of the specified sorted set.
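A quick sketch exercising a few of the methods above (standalone illustration; the class name and sample data are made up):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CollectionsDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>();
        Collections.addAll(list, "b", "a", "c", "a");             // varargs add

        System.out.println(Collections.frequency(list, "a"));     // 2

        Collections.sort(list);                                   // natural order: [a, a, b, c]
        // the list MUST already be sorted before calling binarySearch
        System.out.println(Collections.binarySearch(list, "b"));  // 2

        List<String> ro = Collections.unmodifiableList(list);     // read-only *view*
        try {
            ro.add("x");                                          // mutation rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("view is unmodifiable");
        }
    }
}
```

Note that `unmodifiableList` returns a view: changes through the original `list` are still visible through `ro`.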
fastutil
- Fast and compact type-specific collections for Java
  Great default choice for collections of primitive types,
  like int or long. Also handles big collections with more
  than 2^31 elements well.
Eclipse Collections
(Originated from Goldman Sachs' gs-collections)
- Features you want with the collections you need
  Previously known as gs-collections, this library
  includes almost any collection you might
  need: primitive type collections, multimaps,
  bidirectional maps and so on.
Guava Collections
- Google Core Libraries for Java 6+
  Perhaps the default collection library for Java
  projects. Contains a multitude of convenient
  methods for creating collections, like fluent
  builders, as well as advanced collection types.
˂˂Enumeration˃˃(1.0) vs ˂˂Iterator˃˃(1.2)

- both interfaces will give successive elements

- Iterators allow the caller to remove elements from
  the underlying collection during the iteration with
  well-defined semantics.
  (additional remove method)
- Iterator method names have been improved.

- Iterators are fail-fast:
  - If thread A changes the collection while
    thread B is traversing it, the iterator implementation
    will try to throw a ConcurrentModificationException
    (best effort, since it cannot always be guaranteed)
  - The fail-fast behavior of iterators should be used only to
    detect bugs, since its best-effort nature doesn't guarantee it triggers.
  - Newer 'concurrent' collections will never throw it:
    a reading thread B will traverse the "snapshot" of the collection
    taken at the start of the read.
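The fail-fast vs snapshot behaviors can be observed even from a single thread (a sketch; CopyOnWriteArrayList stands in here for the 'concurrent' collections):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailFastDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3));
        try {
            for (Integer i : list) {      // for-each uses the list's fail-fast Iterator
                list.remove(i);           // structural change made OUTSIDE the iterator
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("fail-fast triggered");
        }

        // concurrent collections iterate over a snapshot instead:
        List<Integer> cow = new CopyOnWriteArrayList<>(List.of(1, 2, 3));
        for (Integer i : cow) {
            cow.remove(i);                // no CME: the iterator sees the original snapshot
        }
        System.out.println(cow);          // []
    }
}
```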

-ºIterator should be preferred over Enumerationº
  taking the place of Enumeration in the Collections Framework

  Enumeration      │ Iterator
  hasMoreElements()│ hasNext()
  nextElement()    │ next()
                   │ remove() ← optional: not implemented in many classes
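A minimal side-by-side sketch of both APIs (class name and sample data are made up; Collections.enumeration bridges a modern collection to the legacy interface):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.Iterator;
import java.util.List;

public class EnumVsIter {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "b", "c"));

        // Legacy Enumeration: read-only traversal
        Enumeration<String> e = Collections.enumeration(list);
        while (e.hasMoreElements()) {
            System.out.println(e.nextElement());
        }

        // Iterator: shorter names + the optional remove() with well-defined semantics
        Iterator<String> it = list.iterator();
        while (it.hasNext()) {
            if (it.next().equals("b")) {
                it.remove();          // safe removal DURING iteration
            }
        }
        System.out.println(list);     // [a, c]
    }
}
```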
NIO (1.4+)
- Replaced OLD blocking IO based on [ byte/char, read-or-write streams ]
┌──────────┐     ┌──────────────┐                                  
│ºCORE NIOº│     │ºNON─BLOCKINGº│                                  
├──────────┴───┐ ├──────────────┴────────────────────────────────┐ 
│ ─BºCHANNELS º│ │· a thread asks a channel to read/write        │ 
│  ─ read/write│ │  data into/from a buffer:                     │ 
│ ─BºBUFFERS  º│ │  · While the channel moves data into/from     │ 
│ ─BºSELECTORSº│ │   the buffer, the thread continues another job│ 
└──────────────┘ │  · When data is ready, the thread is notified │ 
Channel  : File,Datagram/UDP,Socket/TCP,ServerSocket,...           
Buffer of: Byte|Char|Double|Float|Int|Long|Short|MappedByte)Buffer 
│ ─ components like Pipe and FileLock can be considered               │
│   "utility classes" supporting the first three ones.                │
│                                                                     │
│ ─ "SELECTORS" objects monitor one+ channels for events              │
│   (connection opened, data arrived, ..):                            │
│   ─ Thus, a single thread can monitor multiple channels for data.   │
│     (Very handy if app has many connections/Channels/clients open   │
│     but with low traffic on each connection.)                       │
│   ─ To use selectors:                                               │
│     ─ Instantiate the selector                                      │
│     ─ Register one+ channels with it                                │

│ºBUFFERº                                                                                          │
│ ºATTRIBUTESº                                            ºMETHODSº                                │
│          ┌─────────────────┬───────────────────────────┐ ┌─────────────┬───────────────────────┐ │
│          │ºwriteºmode      │ ºreadºmode                │ │rewind()     │position ← 0, limit    │ │
│ ┌────────┼─────────────────┴───────────────────────────┤ │             │kept: re-read the data │ │
│ │capacity│ fixed size of memory block implementing     │ ├─────────────┼───────────────────────│ │
│ │        │ the buffer                                  │ │clear()      │position ← 0, limit ←  │ │
│ ├────────┼─────────────────┬───────────────────────────┤ │             │capacity: "empty" it   │ │
│ │position│ starts at 0,    │ starts at 0 (after "flip")│ │compact()    │as clear(), but keeps  │ │
│ │        │ increase at each│ increase at each          │ │             │the still-unread data  │ │
│ ├────────┼─────────────────┼───────────────────────────┤ ├─────────────┼───────────────────────│ │
│ │   limit│ element written │ element read              │ │mark()       │"bookmark position"    │ │
│ │        │ == capacity     │ == last written position  │ │reset()      │ and return to bookmark│ │
│ └────────┴─────────────────┴───────────────────────────┘ ├─────────────┼───────────────────────│ │
│                                                          │equals()     │using only the         │ │
│                                                          │compareTo()  │remaining-to-read bytes│ │
│                                                          │             │for the computation    │ │
│                                                          └─────────────┴───────────────────────┘ │

│ºSEQUENCE TO READ/WRITE DATAº                  ┌───────┐
│try (  /* try-with 1.7+ */                     │SUMMARY│
│  RandomAccessFile GºaFileº =                  ├───────┴──────────────────────────
│    new RandomAccessFile("nio-data.txt", "rw") │-1 ) Write data into the Buffer
│) {  /* IOException handled by caller */      │-2 ) Call buffer.ºflip()º
│  FileChannel BºinChannelº =                   │     switch writing/reading mode
│    GºaFileº.getChannel();                     │-3 ) Read data out of the Buffer
│                                               │-4a) buffer.clear();  ← alt1: clear all buffer
│  ByteBuffer Oºbufº=                           │-4b) buffer.compact() ← alt2: clear only data read
│      ByteBuffer.allocate(48 /*capacity*/);    ├────────────────────────────────────
│                                               │ channelIn → (data) → buffer
│                                               │ buffer    → (data) → channelOut
│  int ºbytesReadº=                             └────────────────────────────────────
│       BºinChannelº.read(Oºbufº);  // ← Oºbufº now
│                                          in write mode
│Rºwhileº (ºbytesReadº != -1) {
│    Oºbufº.ºflipº();               // ← Oºbufº now
│    while(Oºbufº.hasRemaining()){         in read mode
│        System.out.print(
│           (char) Oºbufº.get()     // ← alt.1: read 1 byte
│        );                                     at a time
│        // channel2.write(Oºbufº)  // ← alt.2: read data
│    }                                         in channel2
│    Oºbufº.clear();                // ← make buffer
│                                        ready-for-writing
│    ºbytesReadº = BºinChannelº     // ← Oºbufº now
│                    .read(Oºbufº);        in write mode
│  }
│}

│ ºscattering channel readº                        │ºscattering-write to channelº                     │
│ - channel → read to → buffer1, buffer2, ....     │ - buffer1, buffer2, ...→ write to → channel      │
│ - Ex:                                            │ - ex:                                            │
│   ByteBuffer header = ByteBuffer.allocate(128);  │   ByteBuffer header = ByteBuffer.allocate(128);  │
│   ByteBuffer body   = ByteBuffer.allocate(1024); │   ByteBuffer body   = ByteBuffer.allocate(1024); │
│   ByteBuffer[] OºbufferArrayº = { header, body };│   ByteBuffer[] OºbufferArrayº = { header, body };│
│ Bºchannelº.read(OºbufferArrayº);                 │ Bºchannelº.write(OºbufferArrayº);                │
│            ^^^^                                  │                                                  │
│ fill up one buffer before moving on to the next  │                                                  │
│ (*not suited for undefined size messages)        │                                                  │

 -ºIf one of the channels is a FileChannelº:
   - FileChannelºtransferTo()/transferFrom()º can be used to move data between channels
   RºWARN:ºSome SocketChannel implementations may transfer only the data the SocketChannel
     has ready in its internal buffer at that moment
   │  FileChannelGºfromChannelº=                 │  FileChannelGºfromChannelº=                 │
   │     (new RandomAccessFile("from.txt", "rw"))│     (new RandomAccessFile("from.txt", "rw"))│
   │     .getChannel(),                          │     .getChannel(),                          │
   │  FileChannel BºtoChannelº=                  │  FileChannel BºtoChannelº=                  │
   │     (new RandomAccessFile(  "to.txt", "rw"))│     (new RandomAccessFile(  "to.txt", "rw"))│
   │     .getChannel();                          │     .getChannel();                          │
   │  long count    =GºfromChannelº.size();      │  long count    = ;                          │
   │BºtoChannelºº.transferFromº(                 │GºfromChannelºº.transferToº(                 │
   │     GºfromChannelº,                         │      0 /*position*/,                        │
   │       0       , // posit.in dest-file to    │    GºfromChannelº.size() /*count*/,         │
   │                 // start writing from       │    BºtoChannelº);                           │
   │       maxCount  /* max-bytes to transfer*/  │                                             │
   │  );                ^^^^^^^^^                │                                             │
   │                    constrained by data      │                                             │
   │                    in source                │                                             │
API tree
java.nio
           Bits ByteOrder CharBufferSpliterator
           HeapByteBuffer Heap(Byte|Char|...)Buffer(R) HeapCharBuffer

java.nio.channels
                    Channel Channels CompletionHandler FileLock MembershipKey Pipe Selector SelectionKey

java.nio.charset
                   Charset(|Decoder|Encoder) StandardCharsets
                   CoderResult CodingErrorAction

java.nio.file.attribute
                          FileAttribute FileTime

java.nio.file
                AccessMode CopyMoveHelper CopyOption DirectoryStream Files
                LinkOption LinkPermission Path        PathMatcher        Paths
                OpenOption    Standard(Copy|Open)Option
                Watchable      Watch(Event|Key|Service)

- A Selector allows a single thread to manage multiple channels
  (network connections), by examining which ones are ready for
  I/O events (read, write, connect, accept).

- A channel that "fires an event" is also said to be "ready" for that event.

ºREGISTERING A SELECTORº                              │ºUSING SELECTORSº
ºAND ASSIGNING CHANNELSº                              │
    │  Selector BºselectoRº = Selector.open();        │ ºSTEP 1º
    │  channel.configureBlocking(false);              │  call one of the select() methods
    │          ^^^^^^^^^^^^^^^^^^^^^^^^               │  (upon registering 1+ channels)
    │   //     non-blocking-mode required             │  int select(long mSecTimeout) ← block until channel/s ready
    │   // RºWARN:º FileChannel can NOT be switched   │             └────(optional)┘
    │   //   into NON-blocking mode and so            │  int selectNow()              ← Don't block even if none ready
    │   //   they can NOT be used with selectors.     │  └┬┘
┌───→GºSelectionKey keyº = channel.register(          │  indicates how many channels became ready since last select() call.
│   │    Bºselectorº,                                 │
│   │    SelectionKey.OP_READ |                       │ ºSTEP 2º
│   │    SelectionKey.OP_WRITE);                      │  examine ready channels returned by select like:
│   │                 ^^^^^^^                         │  Set˂SelectionKey˃ selectedKeys =
│   │                 Or-set of interest:             │                    BºselectoRº.OºselectedKeys()º;
│   │                 OP_CONNECT / OP_ACCEPT          │  Iterator˂SelectionKey˃ keyIterator =
│   │                 OP_READ    / OP_WRITE           │                    selectedKeys.iterator();
│   │  ^^^^^^^^^^^^^^^^                               │  while(keyIterator.hasNext()) {
│   │                                                 │    GºSelectionKey keyº= keyIterator.next();
│ ┌─→Gºkeyº.attach(extraInfoObject);                  │      //  "cast to proper channel"
│ │ │  Object attachedObj =                           │             if (Gºkeyº.isAcceptable ()) {
│ │ │     selectionKey.attachment();                  │        ... connection accepted by ServerSocketChannel
│ │ │                                                 │      } else if (Gºkeyº.isConnectable()) {
│ │ │                                                 │        ... connection established with remote server
│ │ │ // After selection                              │      } else if (Gºkeyº.isReadable   ()) {
│ │ │ // ^^^^^^^^^^^^^^^                              │        ... channel ready for reading
│ │ │ // explained later                              │      } else if (Gºkeyº.isWritable   ()) {
│ │ │                                                 │        ... channel ready for writing
│ │ │ // Alternative 1:                               │      }
│ │ │ int OºreadySetº= Gºkeyº.readyOps();             │      keyIterator.remove();
│ │ │ boolean isAcceptable  =                         │  }
│ │ │         OºreadySetº ⅋ SelectionKey.OP_ACCEPT;   │ ºSTEP 3º
│ │ │ ...                                             │Bºselectorº.close()
│ │ │ // Alternative 2:                               │            ^^^^^
│ │ │ Gºkeyº.isAcceptable();                          │   must be called after finishing usage,
│ │                                                   │   invalidating all SelectionKey instances
│ └─── (optional) user attached object,               │   registered with this Selector.
│      handy way to recognize a given                 │   The channels themselves are not closed.
│      channel, provide extra info
│      (buffer/s,...)
└─── Gºkeyº can be queried like:
       int OºinterestSetº = Gºkeyº.interestOps();
       boolean isInterestedInAccept
           = OºinterestSetº ⅋ SelectionKey.OP_ACCEPT;

  - A thread blocked by a call to select() can be forced to leave the select() method,
     even if no channels are yet ready by having a different thread call
     the BºselectoRº.ºwakeup()º method on the Selector which the first thread has
     called select() on:
     - The thread waiting inside select() will then return immediately.
     - If a different thread calls wakeup() and no thread is currently
       blocked inside select(), the next thread that calls select()
       will "wake up" immediately.
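A minimal sketch of ºwakeup()º (assuming a Selector with no registered channels, so select() would otherwise block forever):

```java
import java.io.IOException;
import java.nio.channels.Selector;

public class WakeupDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        Selector selector = Selector.open();

        Thread blocked = new Thread(() -> {
            try {
                // No channels registered: this call would block forever without wakeup()
                int n = selector.select();
                System.out.println("select() returned " + n + " ready channels");
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        blocked.start();

        Thread.sleep(200);   // give the other thread time to reach select()
        selector.wakeup();   // forces the blocked select() to return immediately
        blocked.join();
        selector.close();
    }
}
```

Calling wakeup() when no thread is blocked simply makes the next select() return immediately, as described above.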
- Java NIO FileChannel: a channel connected to a file, allowing you to
      read data from and write data to the file.
- A FileChannel canNOT be set into non-blocking mode:
  It always runs in blocking mode

- Reading from FileChannel (Writing to buffer):
  |/* You cannot open a FileChannel directly,
  | * first you obtain a FileChannel via an (Input|Output)Stream or a RandomAccessFile
  | */
  |try (  /* try-with 1.7+, IOException propagates to the caller */
  |  RandomAccessFile GºaFileº = new RandomAccessFile("data/nio-data.txt", "rw")
  |) {
  |  FileChannel BºinChannelº = GºaFileº.getChannel();
  |  ByteBuffer Oºbufº = ByteBuffer.allocate(48 /* capacity*/);
  |  int ºbytesReadº = BºinChannelº.read(Oºbufº); // Oºbufº now in write mode
  |  while (ºbytesReadº != -1) {
  |    Oºbufº.flip();                            // Oºbufº now in read mode
  |    while(Oºbufº.hasRemaining()){
  |        // alt. read data directly, 1 byte at a time
  |        System.out.print((char) Oºbufº.get());
  |        // alt. read data in channel
  |        // anotherChannel.write(Oºbufº)
  |    }
  |    Oºbufº.clear(); //make buffer ready for writing
  |    ºbytesReadº = BºinChannelº.read(Oºbufº); // Oºbufº now in write mode
  |  }
  |}

- Writing to a FileChannel (reading from buffer)
  | String newData = "......" + System.currentTimeMillis();
  | ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  | Oºbufº.clear();
  | Oºbufº.put(newData.getBytes());
  | Oºbufº.flip(); // change buffer from write to read
  | ºwhile(Oºbufº.hasRemaining()) {º channelº.writeº(Oºbufº); º}º
  | channel.close();

- FileChannel Position
  | long pos = fileChannel.position(); // obtain current position
  | fileChannel.position(pos +123); // change position

   - If you set the position after the end of the file,
     and try to read from the channel, you will get -1
   - If you set the position after the end of the file,
     and write to the channel, the file will be expanded
     to fit the position and written data. This may result
     in a "file hole", where the physical file on
     the disk has gaps in the written data.
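The file-expansion behavior can be checked with a small sketch (the temp-file name is arbitrary; whether the gap becomes a sparse "hole" on disk depends on the filesystem):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileHoleDemo {
    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("hole", ".bin");       // empty file, size 0
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.WRITE)) {
            ch.position(1024);                                  // well past the end of the file
            ch.write(ByteBuffer.wrap(new byte[]{1}));           // file is expanded to fit
        }
        System.out.println(Files.size(path));                   // 1025
        Files.delete(path);
    }
}
```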

- FileChannel Size
  | long fileSize = fileChannel.size();
                            size of the file
                            connected to channel

- FileChannel (file) Truncate
  | fileChannel.truncate(1024 /*length*/);

- FileChannel Force:
  flushes all unwritten data from the channel and OS cache to the disk
  | channel.force(true /* flush also file meta-data like permissions....*/);
- Pipe: one-way data connection between two threads
  └"=="  sink   channel   ← one thread writes to the sink
      +  source channel   ← another thread reads from the source
    Pipe pipe = Pipe.open();
    // Writing thread:
    Pipe.SinkChannel sinkChannel = pipe.sink();
    String newData = "..." + System.currentTimeMillis();
    ByteBuffer Oºbufº = ByteBuffer.allocate(48);
    Oºbufº.put(newData.getBytes());
    Oºbufº.flip();                      // switch to read mode before draining into the sink
    while(Oºbufº.hasRemaining()) { sinkChannel.write(Oºbufº); }
    // Reading thread: to read from a Pipe, access the source channel:
    Pipe.SourceChannel sourceChannel = pipe.source();
    ByteBuffer buf2 = ByteBuffer.allocate(48);
    int ºbytesReadº = sourceChannel.read(buf2);
  A SocketChannel can be created in two ways: by opening it and connecting
  to a remote server, or when an incoming connection arrives at a ServerSocketChannel.
  // Opening a SocketChannel
  SocketChannel socketChannel = SocketChannel.open();
  socketChannel.connect(new InetSocketAddress("jenkov.com", 80)); // host name, NOT a URL
  // Reading (writing to buffer)
  ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  int ºbytesReadº = socketChannel.read(Oºbufº); // If -1 is returned, the end-of-stream is reached (connection is closed)
  // Writing to a SocketChannel (reading from buffer)
  String newData = "..." + System.currentTimeMillis();
  ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  Oºbufº.put(newData.getBytes());
  Oºbufº.flip();
  while(Oºbufº.hasRemaining()) { socketChannel.write(Oºbufº); }
Non-blocking Mode
- socketChannelº.configureBlocking(false)º;
- Calls to connect(), read() and write() will not block
- In non-blocking mode connect() calls may return before
  the connection is established:
  - To determine whether the connection is established
    use finishConnect() like this:

  | socketChannel.configureBlocking(false);
  | socketChannel.connect(
  |   new InetSocketAddress("jenkov.com", 80)); // host name, NOT a URL
  | while(! socketChannel.finishConnect() ){
  |     //wait, or do something else...
  | }

NOTE: non-blocking mode works much better with Selectors
ServerSocketChannel serverSocketChannel = ServerSocketChannel.open();

serverSocketChannel.socket().bind(new InetSocketAddress(9999));

    SocketChannel socketChannel =
            serverSocketChannel.accept(); // in blocking mode waits until an incoming connection arrives
    if(socketChannel != null /* can only be null in NON-blocking mode */){
        //do something with socketChannel...
    }
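In non-blocking mode the same accept() returns immediately, with null when no connection is pending; a minimal sketch (port 0 asks the OS for any free port):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NonBlockingAccept {
    public static void main(String[] args) throws IOException {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress(0));   // port 0 → any free port
            server.configureBlocking(false);         // switch to non-blocking mode
            SocketChannel client = server.accept();  // returns immediately
            System.out.println(client == null);      // true: nobody has connected yet
        }
    }
}
```

In a real server the null check is what makes polling loops (or Selector-driven loops) possible.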

Datagram Channel
- Since UDP is a connection-less network protocol, you cannot,
  by default, read from and write to a DatagramChannel as you do
  with other channels. Instead you send and receive packets of data:

  | DatagramChannel channel = DatagramChannel.open();
  | channel.socket().bind(new InetSocketAddress(9999));
  | ByteBuffer Oºbufº = ByteBuffer.allocate(48);
  | Oºbufº.clear();
  | // WARN: if read data is bigger than buffer size remaining data is discarded silently
  | channel.receive(Oºbufº);
  | // Write to channel
  | String newData = "..." + System.currentTimeMillis();
  | Oºbufº.clear();
  | Oºbufº.put(newData.getBytes());
  | Oºbufº.flip();
  | // WARN:  No notice is received about packet delivery (UDP does not make any guarantees)
  | int bytesSent = channel.send(Oºbufº, new InetSocketAddress("jenkov.com", 80));
  | // Alternatively you can "Connect" to a Specific Address. Since UDP is connection-less,
  | // connecting to a remote address just means that the DatagramChannel can only send/receive
  | // data packets from a given specific address.
  | channel.connect(new InetSocketAddress("jenkov.com", 80));
  | int ºbytesReadº = channel.read(Oºbufº);
  | int bytesSent = channel.write(Oºbufº);
NonBlocking Server
- @[http://tutorials.jenkov.com/java-nio/non-blocking-server.html]
- @[https://github.com/jjenkov/java-nio-server]
- Non-blocking IO Pipelines:
read-write pipeline: ºchannelInº → selector → component → ... → componentN → ºchannelOutº
read-only  pipeline: ºchannelInº → selector → component → ... → componentN
write-only pipeline:                        component → ... → componentN → ºchannelOutº
Note: It is the component that initiates reading of data from the Channel via the Selector
The read pipeline reads from the stream/channelIn and splits the data into messages:

Data   → Message → Message
Stream   Reader    Stream

-ºA blocking Message Reader/Writer is simpler, since itº
 ºnever has to handle situations where no data was readº
 ºfrom the stream, or where only a partial message wasº
 ºread from the stream and message parsing needs to beº
 ºresumed later.º
-ºThe drawback of blocking is the requirement of separateº
 ºthreads for each parallel stream, which is a problem if theº
 ºserver has thousands of concurrent connectionsº
- Each thread will take between 320K (32 bit JVM) and
  1024K (64 bit JVM) memory for its stack
- A queue of messages can be used to reduce the problem. However,
  this design requires that the inbound client streams
  send data reasonably often and that input is processed fast.
  If the inbound client streams may be inactive for longer periods
  (e.g. idle clients), a high number of inactive
  connections may actually block all the threads in the thread
  pool. That means that the server becomes slow to respond, or even
  stops responding entirely.
- A non-blocking IO pipeline can use a single thread to
  read messages from multiple non-blocking streams.
    When in non-blocking mode, a stream may return 0 or more
  bytes when you attempt to read data from it.
  When you call select() or selectNow() on the Selector it
  gives you only the SelectableChannel instances ("connected
  clients") that actually have data to read.

OºComponent ──→ STEP 1: select() ──→ Selector ←──┬─→ Channel1º
Oº    ↑                                │         ┼─→ Channel2º
Oº    └───← STEP 2: ready channels ←───┘         └─→ Channel3º

- Reading Partial Messages: Data sent by "ready" channels can
  contain fractions of messages or incomplete messages:
  - The Message Reader needs to check if the data block
    contains at least one full message, and store partial ones
    (maybe using one Message Reader per Channel to avoid mixing messages).
  - To store partial messages, two design concerns should be considered:
    - copy data as little as possible for better performance
    - full messages should be stored in consecutive bytes to
      make parsing messages easier
  - Some protocol message formats are encoded using a TLV format
    (Type, Length, Value).
    Memory management is much easier, since we know immediately
    how much memory to allocate for the message. No memory is
    wasted at the end of a buffer that is only partially used.
  - The fact that TLV encodings makes memory management easier is
    one of the reasons why HTTP 1.1 is such a terrible protocol.
    That is one of the problems trying to be fixed in HTTP 2.0 where
    data is transported in LTV encoded frames.
  - Writing Partial Messages: channelOut.write(ByteBuffer) in
    non-blocking mode gives no guarantee about how many of the
    bytes in the ByteBuffer are written. The method returns
    how many bytes were written, so it is possible to keep track
    of the number of written bytes.
  - Just like with the Message Reader, a Message Writer is used
    per channel to handle all the details.
   (partial writes, message queues, resizable buffers, protocol aware tricks,...)

-ºAll in all a non-blocking server ends up with three "pipelines" itº
 ºneeds to execute regularly:º
  - The read pipeline which checks for new incoming data from
    the open connections.
  - The process pipeline which processes any full messages received.
  - The write pipeline which checks if it can write any outgoing
    messages to any of the open connections
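A toy, single-threaded sketch of that three-pipeline loop (the queues stand in for per-channel MessageReader/MessageWriter state; no real Selector or channels here, just the control-flow shape):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ThreePipelines {
    public static void main(String[] args) {
        Queue<String> inbound  = new ArrayDeque<>(); // full messages assembled by the read pipeline
        Queue<String> outbound = new ArrayDeque<>(); // messages waiting for the write pipeline

        // read pipeline: pretend select() returned data forming one full message
        inbound.add("full-message");

        // process pipeline: handle any completely received messages
        while (!inbound.isEmpty()) {
            outbound.add("echo:" + inbound.poll());
        }

        // write pipeline: flush outgoing messages to writable channels
        while (!outbound.isEmpty()) {
            System.out.println(outbound.poll());   // stands in for channel.write(...)
        }
    }
}
```

A real server would run these three steps in a loop, driven by Selector events on the read and write sides.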
Path (1.7+)
- Represents a file/directory path in the FS
- Similar to java.io.File but with some minor differences.
// Usage
import java.nio.file.Path;
import java.nio.file.Paths;

Path path = Paths.get("/var/lib/myAppData/myfile.txt");
System.out.println("Current dir:"+Paths.get(".").toAbsolutePath());
- java.nio.file.Files provides several methods for manipulating FS files/directories:
- uses Path instances:

boolean pathExists = ºFiles.existsº(pathInstance,
            new LinkOption[]{ LinkOption.NOFOLLOW_LINKS});

Path newDir = ºFiles.createDirectoryº(path);

ºFiles.copyº(sourcePath, destinationPath);
ºFiles.copyº(sourcePath, destinationPath, StandardCopyOption.REPLACE_EXISTING);

ºFiles.moveº(sourcePath, destinationPath, StandardCopyOption.REPLACE_EXISTING);


Files.walkFileTree(Paths.get("data"), new FileVisitor˂Path˃() {
  @Override public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
    return FileVisitResult.CONTINUE;
  }
  @Override public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
    return FileVisitResult.CONTINUE;
  }
  @Override public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
    return FileVisitResult.CONTINUE;
  }
  @Override public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
    return FileVisitResult.CONTINUE;
  }
});
Asynchronous FileChannel (1.7+)
read/write data from/to files asynchronously

Path path = Paths.get("data/test.xml");
AsynchronousFileChannel fileChannel =
    AsynchronousFileChannel.open(path, StandardOpenOption.READ);

// Reading Data, Alt 1: Via a Future
  ByteBuffer buffer = ByteBuffer.allocate(1024);
  Future˂Integer˃ operation = fileChannel.read(buffer, 0 /*start position to read from */);
  while(!operation.isDone());  // WARN: busy-waiting, not a very efficient use of the CPU
  buffer.flip();               // switch buffer to read mode
  byte[] data = new byte[buffer.limit()];
  buffer.get(data);            // copy the read bytes out of the buffer
  System.out.println(new String(data));

// Reading Data Alt 2: Via a CompletionHandler
fileChannel.read(buffer, position, buffer, new CompletionHandler˂Integer, ByteBuffer˃() {
    public void completed(Integer numBytesRead, ByteBuffer attachment) {
        // NOTE: attachment is a reference to the third parameter passed to read()
        System.out.println("numBytesRead = " + numBytesRead);
        attachment.flip();
        byte[] data = new byte[attachment.limit()];
        attachment.get(data);
        System.out.println(new String(data));
    }
    public void failed(Throwable exc, ByteBuffer attachment) { ...  }
});

// Writing data:
AsynchronousFileChannel fileChannel =
    AsynchronousFileChannel.open(path, StandardOpenOption.WRITE);

// Writing Data: Alt 1: Via a Future
  Future operation = fileChannel.write(buffer, position);

// Writing Data: Alt 2: Via CompletionHandler
  fileChannel.write(buffer, position, buffer, new CompletionHandler˂Integer, ByteBuffer˃() {

      @Override public void completed(Integer result, ByteBuffer attachment) { /* ... */ }
      @Override public void failed   (Throwable exc , ByteBuffer attachment) { /* ... */ }
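Putting the Future-based fragments together, a minimal runnable sketch that writes a file asynchronously and reads it back (class and method names are illustrative):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncFileDemo {
    public static String roundTrip(Path path, String text) throws Exception {
        // Write asynchronously, blocking on the Future instead of spinning on isDone().
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                path, StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8));
            Future<Integer> op = ch.write(buf, 0);
            op.get();                              // wait for the write to complete
        }
        // Read it back the same way.
        try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                path, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            ch.read(buf, 0).get();
            buf.flip();                            // switch the buffer to reading mode
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("async", ".txt");
        System.out.println(roundTrip(p, "hello"));  // hello
    }
}
```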
API REF: @[https://docs.oracle.com/en/java/javase/11/docs/api/java.net.http/java/net/http/package-summary.html]
JEP: @[https://openjdk.java.net/jeps/321]

TODO: HTTPClient Quick intro:
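Until that intro lands, a minimal java.net.http.HttpClient (Java 11+) sketch; the endpoint URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HttpClientDemo {
    // Build (but do not send) a GET request; the URL is a placeholder.
    public static HttpRequest buildGet(String url) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpRequest req = buildGet("https://example.com/");
        System.out.println(req.method() + " " + req.uri());   // GET https://example.com/

        // Sending requires network access:
        // HttpClient client = HttpClient.newBuilder()
        //         .followRedirects(HttpClient.Redirect.NORMAL)
        //         .build();
        // HttpResponse<String> res =
        //         client.send(req, HttpResponse.BodyHandlers.ofString());
        // System.out.println(res.statusCode());
    }
}
```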
- BºOkHTTP vs java.net.http.HTTPClientº
From @[https://stackoverflow.com/questions/42392778/okhttp-or-httpclient-which-offers-better-functionality-and-more-efficiency]
 BºOkHTTP PROs over        º          RºOkHTTP CONs over        º
 Bºjava.net.http.HTTPClientº          Rºjava.net.http.HTTPClientº
 - built-in response cache.           - timeout-like configuration cannot be
 - web sockets.                         changed on the shared client instance.
 - Simpler API.                       - Requires (small) extra non-JDK dependencies
 - Better defaults                      (okIO and okHTTP itself) in non-Android
 - easier to use efficiently.           deployments
 - Better URL model.
 - Android support. (RºHTTPClientº
 Rºsupported in Android?º)
 - Better cookie model.
 - Better headers model.
 - Better call model.
 - canceling calls is easy.
 - Carefully managed TLS defaults
   secure and widely compatible.
 - Retrofit compatibility 
   (Brilliant API for REST).
 - works with okIO, a great library
   for data streams.
 - less code to learn
 - 1+ billion Android devices 
   using it internally
 - Standard in Android 5.0+ (API level 21+). 


BºDAILY USAGEº @[https://square.github.io/okhttp/recipes/]

  import java.io.IOException;
  import okhttp3.OkHttpClient;
  import okhttp3.Request;
  import okhttp3.Response;
  import okhttp3.MediaType;                         // ← For POST
  import okhttp3.RequestBody;                       // ← For POST
  OkHttpClient client = new OkHttpClient();

  Request request = new Request.Builder()
      .url("https://example.com/api")               // ← hypothetical endpoint
      .build();
  try (
    Response res =
       client.newCall(request).execute()            // ← Exec. (GET) Request
  ) {
    System.out.println(res.body().string());
  }

  final MediaType JSON =
      MediaType.get("application/json; charset=utf-8");
  final String jsonBody = "{...}";
  RequestBody body = RequestBody.create(jsonBody, JSON); // ← POST: prepare request body
  Request request2 = new Request.Builder()
      .url("https://example.com/api")               // ← hypothetical endpoint
      .post(body)                                   // ← POST: attach body
      .build();
  try (
    Response res =
       client.newCall(request2).execute()
  ) {
    return res.body().string();
  }

- "efficient by default".

- HTTP/2 support allows all requests to the same host to share a socket.
- Connection pooling reduces request latency (if HTTP/2 isn't available).
- Transparent GZIP shrinks download sizes.
- Response caching avoids the network completely for repeat requests.

- OkHttp perseveres when the network is troublesome: it will silently recover
  from common connection problems.
BºIf target service has multiple IP addresses OkHttp will attempt alternate
  addresses if the first connect failsº.
BºThis is necessary for IPv4+IPv6 and for services hosted in redundant data centersº.

- OkHttp supports modern TLS features (TLS 1.3, ALPN, certificate pinning). It
  can be configured to fall back for broad connectivity.

- request/response API is designed with ºfluent builders and immutabilityº
- sync and async(callback) APIs.

- TODO: Ex: Balancing connections with OkHttp.

okIO
- complements java.nio
- makes it much easier to access, store, and process your data.
- It started as a component of OkHttp, the capable HTTP client
  included in Android. It's well-exercised and ready to solve new problems.
man 1 jcmd

- Sends diagnostic command requests to a running JVM.

- It must be used on the same machine on which the JVM is running and 
have the same effective user and group identifiers that were used to 
launch the JVM.

- Usage Summary:
  $ jcmd [-l]  # ← print list of running Java PIDs.
  $ jcmd pid|main-class PerfCounter.print  ← Send diagnostic command PerfCounter.print to PID JVM
                                             ($ jcmd pid|main-class help lists the available diagnostic commands)

  $ jcmd pid|main-class -f filename        ←  file from which to read diagnostic commands to send to JVM

  $ jcmd pid|main-class command[ arguments]

 $º$ jcmd $PID º
  → The following commands are available:
  → Compiler.CodeHeap_Analytics
  → Compiler.codecache
  → Compiler.codelist
  → Compiler.directives_add
  → Compiler.directives_clear
  → Compiler.directives_print
  → Compiler.directives_remove
  → Compiler.queue
  → GC.class_histogram
  → GC.class_stats
  → GC.finalizer_info
  → GC.heap_dump
  → GC.heap_info
  → GC.run
  → GC.run_finalization
  → JFR.check
  → JFR.configure
  → JFR.dump
  → JFR.start
  → JFR.stop
  → JVMTI.agent_load
  → JVMTI.data_dump
  → ManagementAgent.start
  → ManagementAgent.start_local
  → ManagementAgent.status
  → ManagementAgent.stop
  → Thread.print
  → VM.class_hierarchy
  → VM.classloader_stats
  → VM.classloaders
  → VM.command_line
  → VM.dynlibs
  → VM.flags
  → VM.info
  → VM.log
  → VM.metaspace
  → VM.native_memory
  → VM.print_touched_methods
  → VM.set_flag
  → VM.stringtable
  → VM.symboltable
  → VM.system_properties
  → VM.systemdictionary
  → VM.uptime
  → VM.version
  → help

JVisualVM (standard on the JDK)
- displays memory usage and other useful things:
- Visual tool integrating commandline JDK tools and
  lightweight profiling capabilities.
- RºWARN: Deprecated by Flight Recorder?º
  Flight Recorder works both for development and production
  while JVisualVM has performance hits

Deadlock analysis
Flight Recorder

- created originally in 1998 by students from the Royal 
  Institute of Technology in Stockholm as part of the JRockit JVM 
  distribution by Appeal Virtual Machines.
- built directly into the JDK, it Bºcan monitor performance accuratelyº 
  with only about Bº2% overhead (production friendly)º. 
  │ JRE      ┌────────┐ │                      Mission Control (Visual Console) 
  │          │ JFR    ----→ myRecording.jfr  → |JMC| (it can also connect
  │          │ engine │ │   - compact log of          directly to running JVM)
  │          └────────┘ │   OºJVM eventsº
  $ java ... Oº-XX:StartFlightRecordingº ...
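  Typical ways to start a recording (sketch; app.jar and ˂pid˃ are placeholders):

```shell
# Record 60s at startup and dump to a file:
java -XX:StartFlightRecording=duration=60s,filename=rec.jfr -jar app.jar

# Or control a recording on an already-running JVM via jcmd:
jcmd <pid> JFR.start name=rec
jcmd <pid> JFR.dump  name=rec filename=rec.jfr
jcmd <pid> JFR.stop  name=rec
```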

  - As a result the metrics provided by default in JFR are aimed more 
    towards the JVM's raw operations than high-level metrics like 
    request/response times. For example:
    - advanced garbage collection analysis:
       Unlike common tools that simply report garbage collection statistics,
       JFR details what garbage was collected and who threw it away,
       allowing developers to improve performance by:
       - identifying specifically what to improve,
       - realizing when tuning garbage collection is the wrong solution.

- Flight Recorder is an automated black box recorder that is already
  present inside the JVM and acts to record information.
  - Mission Control is the visual console, run on a different system
  that enables operators to control the black box by evaluating metrics
  or creating performance snapshots.
- Unlike external performance monitoring systems, JFR is built directly
  into the JDK and can monitor performance in an accurate manner that
  does not mislead readers via safe points or sampling. The result of
  JFR is accurate performance diagnostics where the act of measuring
  incurs only about a 2% overhead. These diagnostics provide developers
  and operators with the ability to gather actual performance data
  instead of making guesses or pointing fingers.
  - @[http://psy-lob-saw.blogspot.com/2015/12/safepoints.html]
  - @[http://psy-lob-saw.blogspot.com/2016/02/why-most-sampling-java-profilers-are.html]
- While many profilers focus on high-level metrics, such as
  request/response load times, the concept of a "web request" does
  not exist down at the JDK’s layer. As a result the metrics provided
  by default in JFR are aimed more towards the JVM's raw operations.
  One feature in particular is advanced garbage collection analysis.
  Unlike common tools that simply report garbage collection statistics,
  the analysis capabilities within JFR provide details on what garbage
  was collected and who threw it away. This feature drives two conclusions
  that improve performance:
  - developers can identify specifically what to improve,
  - realize when tuning garbage collection is the wrong solution.

- Free to use starting with Java 11+
- Backported to OpenJDK 8
- (JEP 328)

CRaSH shell
- Connect to any JVM running CRaSH through SSH, telnet or web.
- Monitor and/or use virtual machine resources:
  JMX, database access, threads, memory usage, ...
- Embed CRaSH and expose services via a command line interface.
- Hot reload provides rapid development.
- Officially embedded as Spring Boot remote shell.

Eclipse Mem. Analyzer
"""The Eclipse Memory Analyzer is a fast and feature-rich Java heap
  analyzer that helps you find memory leaks and reduce memory consumption.

  Use the Memory Analyzer to analyze productive heap dumps with hundreds of
  millions of objects, quickly calculate the retained sizes of objects, see
  who is preventing the Garbage Collector from collecting objects, run a
  report to automatically extract leak suspects."""

It can provide reports and warnings similar to:
  (REF: @[https://www.youtube.com/watch?v=5joejuE2rEM])
  The classloader/component "sun.misc.Launcher$AppClassLoader@0x123412"
  occupies 607,654,123 (38.27%) bytes.
RºThe memory is accumulated in one instanceº of
  java.util.LinkedList$Entry loaded by 'system class loader'
[root@spark ~]# yum install systemtap systemtap-runtime-java

JAVA                                              SystemTap Profiling script
package com.premiseo;                             #!/usr/bin/env stap

import java.lang.*;                               global counter,timespent,t
import java.io.BufferedReader;
import java.io.InputStreamReader;                 probe begin {
import java.io.IOException;                         printf("Press Ctrl+C to stop profiling\n")
class Example {                                     timespent=0
   public static void                             }
     loop_and_wait(int n)
         throws InterruptedException{             probe java("com.premiseo.Example").class("Example").method("loop_and_wait")
         System.out.println(                      {
            "Waiting "+n+"ms... Tick");             counter++
         Thread.sleep(n);                           t=gettimeofday_ms()
     }                                            }

   public static void main(String[] args) {       probe java("com.premiseo.Example").class("Example").method("loop_and_wait").return
      System.out.println("PID = "+                {
          java.lang.management.                     timespent+=gettimeofday_ms()-t
              ManagementFactory.                  }
                     getName().split("@")[0]);    probe end {
      System.out.println(                            printf("Number of calls for loop_and_wait method: %ld \n",    counter)
              "Press key when ready ...");           printf("Time Spent in method loop_and_wait: %ld msecs \n", timespent)
      try {                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        BufferedReader in =                       profiling loop_and_wait:
           new BufferedReader(                    counts number of times the
              new InputStreamReader(System.in));  loop_and_wait method has been called,
        String next = in.readLine();              and the time spent in this method execution.
      } catch (IOException ioe) {}

      try {
        for (int i=0;i˂10;i++) {
          loop_and_wait(1000);
        }
      } catch (InterruptedException ie) {}
   }
}
Fast Thread
- Java Thread Dump Analyzer
- Troubleshoot JVM crashes, slowdowns, memory leaks, freezes, CPU Spikes
- Instant RCA (don't wait for Vendors)
- Machine Learning
- Trusted by 4000+ enterprises
- Free Service
GCeasy
- machine learning guided Garbage collection log analysis tool.
  GCeasy has in-built intelligence to auto-detect problems in the JVM ⅋ Android
  GC logs and recommend solutions to it.
  - Solve Memory ⅋ GC problems in seconds
  - Get JVM Heap settings recommendations
  - Machine Learning Algorithms
  - Trusted by 4,000+ enterprises
  - Free
  - A perfect DevOps tool!
    Made by the developers, for the developers
libperfagent (perf agent)
Extracted from:
  "Apache Spark @Scale: A  production use case"
  ...  Tools we used to find performance bottleneck
  - Spark Linux Perf/Flame Graph support: Although the two tools 
    above are very handy, they do not provide an aggregated view of CPU 
    profiling for the job running across hundreds of machines at the same 
    time. On a per-job basis, Bºwe added support for enabling Perf º
  Bºprofiling (via libperfagent for Java symbols) and can customize the º
  Bºduration/frequency of sampling. The profiling samples are aggregatedº
  Bºand displayed as a Flame Graph across the executors using our       º
  Bºinternal metrics collection framework.                              º
Uber JVM Profiler: Tracing at scale
Our JVM Profiler supports a variety of use cases, most notably making 
it possible to instrument arbitrary Java code. Using a simple 
configuration change, the JVM Profiler can attach to each executor in 
a Spark application and collect Java method runtime metrics. Below, 
we touch on some of these use cases:
- Right-size executor: We use memory metrics from the JVM Profiler 
  to track actual memory usage for each executor so we can set the 
  proper value for the Spark “executor-memory” argument.
- Monitor HDFS NameNode RPC latency: We profile methods on the 
  class org.apache.hadoop.hdfs...ClientNamenodeProtocolTranslatorPB 
  in a Spark application and identify long latencies on NameNode calls. 
  We monitor more than 50 thousand Spark applications each day with 
  several billions of such RPC calls.
- Monitor driver dropped events: We profile methods like 
  org.apache.spark.scheduler.LiveListenerBus.onDropEvent to trace 
  situations during which the Spark driver event queue becomes too long 
  and drops events.
- Trace data lineage: We profile file path arguments on the method 
org.apache.hadoop.hdfs...getBlockLocations and 

Uber JVM Profiler provides a Java Agent to collect various metrics 
and stacktraces for Hadoop/Spark JVM processes in a distributed way, 
for example, CPU/Memory/IO metrics.

Uber JVM Profiler also provides advanced profiling capabilities to 
trace arbitrary Java methods and arguments in the user code without 
requiring user-code changes. This feature could be used to trace 
HDFS name node call latency for each Spark application and identify 
name node bottlenecks. It could also trace the HDFS file paths each 
Spark application reads or writes and identify hot files for further 

This profiler is initially created to profile Spark applications 
which usually have dozens of or hundreds of processes/machines for a 
single application, so people could easily correlate metrics of these 
different processes/machines. It is also a generic Java Agent and 
could be used for any JVM process as well.
Concurrent Programming
External Links
- Youtube Concurrency Classes: 
@[https://www.youtube.com/watch?v=8yD0hHAz3cs&list=PLw8RQJQ8K1ySGcb3ZP66peK4Za0LKf728&index=4] [RºES langº]
1uSec Thread sync
- If caches are so in-sync with one another, why do we need volatiles at all in
  languages like Java?

  That’s a very complicated question that’s better answered elsewhere, but
  let me just drop one partial hint. Data that’s read into CPU registers, is
  not kept in sync with data in cache/memory. The software compiler makes all
  sorts of optimizations when it comes to loading data into registers, writing it
  back to the cache, and even reordering of instructions. This is all done
  assuming that the code will be run single-threaded. Hence why any data that is
  at risk of race-conditions, needs to be manually protected through concurrency
  algorithms and language constructs such as atomics and volatiles.

 ☞In the case of Java volatiles, part of the solution is to force all
  reads/writes to bypass the local registers, and immediately trigger cache
  reads/writes instead. As soon as the data is read/written to the L1 cache, the
  hardware-coherency protocol takes over and provides guaranteed coherency across
  all global threads. Thus ensuring that if multiple threads are reading/writing
  to the same variable, they are all kept in sync with one another. And this is
  how you can achieve inter-thread coordination in as little as 1ns.
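A minimal sketch of this visibility guarantee using a volatile status flag (names are illustrative; without the volatile keyword the reader thread may spin forever on some JVMs):

```java
public class VolatileFlagDemo {
    // 'volatile' forces every read/write of 'ready' to go to the cache/memory,
    // never a stale CPU register; the hardware coherency protocol does the rest.
    private static volatile boolean ready = false;
    private static int payload = 0;

    public static int runOnce() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the writer publishes */ }
            System.out.println("reader saw payload = " + payload);
        });
        reader.start();
        payload = 42;   // ordinary write...
        ready = true;   // ...made visible by the volatile write (happens-before)
        reader.join();
        return payload;
    }

    public static void main(String[] args) throws Exception {
        runOnce();
    }
}
```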
Concurrency Basics
- Concurrency problems arise from the desire to use CPU resources more efficiently. Non-concurrent
  applications (single-threaded/single-process) are complete Turing machines that can potentially
  solve any problem given enough time and memory. In practice, keeping a CPU assigned to a single
  thread is very inefficient, since the CPU will stand by while the thread waits for input/output
  operations. Also, many algorithms allow splitting the processed data into isolated regions that
  can be processed in parallel by different CPUs/CPU cores.
- Concurrency tries to solve the problem of multiple independent CPUs or threads accessing shared
  resources (memory).
- Locks are the simplest concurrency primitive to protect code or data from concurrent
  access in situations where there are many threads of execution. Locks can be classified like:
  | According to lock usage:
  |    Cooperative   A thread is encouraged (but not forced) to cooperate with other
  |                  threads by acquiring a lock before accessing the associated data
  |    Mandatory     a thread trying to access an already locked resource will throw
  |                  an exception
  | _________________________________________________
  | According to lock rescheduling strategy:
  |    Blocking      The OS blocks the thread requesting the lock and reschedules another thread
  |    Spinlock      The thread waits in a loop until the requested lock becomes available.
  |                  It's more efficient if threads are blocked for a very short time (smaller than
  |                  the time needed by the OS to reschedule another thread onto the current CPU)
  |                  It's inefficient if the lock is held for a long time, since a CPU core is
  |                  wasted on the spinlock loop
  | _________________________________________________
  | According to granularity: (measure of the amount of data the lock is protecting)
  |    Coarse        Protect large segments of data (few locks). Results in less lock overhead
  |                  for a single thread, but worse performance for many threads running concurrently
  |                  (most threads will be lock-contended, waiting for shared-resource access)
  |    Fine          Protect small amounts of data. Requires more lock instances, reducing lock
  |                  contention at the cost of extra per-lock overhead

- Locks require CPU atomic instructions for efficient implementation, such as
    "test-and-set", "fetch-and-add", or "compare-and-swap", whether they are blocking
    (managed by the OS) or spinlocks (managed by the thread)
- Uniprocessors can just disable interrupts to implement locks, while multiprocessors
  using shared memory will require complex hardware and/or software support
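The "compare-and-swap" instruction mentioned above backs the java.util.concurrent.atomic classes; a toy spinlock sketch built on it (illustrative, not production code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin until we atomically flip false -> true (compare-and-swap).
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // Java 9+: hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        locked.set(false);
    }

    // Demo: two threads incrementing a shared counter under the spinlock.
    static int counter;

    public static int demo() throws InterruptedException {
        counter = 0;
        SpinLock lock = new SpinLock();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join();  b.join();
        return counter;   // 200000 with the lock; typically less without it
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```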
-  ºMonitors wrap mutex-locks with condition variables (container of threads waitingº
   ºfor certain condition)º. They are implemented as thread-safe classes
-ºObject providing Mutual exclusion of threads to shared resourcesº
- simplest form of synchronization:
  alternatives include:
  - reads and writes of volatile variables
    typically used in applications when one thread will
    be making changes to the variables and the others all reading or
    consumers of the data. If you have multiple threads making changes to
    the data it will be best to stick with synchronized block or use
    java.util.concurrent library package.
    (volatile is actually simpler than monitors, but not universal)
    Important Points on Volatile Variables:
    - Volatile variables areºnot cached in registers or in cachesº:
     ºAll reads and writes are done in main memory, never thread-locallyº
    - Example usage: status flags used in spin loops
    - The volatile keywordºguarantees visibility and orderingº
  - use of classes in the java.util.concurrent package
- Monitors also have the ability to wait(block a thread) for a certain condition
  to become true, and signal other threads that their condition has been met
-ºMonitors provide a mechanism for threads to temporarily give up exclusive access inº
 ºorder to wait for some condition to be met, before regaining exclusive access and  º
 ºresuming their taskº
- each java object can be used as a monitor.
- Methods/blocks of code requiring mutual exclusion must be explicitly marked with the
Oºsynchronized keywordº:
  - The synchronized statement computes a reference to an object;
    it then attempts to perform a lock action on that object's monitor and does not
    proceed further until the lock action has successfully completed.
    After the lock action has been performed, the body of the synchronized statement
    is executed. If execution of the body is ever completed, either normally or abruptly,
    an unlock action is automatically performed on that same monitor.
  - RºWARNº: The Java programming language neither prevents nor requires detection
    of deadlock conditions.
- Instead of explicit condition variables, each monitor(/object) is equipped with
  a single wait queue in addition to its entrance queue.
- All waiting is done on this singleOºwait queueº and allOºnotify/notifyAllº
  operations apply to this queue.

ºmonitorº   enter
 ┌───┬─────── │ ──┐   - Wait sets are manipulated solely and atomically
 │  notified  v   │     through the methods
 │ ─────→         │    ºObject.waitº     : move     running thread    → wait-queue
 │   │        O   │    ºObject.notifyº   : move     thread  wait-queue → enter-queue
 │ O │        O   │    ºObject.notifyAllº: move all threads wait-queue → enter-queue
 │ O ├─────── │ ──┴─┐   Interrupt??      : put thread into to monitor enter-queue
 │ O │        v     │
 │  ←──wait   O     │  - In timed-waits  : internal action removes thread to enter-queue?
 │   │     (Running │                      after at least milliseconds plus nanoseconds
 └───┤      thread) │  - Implementations are permitted (but discouraged),
     │              │    to perform "spurious wake-ups"
     │    leave     │
     └────── │ ─────┘  O = Thread (Instruction Pointer + Stack Pointer + ...?)
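The wait/notify cycle in the diagram above can be sketched as a classic one-slot bounded buffer (illustrative code):

```java
public class OneSlotBuffer<T> {
    private T slot;               // null = empty

    public synchronized void put(T value) throws InterruptedException {
        while (slot != null) {    // always re-check the condition in a loop:
            wait();               //   spurious wake-ups are permitted
        }
        slot = value;
        notifyAll();              // move waiting consumers to the enter-queue
    }

    public synchronized T take() throws InterruptedException {
        while (slot == null) {
            wait();
        }
        T value = slot;
        slot = null;
        notifyAll();              // move waiting producers to the enter-queue
        return value;
    }

    public static void main(String[] args) throws Exception {
        OneSlotBuffer<Integer> buf = new OneSlotBuffer<>();
        Thread producer = new Thread(() -> {
            try { for (int i = 0; i < 3; i++) buf.put(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        for (int i = 0; i < 3; i++) System.out.println(buf.take());  // 0, 1, 2
        producer.join();
    }
}
```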


CountDownLatch
- Object allowing 1+ threads to wait until 1+ operations are completed in other threads.

- A CountDownLatch is a versatile synchronization tool and can be 
  used for a number of purposes. A CountDownLatch initialized with a 
  count of one serves as a simple on/off latch, or gate: all threads 
  invoking await wait at the gate until it is opened by a thread 
  invoking countDown(). A CountDownLatch initialized to N can be used 
  to make one thread wait until N threads have completed some action, 
  or some action has been completed N times.

- A useful property of a CountDownLatch is that it doesn't require 
  that threads calling countDown wait for the count to reach zero 
  before proceeding, it simply prevents any thread from proceeding past 
  an await until all threads could pass.

- Sample usage: Here is a pair of classes in which a group of worker threads use two countdown latches:
  The first is a start signal that prevents any worker from proceeding until the driver is ready for them to proceed;
  The second is a completion signal that allows the driver to wait until all workers have completed.

  -  Ex. 1:
     class Driver { // ...

       class Worker implements Runnable {
         private final CountDownLatch OºstartSignalº;
         private final CountDownLatch BºdoneSignalº;
         Worker(CountDownLatch OºstartSignalº, CountDownLatch BºdoneSignalº) {
          Oºthis.startSignal = startSignalº;
          Bºthis.doneSignal  = doneSignalº;
         }
         public void run() {
            try {
              OºstartSignalº.await();    // ← wait until the driver opens the gate
              doWork();
              BºdoneSignalº.countDown(); // ← Decrease count
            } catch (InterruptedException ex) {} // return;
         }

         void doWork() { ... }
       }

       void main() throws InterruptedException {
         CountDownLatch OºstartSignalº= newºCountDownLatch(1);º// ← initialized with a given count
         CountDownLatch BºdoneSignalº = newºCountDownLatch(N);º// ← "
                                                             // ← Consider alsoºCyclicBarrierº(reset after count),
         for (int i = 0; i ˂ thread_number ; ++i) {
           new Thread(new Worker(OºstartSignalº, BºdoneSignalº)).start();
         }
       OºstartSignalº.countDown();      // ← Decrease count. count cannot be reset.
       BºdoneSignalº.await();           // ← block until current count reaches zero.
                                            Thread is released. Any subsequent invocations
                                            return immediately.
       }
     }

  -  Ex. 2:
     - divide problem into N parts
     - describe each part with a Runnable executing a portion,
     - queue all Runnables to an Executor.
     - When all sub-parts are complete, coordinating-thread will "pass" through await.

     class Driver2 { // ...
       class WorkerRunnable implements Runnable {
         private final CountDownLatch OºdoneSignalº;
         private final int i;
         WorkerRunnable(CountDownLatch OºdoneSignalº, int i) {
          Oºthis.doneSignal = doneSignalº;
          this.i = i;
         }

         public void run() {
            try {
              doWork();
              OºdoneSignalº.countDown();
            } catch (InterruptedException ex) {} // return;
         }

         void doWork() { ... }
       }
       void main() throws InterruptedException {
         CountDownLatch OºdoneSignalº= new CountDownLatch(N);
         Executor e = ...

         for (int i = 0; i ˂ N; ++i) // create and start threads
           e.execute(new WorkerRunnable(OºdoneSignalº, i));

       OºdoneSignalº.await();           // wait for all to finish
       }
     }
Scheduling: Runnables|Callables Executors
 ┌────────────┐ │ ┌───────────────┐  │
 │˂˂Runnable˃˃│ │ │˂˂Callable˂V˃˃˃│  │          ˂˂Executor˃˃
 │────────────│ │ │───────────────│  │               ↑
 │+run()      │ │ │+call()        │  │       ˂˂ExecutorService˃˃  ← managed collection of threads
 └────────────┘ │ └───────────────┘  │               ↑              available to execute tasks
       ^        │ (util.concurrent)  │    ┌──────────┴──────────────┐
       │        │                    │    │                         │
 ┌────────────┐ │ Allows to return   │ AbstractExecutorService  ˂˂ScheduledExecutorService˃˃
 │   Thread   │ │ a result/Exception │    ↑                         ↑
 │────────────│ │ to the thread      │ ThreadPoolExecutor           │
 │+run()      │ │ triggering the     │    ↑                         │
 │+start()    │ │ Callable           │ ScheduledThreadPoolExecutor ─┘
 │+sleep()    │ │                    │ ^^^^^^^^^^^^^^^^^^^^^^^^^^^
 │....        │ │                    │ Preferred over "old" java.util.Timer:
 │────────────│ │                    │  - Timer can be sensitive to system clock changes
 └────────────┘ │                    │  - Timer has only one execution thread. Long-running
                                          task can delay other tasks. ScheduledThreadPoolExecutor
                                          can be configured with "N" threads.
                                        - Runtime exceptions thrown kill the Timer thread.
                                          ScheduledThreadExecutor catches Runtime Ex,
                                          and allows to handle them by overriding afterExecute
                                          method from ThreadPoolExecutor). Only the task throwing
                                          the exception will be canceled.
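The ScheduledThreadPoolExecutor points above in a minimal runnable sketch (names and periods are illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerDemo {
    // Run a fixed-rate task until it has ticked n times; return true if it did.
    public static boolean runTicks(int n) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        CountDownLatch done = new CountDownLatch(n);
        // A RuntimeException thrown here would cancel only this task,
        // not kill the scheduler (unlike java.util.Timer).
        scheduler.scheduleAtFixedRate(done::countDown, 0, 10, TimeUnit.MILLISECONDS);
        boolean finished = done.await(5, TimeUnit.SECONDS);
        scheduler.shutdown();
        return finished;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTicks(3));   // true
    }
}
```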

- ˂˂ExecutorService˃˃(Thread Pool) usage:
  - Alt 1: use an implementation of the interface,
           such as ThreadPoolExecutor or ScheduledThreadPoolExecutor,
           and call ˂˂ExecutorService˃˃Instance.execute(/*Runnable*/ runnableInstance)
           to add a Runnable task to the thread pool.
           -  Will execute the task at some time in the
              future in a new thread, in a thread pool, or
              in the calling thread, depending on the implementation
           + public     void      shutdown(); // Initiates orderly shutdown of pool
           + public ˂T˃ Future˂T˃ submit(Callable˂T˃ task); // schedule callable task for execution

  - Alt 2: Use factory methods in the 'Executors' class:
          - Write worker thread class implementing ˂˂Runnable˃˃ run()
          Executors.newFixedThreadPool(int numThreads)
          Executors.newCachedThreadPool(): // ← unbounded pool, with automatic reclamation
          Executors.newScheduledThreadPool(int size)

ºEx: CallableThreadPoolTestº                                                 ºEx: ThreadPoolTestº
 import java.util.concurrent.Callable;                                       │import java.util.concurrent.ExecutorService;
 import java.util.concurrent.*;                                              │import java.util.concurrent.Executors;
 ...                                    ┌──→public classºMyCallableThreadº   │...
 int numWorkers = 10;                   │  │    implements Callable˂String˃{ │ExecutorService pool =
 ExecutorService pool =                 │  │  MyCallableThread(int id)       │  Executors.newFixedThreadPool(10);
     Executors.newCachedThreadPool();   │  │  {                              │MyWorker[] workers =
ºMyCallableThread workers[]º =       ───┘  │    ...                          │  new MyWorker[numWorkers];
     new MyCallableThread[numWorkers];     │  }                              │for (int i = 0; i ˂ numWorkers; ++i)
 Future futures[] = new Future[numWorkers];│                                 │  pool.execute(new MyWorker(i+1));
                                           │  public String call() {         │pool.shutdown();
 for (int i = 0; i ˂ numWorkers; ++i) {    │    Thread.sleep(1000);
    workers[i] = new MyCallableThread(i+1);│    return ""+id;
    futures[i] = pool.submit(workers[i]);  │  }
 }                                         │}
 for (int i = 0; i ˂ numWorkers; ++i) {
   try {
       System.out.println(futures[i].get() + " ended");
   } catch (InterruptedException ex) {
   } catch (ExecutionException ex) {
ForkJoinPool (1.7+)
REF: @[http://tutorials.jenkov.com/java-util-concurrent/java-fork-and-join-forkjoinpool.html]
- java.util.concurrent.ForkJoinPool  Scheduler is similar to "ExecutorService"
   but ☞Bº"it makes it easy for tasks to recursively split  work into smaller ones"º

- Building blocks:
  -ºForkº: Task (=="thread") that "Splits itself" into smaller subtasks,
           executing concurrently.
  -ºJoinº: End children tasks (=="threads") and merge results (if any).

                              ┌Task04··· CPU1 ·········┐end
                ┌Task02(fork)º┤º                 ┌────º┴º────┐
                │             └Task05··· CPU2 ···┘end º^º    │
   Task01(fork)º┤º                                    ºjoin ˃├º Task ...CPU1 ...
           ^    │             ┌Task06··· CPU3 ······┐end ºvº │
           │    └Task03(fork)º┤º                    └────º┬º─┘
           │             ^    └Task07··· CPU4 ···········─┘
           │             │                                
           │             │                               
         - There is an overhead in forking and maintaining new threads.
           Forking makes sense only for long-running tasks with 
           intensive use of CPU.
         - Task01, 02, 03 ºwaitº for subtasks to finish execution.

BºCreating new ForkJoinPoolº
  ForkJoinPool BºforkJoinPoolº = new ForkJoinPool(4);
                                   Desired level of parallelism
                                  (desired number of threads/CPUs)

BºSubmitting tasks to the ForkJoinPool schedulerº
  (very similar to how it is done in the ExecutorService). 
   You can submit two types of tasks:
   - QºRecursiveActionº: task not returning any result.
   - GºRecursiveTask  º: task     returning a   result.

import java.util.ArrayList;                       │import java.util.ArrayList;
import java.util.List;                            │import java.util.List;
import java.util.concurrent.RecursiveAction;      │import java.util.concurrent.RecursiveTask;
// QºCreating new RecursiveActionº                │// GºCreating new RecursiveTaskº
public class QºMyRecursiveActionº                 │public class GºMyRecursiveTaskº
extends QºRecursiveActionº {                      │extends GºRecursiveTaskº˂Long˃ {
  private long workLoad = 0;                      │  private long workLoad = 0;
  public MyRecursiveAction(long workLoad) {       │
      this.workLoad = workLoad;                   │  public MyRecursiveTask(long workLoad) {
  }                                               │      this.workLoad = workLoad;
                                                  │  }
  @Override                                       │
  protected void compute() {                      │  protected Long compute() {
    if(this.workLoad ˂ 16) {                      │    if(this.workLoad ˂ 16) {
      // 16 is a ºTunable Threshold parameterº    │      // 16 is a ºTunable Threshold parameterº
      // Do workload in current thread            │      // Process work in current thread
      return;                                     │      return workLoad * 3;
    }                                             │    }
    List˂MyRecursiveAction˃ subtasks =            │    List˂MyRecursiveTask˃ subtasks =              
       Arrays.asList                              │       Arrays.asList
       ( new MyRecursiveAction(this.workLoad / 2),│       ( new MyRecursiveTask(this.workLoad / 2),
         new MyRecursiveAction(this.workLoad / 2) │         new MyRecursiveTask(this.workLoad / 2) 
       );                                         │       );
    for(RecursiveAction subtask : subtasks)       │    for(MyRecursiveTask subtask : subtasks)
      subtaskOº.fork()º;                          │        subtaskOº.forkº();                        
      //       ^^^^^^^                            │        //       ^^^^^^^                       
      //   Oºwork split into subtasks to beº      │        //   Oºwork split into subtasks to beº               
      //   Oºscheduled for executionº             │        //   Oºscheduled for executionº
  }                                               │
                                                  │    long result = 0;
}                                                 │    for(MyRecursiveTask subtask : subtasks) {
                                                  │        result += subtaskOº.joinº(); 
                                                  │    }
                                                  │    return result;
                                                  │  }
 ºUSAGE:º                                         │ºUSAGE:º
QºMyRecursiveActionºmyRecursiveAction =           │GºMyRecursiveTaskºmyRecursiveTask =
     new MyRecursiveAction(24);                   │     new MyRecursiveTask(128);
                                                  │  long mergedResult = 
BºforkJoinPoolºOº.invoke(myRecursiveAction);º     │BºforkJoinPoolºOº.invoke(myRecursiveTask)º;
                                                  │  System.out.println("mergedResult = " + mergedResult);    

-RºForkJoinPool Detractorsº
  - @[http://coopsoft.com/ar/CalamityArticle.html]
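The two-column sketch above is not runnable as-is; a minimal self-contained RecursiveTask, using the classic "sum of a range" example (the names SumTask and THRESHOLD are ours, not from the text above):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums the integers in [from, to] by recursively splitting the range
// until it is below a tunable threshold.
public class SumTask extends RecursiveTask<Long> {
    private static final long THRESHOLD = 16;   // tunable threshold

    private final long from, to;

    public SumTask(long from, long to) { this.from = from; this.to = to; }

    @Override protected Long compute() {
        if (to - from <= THRESHOLD) {           // small enough: compute directly
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        left.fork();                            // schedule left half asynchronously
        SumTask right = new SumTask(mid + 1, to);
        return right.compute() + left.join();   // compute right half here, then merge
    }

    public static void main(String[] args) {
        ForkJoinPool pool = new ForkJoinPool(4);
        System.out.println(pool.invoke(new SumTask(1, 1000)));  // prints 500500
    }
}
```

Computing the right half in the current thread (instead of forking both halves) avoids one scheduling round-trip per split.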
Completable Future
(Java 8+)

          │               ┌·······(RPE Loop)··················┐   │
          │               ↓                                   ·   │
          │   ºInput º  Parse    Create      Parse  Create  → ... │
 │INPUT│  │  →ºThreadº:  Data → │Future│  →  Data →│Future│       │
 │DATA │  │    (RPEL)                                             │
          │                        │                  │           │
          │┌───────────────────────┘                  │           │
          ││ ┌────────────────────────────────────────┘           │
          ││ │ ºI/O   º   External  RºWait    º Handle     |Future| │
          ││ └→ºThreadº:  Request  →RºResponseº→Response → .complete│
          ││                     ↓                                │
          ││   ºI/O   º          ·               ↑                │
          │└──→ºThreadº:         ·               ·                │
                                 ·               ·      
                                 ·               ·      
                                 ·               ·      
                             │REMOTE SYSTEM        │
                             │                     │
- A Future that may be explicitly completed (setting its value and status),
  and may be used as a CompletionStage, supporting dependent functions and
  actions that trigger upon its completion.
- When two or more threads attempt to complete, completeExceptionally, or
  cancel a CompletableFuture, only one of them succeeds.
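The "only one completion succeeds" rule can be observed directly: complete() returns true only for the call that actually transitions the future (a minimal sketch; no threads are needed, the rule holds for any pair of callers):

```java
import java.util.concurrent.CompletableFuture;

public class CompleteRaceDemo {
    public static void main(String[] args) throws Exception {
        CompletableFuture<String> cf = new CompletableFuture<>();
        boolean first  = cf.complete("winner");   // transitions the future → true
        boolean second = cf.complete("loser");    // already completed      → false
        System.out.println(first + " " + second + " " + cf.get());
        // → true false winner
    }
}
```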

ºBarriers (OºallOfº)º:
  CompletableFuture˂Void˃[] future_list
    = new CompletableFuture[list.size()];
  int idx = 0;
  log.info("Connecting plugins ...");
  for (Object el : list) {
    final CompletableFuture˂Void˃
      connectFuture = new CompletableFuture˂˃();
    future_list[idx++] = connectFuture;
    startConnectAsync(el, connectFuture); // ← (illustrative) the async method must,
  }                                       //   at some moment, call connectFuture.complete()
  return CompletableFuture.OºallOfº(future_list);

Extracted from:
In asynchronous computation, actions are represented as callbacks, and
errors may occur (and must be handled) at any step.

- Future: (Java 5+)
  -Represent an asynchronous computation
- CompletableFuture: (Java 8+)
  - Extends Future with methods to combine and handle errors
  - Extends the CompletionStage interface
    - Contract for an asynchronous computation step that
      can be combined with other steps.
  - About 50 different methods for composing, combining, executing 
    async computation

Using CompletableFuture as a Simple Future (no-arg constructor)

In the example below we have a method that creates a 
CompletableFuture instance, then spins off some computation in 
another thread and returns the Future immediately.

  1  public Future˂String˃ calculateAsync() throws InterruptedException {
  2      CompletableFuture˂String˃ completableFuture = new CompletableFuture˂˃();
  4      Executors.newCachedThreadPool().submit(() -˃ {
  5          Thread.sleep(500);
  6          completableFuture.complete("Hello");
  7          return null;
  8      });
  10     return completableFuture;
  11 }
  Line 2: Alternatively, when the result of the computation is already known:
          Future˂String˃ result = CompletableFuture.completedFuture("Hello");

  Line 6: Alternatively completableFuture.cancel(false);
  Line 5: any other mechanism can be used to compute

  1 Future completableFuture = calculateAsync();
  3 // ...
  5 String result = completableFuture.get();
  6 assertEquals("Hello", result);

Line 5: get() blocks until .complete("...") is called in other thread
Line 5: get() can raise
           ExecutionException: error during computation
           InterruptedException: thread executing method interrupted

4. CompletableFuture with Encapsulated Computation Logic
   (runAsync -˂˂Runnable˃˃-, supplyAsync -˂˂Supplier˃˃-)

˂˂Supplier˃˃: generic functional interface with single method 
              (zero arguments, returns value)

  1 CompletableFuture˂Void˃ future
  2   = CompletableFuture.supplyAsync(/*supplier lambda*/ () -˃ "Hello")
  3 .thenApply(/* "processor" lambda */ s -˃ s + " World") // ← returns CompletableFuture˂String˃
  4 .thenAccept(/*consumer lambda */                       // ← returns CompletableFuture˂Void˃
  5    s -˃ System.out.println("Computation returned: " + s));
    Line 4: Alternatively (ignore result):
    .thenRun(/*Runnable lambda*/ () -˃ System.out.println("Computation finished."));

5. Combining Futures (monadic design pattern in functional languages)

  1 CompletableFuture˂String˃ completableFuture
  2   = CompletableFuture.supplyAsync(() -˃ "Hello")
  3     .thenCompose(
  4           s -˃ CompletableFuture.supplyAsync(() -˃ s + " World"));
  5 assertEquals("Hello World", completableFuture.get());

6. Execute two independent Futures and do something with their results

  1 CompletableFuture˂String˃ future
  2   = CompletableFuture.supplyAsync(() -˃ "Hello")
  3     .thenCombine(CompletableFuture.supplyAsync(
  4       () -˃ " World"), (s1, s2) -˃ s1 + s2);
  6 assertEquals("Hello World", future.get());

(Simpler case: nothing to do with the resulting value)
  2   = CompletableFuture.supplyAsync(() -˃ "Hello")
  3   .thenAcceptBoth(CompletableFuture.supplyAsync(
  4      () -˃ " World"), (s1, s2) -˃ log(s1 + s2));

7. Running Multiple Futures in Parallel:
   -  wait for all to execute and process combined results

  1  CompletableFuture˂String˃ future1
  2    = CompletableFuture.supplyAsync(() -˃ "Hello");
  3  CompletableFuture˂String˃ future2
  4    = CompletableFuture.supplyAsync(() -˃ "Beautiful");
  5  CompletableFuture˂String˃ future3
  6    = CompletableFuture.supplyAsync(() -˃ "World");
  8  CompletableFuture˂Void˃ combinedFuture
  9    = CompletableFuture.allOf(future1, future2, future3);
  11 // ...
  13 combinedFuture.get();
  15 String combined = Stream.of(future1, future2, future3)
  16   .map(CompletableFuture::join)
  17   .collect(Collectors.joining(" "));
  18 assertEquals("Hello Beautiful World", combined);

    Line 16: join() is similar to get(), but throws an unchecked exception
    if the Future does not complete normally.

8. Handling Errors

   Instead of catching an exception in a syntactic block, the 
CompletableFuture class allows you to handle it in a special handle 
method. This method receives two parameters: a result of a 
computation (if it finished successfully) and the exception thrown 
(if some computation step did not complete normally).

Capture async exception:

  1  CompletableFuture˂String˃ completableFuture
  2    =  CompletableFuture.supplyAsync(() -˃ {
  3        ... if(errorDetected)
  4               throw new RuntimeException("Computation error!");
  6        return "Hello ";
  7    }).handle((s, t) -˃ s != null ? s : "Hello, Stranger!");
  9  assertEquals("Hello, Stranger!", completableFuture.get());
  1  completableFuture.completeExceptionally(
  2    new RuntimeException("Calculation failed!"));
  3  ...
  4  completableFuture.get(); // ExecutionException
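Besides handle(), exceptionally() is a slimmer alternative that only runs on failure and maps the Throwable to a fallback value; a small runnable sketch (the compute(boolean) wrapper is ours, mirroring the "Computation error!" example above):

```java
import java.util.concurrent.CompletableFuture;

public class ExceptionallyDemo {
    static String compute(boolean fail) throws Exception {
        return CompletableFuture.<String>supplyAsync(() -> {
                if (fail) throw new RuntimeException("Computation error!");
                return "Hello";
            })
            // exceptionally() runs only if the previous stage failed;
            // on success the result is passed through unchanged.
            .exceptionally(t -> "Hello, Stranger!")
            .get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute(false));  // → Hello
        System.out.println(compute(true));   // → Hello, Stranger!
    }
}
```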

9. Async Methods
 - The methods without the Async postfix run the next execution stage
   using the calling thread.

 - The Async method without the Executor argument runs a step using 
   the common fork/join pool implementation of Executor that is accessed 
   with the ForkJoinPool.commonPool() method.

 - The Async method with an Executor argument runs a step using the 
   passed Executor.

 Ex.: process result of computation with a Function instance
  1 CompletableFuture completableFuture
  2   = CompletableFuture.supplyAsync(() -˃ "Hello");
  4 CompletableFuture future = completableFuture
  5   .thenApplyAsync(s -˃ s + " World");
  7 assertEquals("Hello World", future.get());

    Line 5: under the hood the application of the function is wrapped
    into a ForkJoinTask instance (for more information on the fork/join
    framework, see the article "Guide to the Fork/Join Framework in
    Java"). This allows you to parallelize the computation further and
    use system resources more efficiently.
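The third variant (Async method with an Executor argument) can be verified by giving the executor's threads a recognizable name (the class and thread names here are ours, for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncExecutorDemo {
    static String run() throws Exception {
        // Named threads make it easy to see which pool ran the stage.
        ExecutorService myPool = Executors.newFixedThreadPool(
                2, r -> new Thread(r, "my-pool-thread"));
        try {
            return CompletableFuture
                .supplyAsync(() -> "Hello")              // runs on ForkJoinPool.commonPool()
                .thenApplyAsync(
                    s -> s + " from " + Thread.currentThread().getName(),
                    myPool)                              // this stage runs on myPool
                .get();
        } finally {
            myPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());   // → Hello from my-pool-thread
    }
}
```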
Guava ListenableFuture
- Concurrency is a hard problem, but it is significantly simplified by
  working with powerful and simple abstractions. To simplify matters,
  Guava extends the Future interface of the JDK with ListenableFuture.

- """We strongly advise that you always use ListenableFuture instead
  of Future in all of your code, because:
  - Most methods in Guava's Futures utility class require it.
  - It's easier than changing to ListenableFuture later.
  - Providers of utility methods won't need to provide Future and ListenableFuture
      variants of their methods.

ListenableFuture vs CompletableFuture
          ListenableFuture                           │               CompletableFuture
                                                     │ It is different from ListenableFuture in that it
                                                     │ can be completed from any thread that wants it to complete
ListenableFuture listenable = service.submit(...);   │ CompletableFuture completableFuture =
  Futures.addCallback(listenable,                    │     new CompletableFuture();
                      new FutureCallback˂Object˃() { │ completableFuture.whenComplete(new BiConsumer() {
    @Override                                        │   @Override
    public void onSuccess(Object o) {                │   public void accept(Object o, Object o2) {
        //handle on success                          │       //handle complete
    }                                                │   }
                                                     │ }); // complete the task
    @Override                                        │ completableFuture.complete(new Object())
    public void onFailure(Throwable throwable) {     │
       //handle on failure                           │ When a thread calls complete on the task,
    }                                                │ the value received from a call to get() is
  })                                                 │ set with the parameter value if the task is
                                                     │ not already completed.

  ..."CompletableFuture is dangerous because it exposes ºcompeteº 
  ..."CompletableFuture would have been good if it extended Future 
     and did not expore toCompletableFuture,... and they could have named 
     it something meaningful like ChainableFuture "
DragonWell JDK with Coroutine Support
REF: @[https://www.infoq.com/news/2021/01/adoptopenjdk-welcomes-dragonwell/]
- AdoptOpenJDK and Alibaba announced that the Dragonwell JDK will be 
  built, tested, and distributed using AdoptOpenJDK's infrastructure.
  ...  Another interesting feature is the Wisp2 coroutine support.
     BºWisp2 maps Java threads to coroutines instead of kernel-level threads:º
       many coroutines can be multiplexed onto a small number of kernel threads,
       reducing scheduling overhead.
     Wisp2 engine is similar in some respects to the aims of Project Loom
     but (unlike Loom) Bºit works out of the box on existing code by enablingº
   Bºit with these Java arguments:º
     $ java -XX:+UnlockExperimentalVMOptions -XX:+UseWisp2
    I/O-intensive applications, where tasks block on events and are
  then rescheduled, can benefit from the coroutine support. On the
  other hand, RºCPU-intensive applications will probably not benefit from it.º

Loom Project: Lightweight threads

  - make concurrency simple(r) again!

- Threads, provided by Java from its first day, are a convenient concurrency
Rºabstraction (putting aside the separate question of communication amongº
Rºthreads) which is being supplanted by less convenient abstractions becauseº
Rºtheir current implementation as OS kernel threads is insufficient forº
Rºmeeting modern demands, and wasteful in computing resources that areº
Rºparticularly valuable in the cloud.º

- Project Loom will introduce BºFIBERS:
  - lightweight, JVM managed, efficient threadsº,
  - A fiber is composed of:
    -Gº1 schedulerº   : already in place in the JVM (the ForkJoinPool work-stealing scheduler).
    -Rº1 continuationº: to be implemented in Loom.

  The overhead of fibers is higher but still very low even when 
  compared to async and monadic APIs, which have the disadvantage of 
  introducing a cumbersome, infectious programming style and don’t 
  interoperate with imperative control flow constructs built into the
  language.

  So aren't fibers generators or async/awaits?
  No, as we have seen fibers are real threads: namely a continuation 
  plus a scheduler. Generators and async/awaits are implemented with 
  continuations (often a more limited form of continuation called 
  stackless, which can only capture a single stack frame), but those 
  continuations don’t have a scheduler, and are therefore not threads.


Ron Pressler discusses and compares the various techniques of dealing with concurrency
and IO in both:
- pure functional (monads, affine types)
- imperative      (threads, continuations, monads, async/await)
and shows why delimited continuations are a great fit for the imperative style.

Ron Pressler is the technical lead for Project Loom, which aims to add delimited
continuations, fibers and tail-calls to the JVM

Java Fibers (Quasar)
- fast threads for Java and Kotlin

NOTE: To be superseded by Prj. Loom?
Extracted from @[https://github.com/puniverse/quasar/issues/305]
My understanding is that Ron is currently busy working for/with Oracle on
project Loom which should bring "native" Fiber/lightweight continuation
support directly into JVM without the need of auxiliary library like Quasar.
fast Inter-thread communication
- The story begins with a simple idea: create a developer-friendly,
  simple and lightweight inter-thread communication framework without
  using any locks, synchronizers, semaphores, waits or notifies; and no
  queues, messages, events or any other concurrency-specific constructs.
  Just get POJOs communicating behind plain old Java interfaces.
External Links
Spring DON'Ts!!!
- If an interface has a single implementation and is going
  to be instantiated just once in a single line of code,
  do NOT use Spring dependency injection.
  - All static compile-time safety measures are lost, traded
    for dangerous runtime checks.
  - Use injection only when you intend to allow complex,
    interchangeable implementations, or when Spring Boot genuinely
    simplifies the code; never when the code gets more complex.
  - Ex: a utility class with static methods is preferable to
    an injected Spring bean providing the same functionality.
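For instance, a plain static utility keeps everything checked at compile time: the call site cannot fail at runtime because of container wiring (PriceUtils and its method are a hypothetical example, ours):

```java
// Plain static utility: the call is verified by the compiler;
// no container, no runtime bean lookup can fail.
public final class PriceUtils {
    private PriceUtils() {}   // no instances, no injection needed

    // Hypothetical example: add VAT to a net price expressed in cents.
    public static long addVatCents(long netCents, int vatPercent) {
        return netCents + netCents * vatPercent / 100;
    }
}
```

Usage: `PriceUtils.addVatCents(1000, 21)` yields 1210, with no Spring context involved.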

Annotations Quick Sheet
@[https://www.javagists.com/spring-boot-cheatsheet]  ← TODO: Testing annotations,


    ANNOTATION      DESCRIPTION                                  LEVEL
                  |                                            |C|F|C|M|P
                  |                                            |L|I|O|E|A
                  |                                            |A|E|N|T|R
                  |                                            |S|L|S|H|A
                  |                                            |S|D|T|O|M
                  |                                            | | |R|D|S
                  |                                            | | |U| |
                  |                                            | | |C| |
   º@Autowired    º| "autowired by type", used to inject object |  x x x
                   | dependency implicitly .                    |
                   | - No need to be public.                    |
   º@Configurable º|inject properties of domain objects.        |x
                   |Types whose properties are injected without |
                   |being instantiated by Spring                |
   º@Qualifier    º| used to create more than one bean of the   |  x x x
                   | same type and wire only one of the types   |
                   | with a property, providing greater control |
                   | on the dependency injection process.       |
                   | - can be used with @Autowired annotation.  |
   º@Required     º|mark mandatory class members.               |  x x x
   º@ComponentScanº|Trigger scanning of package for the         |x
                    |@Configuration classes.                     |
   º@Configurationº|ºused on classes that define beans.º        |x
   º@Bean         º|tag a method ºbean producerº which will be  |      x
                    |managed by the Spring container.            |
   º@Lazy         º| Init bean/component on demand              |x     x
   º@Value        º|used to inject values into a bean's         |  x x x
                   |attribute from a property file, indicating  |
                   |a default value expression.                 |
   º@Import       º|                                            |
   º@DependsOn    º|                                            |

BºSPRING FRAMEWORK ANNOTATIONSº
    ANNOTATION          DESCRIPTION
   ------------------------------------------------------
   º@SpringBootApplicationº
                      |tag for the main class of a Spring Boot
                      |project. The tagged class must be in the
                      |base package, triggering the scan of
                      |sub-packages.
   ------------------------------------------------------
   º@EnableAutoConfigurationº
   ------------------------------------------------------
   º@Controllerº      |Allows automatic detection of component
                      |classes in the class path and registers
                      |bean definitions for the classes.
   ------------------------------------------------------
   º@RestControllerº  |tag a controller as RESTful (behaviour):
                      |handler methods behave as resources.
   ------------------------------------------------------
   º@ResponseBodyº    |automatically convert the returned object
                      |into the response body.
   ------------------------------------------------------
   º@RequestMappingº  |map request URIs to handler class/method.
   ------------------------------------------------------
   º@RequestParamº    |bind req. param to method param in controller.
   ------------------------------------------------------
   º@PathVariableº    |bind placeholder from URI to method param.
IoC Summary
  - org.springframework.beans
    - @[https://docs.spring.io/spring/docs/5.0.0.RELEASE/spring-framework-reference/core.html#beans-definition]
    - Objects managed by Spring IoC
    - created with the configuration metadata.
    - Represented as ºBeanDefinition objectsº containing:
      - essentially "a recipe for creating one or more objects".
      - package-qualified class name: typically the actual implementation class.
      - behavioral configuration elements: scope, lifecycle callbacks,...)
      - References to other dependencies (or "collaborators")
      - Custom settings (setters).

    BºBest Patternsº
    - Bean metadata needs to be registered as early as possible.
    RºWARN:º registration of new beans at runtime (live access to
      factory) is not officially supported and may lead to concurrent
      access exceptions  and/or inconsistent state in the bean container.

  - org.springframework.beans.factory.BeanFactory (Interface)
    - provides advanced config.mechanism for "any" type of object.
    └ org.springframework.context.ApplicationContext (Interface)
      - extends BeanFactory with "Enterprise Features"
      - represents the IoC container
      - easier integration with Spring's AOP features
      - message resource handling (for use in i18n)
      - event publication
      - application-layer specific contexts such as the WebApplicationContext
      └ ClassPathXmlApplicationContext
      · ºApplicationContextº context =
      ·      new ClassPathXmlApplicationContext ( // Alt 1:
      ·       "services.xml", "daos.xml");
      ·       ^^^^^^^^^^^^^^^^^^^^^^^^^^
      └ FileSystemApplicationContext
      └ ...

  MyBeanClass myBean = context
         º.getBeanº("idBeanDef", beanClass.class);

BºSpring History:º
  Spring 1.0+ → Spring 2.5+      → Spring 3.0+
  XML           Annotation-based   Java-based config

BºBean Metadataº
  -ºpackage-qualified class nameº
  -ºname º: (unique) "id" or (aliased) "name" in xml.
  -ºscopeº:
    -ºsingletonº: one shared instance per Spring IoC container (default)
    -ºprototypeº: scopes a single bean definition to any number of object instances.
    - In a web-aware ApplicationContext, the following additional scopes are available:
      - ºrequest    º: single bean for lifecycle of HTTP request
      - ºsession    º: single bean for lifecycle of HTTP Session.
      - ºapplicationº: Single bean for lifecycle of ServletContext.
      - ºwebsocketº  : single bean for lifecycle of WebSocket.
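Scope is selected per bean definition; an XML sketch consistent with the examples below (bean ids and classes are hypothetical):

```xml
˂!-- one shared instance per container (default) --˃
˂bean id="svc"  class="x.y.Service1" scope="singleton"/˃

˂!-- a new instance each time the bean is requested --˃
˂bean id="task" class="x.y.Task1"    scope="prototype"/˃

˂!-- web-aware contexts only: one instance per HTTP request --˃
˂bean id="form" class="x.y.Form1"    scope="request"/˃
```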

  -ºconstructor argsº: (Preferred over properties -setters-):
       ˂bean id="id01" class="x.y.Class01"/˃
       ˂bean id="id02" class="x.y.Class02"/˃

       ˂bean id="instance03" class="x.y.Class03"˃
         ˂constructor-arg ref="id01"/˃                   ← by bean reference
         ˂constructor-arg type="int" value="3320"/˃      ← by type
         ˂constructor-arg name="year" value="2020"/˃     ← by param name
         ˂constructor-arg index="4" value="Hello World"/˃← by param index

         Note:- Bº˂idref˃ is preferred to a property with a value attribute (fails faster)º
              - bean ºdepends-onº attribute can force initialization (and destruction) order

    - Let Spring resolve dependencies("collaborators") of a bean
      by inspecting the contents of the ApplicationContext.
      ("ref") autowire values:
      - no     : ref used, not recommended for complex     configs
      - byName : IoC looks for bean with matching name
      - byType : (in setter or constructor args)
                 autowired if exactly one bean of the property type exists in container.
                 throws error if more than one found
               RºWARNº: set to null if zero found.

      ☞ Note: "default-autowire-candidates" attribute in beans tag can limit autowire
              candidate globally with a CSV list of candidates: (*Repository,*Security,*Logging)

  -ºlazy-initº : false: force resolution and instantiation at startup. (default, recommended)
                 true : use for "big objects" to save memory.

- While weird and not recommended, objects created outside the container
  can be registered like:
  ConfigurableListableBeanFactory bf =
      ((ConfigurableApplicationContext) context).getBeanFactory();
  bf.registerSingleton("myExternalBean", myExternalObject);  // (illustrative)

BºMethod injectionº
  - Suppose singleton A needs to use ºnon-singletonº bean B
   ºon each method invocation on Aº

  - Alternative A: (RºDiscouraged, tied to Spring internalsº)
    bean A implements ˂˂ApplicationContextAware˃˃ and calls
    getBean("B") on the container, requesting a
    (typically new) bean B instance on each use.

  - Alternative B: Method Injection (˂lookup-method˃ / @Lookup)
    The container overrides a lookup method of managed bean A with a
    runtime-generated (CGLIB) subclass that returns a fresh B. Caveats:
    - the class and the looked-up method cannot be final.
    - lookup methods won't work with factory methods, and in particular
      not with @Bean methods in configuration classes, since the
      container is not in charge of creating the instance in that case
      and therefore cannot create a runtime-generated subclass on the fly.
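The underlying problem ("a singleton needs a fresh prototype on every call") can be sketched without Spring by injecting a factory; java.util.function.Supplier plays the role that @Lookup or ObjectProvider play inside the container (all class names here are ours, for illustration):

```java
import java.util.function.Supplier;

public class LookupDemo {
    // Prototype-like bean: we want a fresh instance per use.
    static class Command {
        static int created = 0;               // instance counter, for the demo
        Command() { created++; }
    }

    // Singleton-like bean A: holds a factory instead of one fixed B instance.
    static class Processor {
        private final Supplier<Command> commandFactory;
        Processor(Supplier<Command> commandFactory) {
            this.commandFactory = commandFactory;
        }
        void process() {
            Command c = commandFactory.get(); // fresh Command per invocation
            // ... use c ...
        }
    }

    public static void main(String[] args) {
        Processor p = new Processor(Command::new);
        p.process();
        p.process();
        System.out.println(Command.created);  // → 2
    }
}
```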
  - Null-safety annotations (org.springframework.lang package):
    - º@NonNullº       ← forces param|return value|field to be NON-null
    - º@Nullableº      ← allows param|return value|field to be     null
    - º@NonNullApiº    ← forces param|return value       to be NON-null at package level
    - º@NonNullFieldsº ← forces                    field to be NON-null at package level

  - Null and ºempty stringº values rules
    - empty arguments for properties,... convert to "" empty String.
    - ˂null/˃ element handles null values. Ex
      ˂property name="email"˃ ˂null/˃ ˂/property˃ ← email = null
      ˂property name="email"˃         ˂/property˃ ← email = ""

Spring+JPA+JWT summary
BºSwagger (OpenAPI) º [config] {{
- file: com/myComp/openApi/SwaggerConfig.java  {{
    package com.myComp.swagger;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    import springfox.documentation.builders.ApiInfoBuilder;
    import springfox.documentation.builders.PathSelectors;
    import springfox.documentation.builders.RequestHandlerSelectors;
    import springfox.documentation.service.ApiInfo;
    import springfox.documentation.service.ApiKey;
    import springfox.documentation.service.Contact;
    import springfox.documentation.spi.DocumentationType;
    import springfox.documentation.spring.web.plugins.Docket;
    import springfox.documentation.swagger2.annotations.EnableSwagger2;
  Bº@Configurationº                                                           ← spring core: mark class as having
                                                                                @Bean def. methods so that Spring
                                                                                container process it generating 
                                                                                app. Beans.
    public class SwaggerConfig {
      public Docket api() {
        return new Docket(DocumentationType.SWAGGER_2)
      private ApiInfo apiEndPointsInfo() {
        return new ApiInfoBuilder()
          .contact(new Contact(SWAGGER_CONTACT_NAME, SWAGGER_CONTACT_URL, "dev@myComp.com"))
      private ApiKey apiKey() {
        return new ApiKey(AUTHKEY, AUTHORIZATION, HEADER);

BºJPA Configº

TODO: https://www.logicbig.com/tutorials/spring-framework/spring-data/specifications.html

- file: com/myComp/jpa/LocalDateTimeAttributeConverter.java    [persistence][jpa]
  package com.myComp.jpa;
  import java.sql.Timestamp;                                   // ← SQL         friendly type
  import java.time.LocalDateTime;                              // ← Application friendly type
  import javax.persistence.AttributeConverter;
  import javax.persistence.Converter;
  @Converter(autoApply = true)
  public class LocalDateTimeAttributeConverter                 // ← Fix impedance problems DDBB / App types
  implements AttributeConverter˂LocalDateTime, Timestamp˃ {
      @Override
      public Timestamp convertToDatabaseColumn(                //   Java type → DDBB column
        LocalDateTime locDateTime) {
          return Timestamp.valueOf(locDateTime);
      }
      @Override
      public LocalDateTime convertToEntityAttribute(           //   Java type ← DDBB column
        Timestamp sqlTimestamp) {
          return sqlTimestamp.toLocalDateTime();
      }
  }
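The converter leans on the JDK's built-in bridge between java.sql.Timestamp and java.time.LocalDateTime; a quick round-trip check of that assumption:

```java
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class ConverterRoundTrip {
    public static void main(String[] args) {
        LocalDateTime appValue = LocalDateTime.of(2020, 5, 17, 12, 30, 45);
        Timestamp dbValue = Timestamp.valueOf(appValue);  // Java type → DDBB column
        LocalDateTime back = dbValue.toLocalDateTime();   // DDBB column → Java type
        System.out.println(appValue.equals(back));        // → true
    }
}
```

Note that Timestamp.valueOf() interprets the LocalDateTime in the JVM's default time zone, which is exactly why storing and reading must happen under the same zone assumptions.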

- file: com/myComp/jpa/Entity1Repository.java                    [persistence][jpa]
  import org.springframework.data.jpa.repository.JpaRepository;
  import org.springframework.data.jpa.repository.Query;
  import org.springframework.data.repository.query.Param;
  import org.springframework.stereotype.Repository;
  public interface Entity1Repository extends JpaRepository˂Entity1, Long˃ {
    @Query(value =
          " SELECT en1.* "
        + " FROM entity1 en1 "
        +   " JOIN entity2 en2 ON en2.id = en1.entity2_id "
        +   " JOIN entity3 en3 ON en3.id = en2.entity3_id "
        + " WHERE en2.column2 = :col2Value "
        + " AND en3.id = :entity3Id ", nativeQuery = true)
    Entity1 query1(@Param("entity3Id") Long entity3Id,
                   @Param("col2Value") String col2Value);

  @Query(value =
        " SELECT en2.* "
      + " FROM entity2 en2 "
      + " JOIN entity3 en3 ON en2.entity3_id = en3.id "
      + " WHERE en3.col5 = :column5Value", nativeQuery = true)
  public List˂Entity2˃ linkOrganization(@Param("column5Value") String column5Value);


- file: com/myComp/jpa/Entity1.java [persistence][jpa]
  import java.io.Serializable;
  import javax.persistence.*;
  import com.fasterxml.jackson.annotation.JsonIgnore;
  @Entity
  @Table(name = "entity1")
  public class Entity1 implements Serializable {
    private static final long serialVersionUID = 1L;
    public Entity1() { }                                 // ← Empty constructor required by JPA
    // ---- id ----
    @JsonIgnore                                          // ← Class can also be used for Json
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name="id", length=20)
    private Long id;
    public void setId(Long id) { this.id = id; }
    public Long getId() { return id; }

    // -------- column2 --------
    @Column(name = "column2")
    private LocalDateTime column2;
    // -------- entity2 --------
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(
      name = "entity2_id",
      referencedColumnName = "id",
      insertable = false,
      updatable = false)
    private Entity2 entity2;

    // -------- entity3 --------
    @OneToMany
    @JoinColumn(
      name = "entity3_id",
      referencedColumnName = "id")
    private List˂Entity3˃ entity3List;
    public List˂Entity3˃ getEntity3List() { return entity3List; }
    public void setEntity3List(List˂Entity3˃ _entity3List) {
      this.entity3List = _entity3List;
    }
    public void addEntity3(Entity3 _entity3) {
      getEntity3List().add(_entity3);       // ← WARN: getter forces load from DDBB (vs entity3List)
    }
    public void removeEntity3(Entity3 _entity3) {
      getEntity3List().remove(_entity3);    // ← WARN: getter forces load from DDBB (vs entity3List)
    }

    // -------- entity4 --------
    @OneToMany(
      fetch = FetchType.LAZY,
      cascade = CascadeType.ALL)                    // ← ALL | PERSIST | ...?
    private List˂Entity4˃ entity4List;

    // -------- entity5 --------
    @Formula(value = 
          "(SELECT COUNT(1) "
        + "FROM entity5 en5 "
        +   "JOIN entity1 en1 ON en1.id = en5.entity1_id "
        + "WHERE en1.id = id AND en5.bCondition = 1)")   // ← outer "id" refers to this row's id
    private Long entity5Count;                           // ← @Formula fields are read-only: no setter needed
    public Long getEntity5Count() { return entity5Count; }

    @Override public int hashCode() {
      return Objects.hash(column2, column3, ...);
    }
    @Override public boolean equals(Object obj) {
      if (this == obj) return true;
      if (!(obj instanceof Entity1)) return false;
      Entity1 that = (Entity1) obj;
      return Objects.equals(column1, that.column1)
          && Objects.equals(column2, that.column2)
          && ... ;
    }
  }
- file: com/myComp/jpa/Entity2.java                   file: com/myComp/jpa/Entity3.java   [persistence]
     @Entity                                        │ @Entity
     @Table(name = "entity2")                       │ @Table(name = "entity3")
     public class Entity2 implements Serializable { │ public class Entity3 implements Serializable {
       private static final                         │   private static final long serialVersionUID = 1L;
          long serialVersionUID = 1L;               │ 
                                                    │   @Id
       @Id                                          │   @GeneratedValue(
       @GeneratedValue(                             │     strategy = GenerationType.IDENTITY)
         strategy = GenerationType.IDENTITY)        │   private Long id;
       private Long id;                             │ 
                                                    │   @Column(name = "min")
       ...                                          │   private double min;
     }                                              │ 
                                                    │   @Column(name = "min_time")
                                                    │   private int minTime;
                                                    │   @JsonIgnore
                                                    │   @ManyToOne(fetch = FetchType.LAZY)
                                                    │   @JoinColumn(
                                                    │     name = "entity1_id",
                                                    │     referencedColumnName = "id",
                                                    │     insertable = false,
                                                    │     updatable = false)
                                                    │   private Entity1 entity1;
                                                    │   public Entity3() { }
                                                    │   ...
                                                    │  }
BºJWT (OAuth2) Support:º {{{                                   [aaa], [oauth], [cryptography]
  public class OAuth2Const {
      static final String
          HEADER_AUTH_KEY     = "Authorization",
          TOKEN_BEARER_PREFIX = "Bearer ",
          AUTHKEY             = "authkey",
          AUTHORIZATION       = "Authorization",
          HEADER              = "header",
          BEARER              = "Bearer ",
          LOGIN_URL           = "/api/v1/user/login";

      static final long
          MILISECS_TOKEN_EXPIRATION = 60*60*4*1000;          // ← 4 hours in milliseconds
  }
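Side note on the constant above: 60*60*4*1000 is 4 hours expressed in milliseconds. java.util.concurrent.TimeUnit (plain JDK) states that intent directly; a minimal self-contained sketch (class name is illustrative):

```java
import java.util.concurrent.TimeUnit;

public class TokenExpirationDemo {
    // Same value as in OAuth2Const: 60 s * 60 min * 4 h * 1000 ms/s
    static final long MILISECS_TOKEN_EXPIRATION = 60 * 60 * 4 * 1000;

    // Self-documenting alternative to the hand-written arithmetic
    static long fourHoursInMillis() {
        return TimeUnit.HOURS.toMillis(4);
    }

    public static void main(String[] args) {
        System.out.println(fourHoursInMillis() == MILISECS_TOKEN_EXPIRATION); // → true
    }
}
```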

- com/myComp/security/JWTAuthorizationFilter.java

 ºimport javax.servlet.FilterChain;º
  import javax.servlet.ServletException;
  import javax.servlet.http.HttpServletRequest;
  import javax.servlet.http.HttpServletResponse;
  import javax.xml.bind.DatatypeConverter;
  import org.springframework.security.authentication.AuthenticationManager;
  import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
  import org.springframework.security.core.context.SecurityContextHolder;
  import org.springframework.security.web.authentication.www.BasicAuthenticationFilter;
  import io.jsonwebtoken.Jwts;
  public class JWTAuthorizationFilter 
  extends BasicAuthenticationFilter {                                      ← Implemented as Filter
    public JWTAuthorizationFilter(AuthenticationManager authManager) {
      super(authManager);
    }
    @Override protected void doFilterInternal(
        HttpServletRequest req, HttpServletResponse res, FilterChain chain)
        throws IOException, ServletException {
      String header = req.getHeader(HEADER_AUTH_KEY);
      if (header == null || !header.startsWith(TOKEN_BEARER_PREFIX)) {
        chain.doFilter(req, res);
        return;                                                            ← Skip auth, let next Filters decide
      }
      final UsernamePasswordAuthenticationToken 
          authentication = _getAuth(req);
      SecurityContextHolder.getContext().setAuthentication(authentication);
      chain.doFilter(req, res);
    }
    private UsernamePasswordAuthenticationToken
        _getAuth(HttpServletRequest request) {
      final String token = request.getHeader(HEADER_AUTH_KEY);
      if (token == null) { return null; }                                 ← Do not throw to allow next Filters
      String user = Jwts.parser()
            .setSigningKey(DatatypeConverter
              .parseBase64Binary("32bytes/64hex dig.secret key"))
            .parseClaimsJws(token.replace(TOKEN_BEARER_PREFIX, ""))
            .getBody().getSubject();
      if (user == null) { return null; }                                  ← Do not throw to allow next Filters
      return new UsernamePasswordAuthenticationToken
           (user, null, new ArrayList˂˃());
    }
  }

- file: com/myComp/security/WebSecurity.java
  package com.myComp.security;
  import org.springframework.beans.factory.annotation.Autowired;
  import org.springframework.context.annotation.Bean;
  import org.springframework.context.annotation.Configuration;
  import org.springframework.http.HttpMethod;
  import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
  import org.springframework.security.config.annotation.web.builders.HttpSecurity;
  import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
  import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
  import org.springframework.security.config.http.SessionCreationPolicy;
  import org.springframework.security.core.userdetails.UserDetailsService;
  import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
  import org.springframework.web.cors.CorsConfiguration;
  import org.springframework.web.cors.CorsConfigurationSource;
  import org.springframework.web.cors.UrlBasedCorsConfigurationSource;
  @EnableWebSecurity
  public class WebSecurity extends WebSecurityConfigurerAdapter {         ← AAA
    @Autowired private 
    UserDetailsService userDetailsService;
    @Bean
    public BCryptPasswordEncoder bCryptPasswordEncoder() {
      return new BCryptPasswordEncoder();
    }
    @Override protected void
      configure(HttpSecurity httpSecurity) throws Exception {
      if (true) { // example conf. 1
        httpSecurity.cors().and().csrf().disable()
          .authorizeRequests()
          .antMatchers(HttpMethod.POST, LOGIN_URL).permitAll()
          .antMatchers(HttpMethod.POST, CREATE_URL).permitAll()
          .anyRequest().authenticated()
          .and().addFilter(
              new JWTAuthorizationFilter(
                authenticationManager()) );
      }
      if (false) { // example conf. 2
        httpSecurity.authorizeRequests()
              .antMatchers(HttpMethod.GET   ,"/api/v1/service1").permitAll()
              .antMatchers(HttpMethod.POST  ,"/api/v1/service1").hasRole("ADMIN")
              .antMatchers(HttpMethod.PUT   ,"/api/v1/service1").hasRole("ADMIN");
      }
    }
    @Override public void
    configure(AuthenticationManagerBuilder auth) throws Exception {
      auth.userDetailsService(userDetailsService)
         .passwordEncoder(bCryptPasswordEncoder());                       ← Algorithm used for passwords
    }
    @Bean
    CorsConfigurationSource corsConfigurationSource() {                   ← Cross-Origin Resource Sharing
      final UrlBasedCorsConfigurationSource                                 (CORS) SETUP
        source = new UrlBasedCorsConfigurationSource();
      source.registerCorsConfiguration(
            "/**",                                                        ← Any source
             new CorsConfiguration()
                  .applyPermitDefaultValues() );
      return source;
    }
  }


BºAuthentication, Authorization and Accounting (AAA)º
- file: com/myComp/security/AAAService.java                               [aaa][oauth]
  public interface AAAService {
    String getUserByToken(String token);
    String createTokenForUsername(String userName);

    Boolean checkLoginOrThrow(String username, String password);
    AAAUserEntity findByUsernameOrThrow(String username);
  }

- file: com/myComp/security/AAAServiceImpl.java
  public class AAAServiceImpl implements AAAService {
    private Logger logger = LoggerFactory.getLogger(this.getClass());
    @Autowired private AAAUserRepository userRepository;
    private String someConfigParam;                               // ← Some config param injected by Spring [configuration]
    public String getUserByToken(String token) {                  // ← Used by different controllers to
      return Jwts.parser()                                        //    fetch user from Header token
            .setSigningKey(DatatypeConverter.parseBase64Binary(
              "32bytes/64hex dig.secret key"))
            .parseClaimsJws(
              token.replace(TOKEN_BEARER_PREFIX, ""))
            .getBody().getSubject();
    }
    public String createTokenForUsername(String userName) {       // ← Used to create User session JWT token
      final SignatureAlgorithm                                    //   upon successful login
         signatureAlgorithm = SignatureAlgorithm.HS256;
      final byte[] apiKeySecretBytes = DatatypeConverter
           .parseBase64Binary("32bytes/64hex dig.secret key");
      final Key signingKey = new SecretKeySpec(
             apiKeySecretBytes,
             signatureAlgorithm.getJcaName() );
      return BEARER + 
          Jwts.builder().setIssuedAt(new Date())
              .setSubject(userName)
              .setExpiration(
                 new Date( System.currentTimeMillis()
                         + MILISECS_TOKEN_EXPIRATION) )
              .signWith( signingKey,                              // ← Setup priv.key for JWT signatures
                         signatureAlgorithm )
              .compact();
    }
    public Boolean checkLoginOrThrow(String userName, String password) {
      // TODO:(0) send hash of user+pass?
      if (userRepository.findByLogin(userName, password) != 1) {
         throw new CustomSecurityException(...);
      }
      return true;
    }
    public AAAUserEntity findByUsernameOrThrow(String username) {
      try {
        return userRepository.findByUsername(username);
      } catch (Exception e) {
        throw new CustomSecurityException(...);
      }
    }
  }
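The signWith(key, HS256) call above boils down to an HMAC-SHA256 over base64url(header) + "." + base64url(payload). A plain-JDK sketch (no jjwt) of what the library computes; the header/payload JSON and the key below are illustrative values, not the real application secret:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Hs256Demo {
    static String b64url(byte[] bytes) {                  // JWT uses base64url WITHOUT padding
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Returns header.payload.signature, the three-part compact JWT form
    static String sign(String headerJson, String payloadJson, byte[] key) throws Exception {
        String signingInput = b64url(headerJson.getBytes(StandardCharsets.UTF_8))
                      + "." + b64url(payloadJson.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");          // "HS256" in JWT terms
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return signingInput + "."
             + b64url(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String jwt = sign("{\"alg\":\"HS256\",\"typ\":\"JWT\"}",
                          "{\"sub\":\"user1\"}",
                          "not-a-real-secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(jwt);                          // header.payload.signature
    }
}
```

Verifying a token is the same computation on the receiver side plus a constant-time comparison of the signature part, which is what parseClaimsJws() does internally.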
- file: com/myComp/apirest/AAAController.java [aaa]
    @RestController
    @RequestMapping(value = "/api/v1/AAA")
    @Api(tags = "aaa,auditing,...")
    public class AAAController {
      final Logger logger = LoggerFactory.getLogger(this.getClass());
      @Autowired private AAAService aaaService;
      @ApiOperation(value = "Login with a user")
      @PostMapping( value = "/login", produces = "application/json" )
      public ResponseEntity˂String /*(Token)*/˃ 
        login(@RequestBody UserPassDTO login) {
        aaaService.checkLoginOrThrow(login.getUsername(), login.getPassword()); // ← throws on bad credentials
        logger.info("login success for {}", login.getUsername());
        final String token = aaaService.createTokenForUsername(login.getUsername());          // [oauth] Create token upon successful login
        return new ResponseEntity˂˃(token, HttpStatus.OK);
      }
      @ApiOperation(value = "Get User")
      @GetMapping(value = "/getDetail", produces = "application/json")
      public ResponseEntity˂AAAUserEntity˃
        getUserDetail(@RequestParam(value = "username") String username) {
        AAAUserEntity user = aaaService.findByUsernameOrThrow(username);
        return new ResponseEntity˂˃(user, HttpStatus.OK);
      }
    }
- file: com/myComp/App.java                                      [configuration][devops]
BºMAIN (entry point to Spring Boot app)º

  package com.myComp;

  import org.springframework.boot.SpringApplication;
  import org.springframework.boot.autoconfigure.SpringBootApplication;
  import org.springframework.context.annotation.ComponentScan;
  @SpringBootApplication                           // ← @Configuration + @EnableAutoConfiguration + scan
  @ComponentScan({ "com.myComp"})                  // ←   Package to scan for Spring components
  public class App {
    public static void main(String[] args) {
      SpringApplication.run(App.class, args);
    }
  }

- file: com/myComp/config/CustomControllerAdvice.java [qa] [error_control]

    package com.myComp.config;
    import java.net.HttpURLConnection;
    import javax.validation.ConstraintViolation;
    import javax.validation.ConstraintViolationException;
    import org.hibernate.exception.JDBCConnectionException;
    import org.springframework.http.*;
    import org.springframework.validation.FieldError;
    import org.springframework.validation.ObjectError;
    import org.springframework.web.bind.MethodArgumentNotValidException;
    import org.springframework.web.bind.MissingServletRequestParameterException;
    import org.springframework.web.bind.annotation.ControllerAdvice;
    import org.springframework.web.bind.annotation.ExceptionHandler;
    import org.springframework.web.context.request.WebRequest;
    import org.springframework.web.method.annotation.MethodArgumentTypeMismatchException;
    import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;
    // TODO:(qa) Review. 
    @ControllerAdvice                                              // ← (Spring 3.2+): Bºhandle exceptions acrossº
    public class CustomControllerAdvice                            // Bºwhole applicationº (vs individual controller).
    extends ResponseEntityExceptionHandler {                       //   "Sort of" exception-interceptor thrown by
                                                                   //   methods annotated with @RequestMapping.

      @ExceptionHandler(JDBCConnectionException.class)           //  ← Allows different exception handling by
      public ResponseEntity˂Object˃                              //    type (Recoverable, external, internal, 
      connectionException(final JDBCConnectionException e) {     //    ...)
        // log, notifications, ...
        CustomClientErrorNotification customErr = 
          new CustomClientErrorNotification(...);
        return new ResponseEntity˂˃(customErr,
          HttpStatus.BAD_REQUEST );
      }
      @Override protected ResponseEntity˂Object˃
      handleMethodArgumentNotValid(
        MethodArgumentNotValidException ex,
        HttpHeaders headers, HttpStatus status, WebRequest request) {
        final List˂String˃ error_list = new ArrayList˂˃();
        for (FieldError error : ex.getBindingResult().getFieldErrors()) {
          error_list.add(error.getField() + ": " + error.getDefaultMessage());
        }
        CustomClientErrorNotification customErr = 
          new CustomClientErrorNotification(error_list, ...);
        return new ResponseEntity˂˃(customErr,  HttpStatus.BAD_REQUEST);
      }
      @Override protected ResponseEntity˂Object˃
      handleMissingServletRequestParameter(...) { ... }
      @ExceptionHandler(ConstraintViolationException.class)
      public ResponseEntity˂Object˃
      handleConstraintViolation(...) { ... }
    }
- file: com/myComp/config/ConfigurationCore.java [configuration]   // ← Main Config point. 
    package com.myComp.config;                                     //   (autoscan is another possibility)
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.client.RestTemplate;
    import org.springframework.web.filter.CharacterEncodingFilter;
    import org.web3j.protocol.admin.Admin;
    import org.web3j.protocol.Web3j;
    @Configuration
    public class ConfigurationCore {
      @Bean public Service1 getService1() { return new Service1Impl(); }
      @Bean public Service2 getService2() { return new Service2Impl(); }
      @Bean public Service3 getService3() { return new Service3Impl(); }
      @Bean public Entity1Service 
      getEntity1Service() {  return new Entity1ServiceImpl(); }   // ← [persistence][JPA] 
      @Bean public Entity2Service 
      getEntity2Service() {  return new Entity2ServiceImpl(); }   // ← [persistence][JPA] 
      @Bean CharacterEncodingFilter characterEncodingFilter() {
        final CharacterEncodingFilter filter = 
            new CharacterEncodingFilter();
        filter.setEncoding("UTF-8");
        filter.setForceEncoding(true);
        return filter;
      }
      @Bean AAAService getAAAService() {                      // ← [aaa]
        return new AAAServiceImpl();
      }
    }
- file: com/myComp/apirest/ControllerService1.java 
    package com.myComp.api.serviceZ.controller;
    import javax.transaction.Transactional;                          // ← [persistence][jpa][erro_control][qa]
    import javax.validation.Valid;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.http.HttpStatus;
    import org.springframework.http.ResponseEntity;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestParam;
    import org.springframework.web.bind.annotation.RestController;
    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;
    @RestController
    @RequestMapping(value = "/api/v1/service1")
    @Api(tags = "service1,topic1,topic2")
    public class ControllerService1 {
      final Logger logger = LoggerFactory.getLogger(this.getClass());

      @Autowired private AAAService aaaService;           // [aaa]
      @Autowired Service1 service1;
      @Autowired Service2 service2;
      @Autowired Service3 service3;

      @Autowired Entity1Service entity1Service;               // Entity1Service uses Entity1Repository for queries
                                                              // and the EntityManager for inserts/deletes/...
      @ApiOperation(value = "human readable api summary")
      @PostMapping(
         value = "/search/entity1",                           // ← Final URL /api/v1/service1/search/entity1
         produces = "application/json")
      public ResponseEntity
        ˂List˂Entity1˃˃ searchEntity1(
          @RequestBody CustomSearchRequest request)
          throws IllegalAccessException {                     // ← [aaa]
        final List˂Entity1˃ response = 
           service1.getEntity1ListQuery1(request);            // ← Note: Throw exception on the service
                                                              //   implementation if some error arises
                                                              //   (vs returning null). Then configure
                                                              //   CustomControllerAdvice to handle generic errors.
        return new ResponseEntity˂˃(response, HttpStatus.OK);
      }
      @Transactional                                             // ← [jpa] declaratively control TX boundaries on
                                                                 //   CDI managed beans and Java EE managed beans.
                                                                 //   (class or method level)
      @ApiOperation(value = "human readable api summary")
      @PostMapping(
         value = "/entity1",
         produces = "application/json")
      public ResponseEntity˂Void˃ create(
         @RequestBody Entity1 jsonEntity1,
         @RequestHeader(name = "Authorization") String token,
         @RequestParam(name = "param1", required = true) String param1,
         @RequestParam(name = "param2", required = true) String param2
      ) {
        final String username = aaaService.getUserByToken(token);  // [aaa]
        AAAUserEntity user = aaaService.findByUsernameOrThrow(username);
        entity1Service.insert(jsonEntity1);
        return new ResponseEntity˂˃(HttpStatus.CREATED);
      }

      @ApiOperation(value = "human readable api summary")
      @GetMapping(value = "/entity1/{entity1_id}")
      public ResponseEntity˂Entity1˃ getChartJson(
          @PathVariable int entity1_id,
          @RequestHeader (name="Authorization") String token,
          @RequestParam  (name = "startIndex", required = false) Long param1,
          @RequestParam  (name = "maxRows"   , required = false) Long param2)
      { ... }
- file: com/myComp/service/Entity1Service.java 
    package com.myComp.api.serviceZ.service;
    import java.util.List;
    public interface Entity1Service {
      List˂Entity1˃ getEntity1ListQuery1(CustomSearchRequest req);
      void          insert             (Entity1 entity);
    }
- file: com/myComp/service/Entity1ServiceImpl.java 
    package com.myComp.api.serviceZ.service;
    import javax.persistence.*;
    import javax.persistence.criteria.*;
    import javax.transaction.Transactional;
    public class Entity1ServiceImpl implements Entity1Service {
      @Autowired Entity1Repository entity1Repository;                   // [persistence][jpa]
      @Autowired Entity2Repository entity2Repository;
      @Autowired EntityManager em;                                      // [persistence][jpa]
      @Transactional                                                    // [persistence][jpa]
      public void insert(Entity1 entity1) {
        em.persist(entity1);                                            // INSERT INTO ... [persistence][jpa]
        // anything else (persist/update related entities, ...)
      }
      public List˂Entity1˃ getEntity1ListQuery1(CustomSearchRequest req) {
        return entity1Repository.query1(req.entity3Id, req.col2Value);
      }
      public List˂Entity1˃ getEntity1ListQuery2(CustomSearchRequest req) {
        final CriteriaBuilder        cb    = em.getCriteriaBuilder();          // [persistence][jpa]
        final CriteriaQuery˂Entity1˃ query = cb.createQuery(Entity1.class);    // [persistence][jpa]
        final Root˂Entity1˃          root  = query.from(Entity1.class);        // [persistence][jpa]
        final Predicate  p1 = cb.equal( ... ), // [TODO]
                         p2 = cb.equal( ... ); 
        final Predicate all = cb.and(p1, p2);
        query.select(root).where(all);
        TypedQuery˂Entity1˃ typedQuery = em.createQuery(query);
        return typedQuery.getResultList();
      }
    }
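Criteria predicates combine the same way java.util.function.Predicate objects do. A plain-Java analogue (no JPA; the Row record and column names are made up for illustration) showing the cb.and(p1, p2) composition in isolation:

```java
import java.util.List;
import java.util.function.Predicate;

public class PredicateComposeDemo {
    record Row(long entity3Id, String col2Value) {}            // stand-in for Entity1 columns

    static List<Row> query(List<Row> table, long id, String col2) {
        Predicate<Row> p1  = r -> r.entity3Id() == id;         // ≈ cb.equal(root.get("entity3Id"), id)
        Predicate<Row> p2  = r -> r.col2Value().equals(col2);  // ≈ cb.equal(root.get("col2Value"), col2)
        Predicate<Row> all = p1.and(p2);                       // ≈ cb.and(p1, p2)
        return table.stream().filter(all).toList();            // ≈ typedQuery.getResultList()
    }

    public static void main(String[] args) {
        List<Row> table = List.of(new Row(1, "a"), new Row(1, "b"), new Row(2, "a"));
        System.out.println(query(table, 1, "a").size());       // → 1
    }
}
```

The difference, of course, is that Criteria predicates are translated to SQL and evaluated in the database, not in the JVM.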
Spring Boot/Cloud Summary
Spring Cloud Configuration
- PRESETUP: 2+ APIs running independently have already been developed.

- Target: Bootstrap next micro-services:
  1) Configuration Server:
     - centralizes micro-services configuration.
       (Sort of "etcd" for Spring)

  2) Discovery Server: allow apps to find each other

  3) Gateway Server: reverse proxy encapsulating all
     independent micro-services in a single port.

   └ 1) Configuration Server HOW-TO:
   | └ Configuration Ser. Maven Deps:
   | · Configuration Server deps:
   | · ˂dependency˃
   | ·     ˂groupId˃org.springframework.cloud˂/groupId˃
   | ·     ˂artifactId˃spring-cloud-config-server˂/artifactId˃
   | · ˂/dependency˃
   | · ˂dependency˃
   | ·     ˂groupId˃org.springframework.cloud˂/groupId˃
   | ·     ˂artifactId˃spring-cloud-starter-eureka˂/artifactId˃
   | · ˂/dependency˃
   | · ˂dependency˃
   | ·      ˂groupId˃org.springframework.boot˂/groupId˃
   | ·      ˂artifactId˃spring-boot-starter-security˂/artifactId˃
   | · ˂/dependency˃
   | ·
   | └  Configuration Server IoC Setup:
   | ·    @SpringBootApplication
   | ·    @EnableConfigServer       ← Turn the app into a Configuration Server
   | ·    @EnableEurekaClient       ← Make it discoverable via the Eureka client
   | ·    public class ConfigApplication {
   | ·      ...
   | ·    }
   | ·
   | └ Configuration Server Config. file:
   | · - application.properties:
   | ·   server.port=8081
   | ·   spring.application.name=config
   | ·   spring.cloud.config.server.git.uri=            ← set to real git path
   | ·          file:///${user.home}/application-config
   | ·   eureka.client.region=default                    
   | ·   eureka.client.registryFetchIntervalSeconds=5
   | ·   eureka.client.serviceUrl.defaultZone=
   | ·          =http://discUser:discPassword@localhost:8082/eureka/
   | ·   security.user.name=configUser
   | ·   security.user.password=configPassword
   | ·   security.user.role=SYSTEM

   └ 2) Discovery Server HOW-TO:
     └ Discovery Ser. Maven Deps:

     └  Discovery Server IoC Setup:
          @SpringBootApplication
          @EnableEurekaServer
          class DiscoveryApplication {...}

     └ Discovery Service: Secure Server endpoints:
       @Order(1)                          ←  There are two security configurations for the
       public class SecurityConfig           Discovery Serv. endpoints + dashboard.
       extends WebSecurityConfigurerAdapter {
          public void configureGlobal(
                    AuthenticationManagerBuilder auth) 
          protected void configure(HttpSecurity http) {

     └ Secure Discovery Serv. Eureka dashboard:
       public static class AdminSecurityConfig
       extends WebSecurityConfigurerAdapter {
           protected void configure(HttpSecurity http) {
             .antMatchers(HttpMethod.GET, "/").hasRole("ADMIN")
             .antMatchers("/info", "/health").authenticated()

     └ Discovery Service config. Files: 
       - bootstrap.properties
         spring.cloud.config.name=discovery              ← Must match the Discovery serv. file name
                                                           in the configuration repository.
         spring.cloud.config.uri=http://localhost:8081   ← URL of the config. server 

       - discovery.properties                            ← Add also to the application-config Git repo

   └ 3) Gateway Server HOW-TO:
     └ Gateway Ser. Maven Deps:
     └ Gateway Server IoC Setup:
       public class GatewayApplication {}

     └ Secure Gateway Server:
       public class SecurityConfig extends WebSecurityConfigurerAdapter 
         public void configureGlobal(AuthenticationManagerBuilder auth)
         throws Exception {

       protected void configure(HttpSecurity http) throws Exception {
     └ Secure Gateway Config. files
       - resources/bootstrap.properties:

       - gateway.properties:  (from app-config Git repo)
         eureka.client.region = default
         eureka.client.registryFetchIntervalSeconds = 5
         zuul.routes.book-service.path=/book-service/**            ← route any request to /book-service/** to the Book Service app

- Common Maven Depen. for Config Client, Eureka, JPA, Web and Security:

- (Sharing) Session Configuration:
  - Maven dependencies to add to Discovery server, gateway server and micro-service1/2/... servers

  - Add the next IoC setup to the Discovery Server and REST APIs.
    public class SessionConfig 
    extends AbstractHttpSessionApplicationInitializer {  }

  - For the Gateway Server:
    @EnableRedisHttpSession(redisFlushMode = RedisFlushMode.IMMEDIATE)
    public class SessionConfig 
    extends AbstractHttpSessionApplicationInitializer {}

  - For the Gateway Server add a simple filter to forward 
    the session so that authentication will propagate to
    another service after login:

    public class SessionSavingZuulPreFilter
    extends ZuulFilter {
      @Autowired
      private SessionRepository repository;
      public boolean shouldFilter() {
        return true;
      }
      public Object run() {
        RequestContext context = RequestContext.getCurrentContext();
        HttpSession httpSession = context.getRequest().getSession();
        Session session = repository.getSession(httpSession.getId());
        context.addZuulRequestHeader(
          "Cookie", "SESSION=" + httpSession.getId());
        return null;
      }
      public String filterType() {
        return "pre";
      }
      public int filterOrder() {return 0;}
    }


private final String ROOT_URI = "http://localhost:8080";
private FormAuthConfig formConfig
   = new FormAuthConfig("/login", "username", "password");
public void setup() {
  RestAssured.config = config().redirect(
      redirectConfig().followRedirects(false));         ← keep 302 redirects visible to the tests below
}

public void whenGetAllBooks_thenSuccess() {
  Response response = RestAssured.get(ROOT_URI + "/book-service/books");
  Assert.assertEquals(HttpStatus.OK.value(), response.getStatusCode());
}

// Try to access protected resource:
public void whenAccessProtectedResourceWithoutLogin_thenRedirectToLogin() {
  Response response = RestAssured.get(ROOT_URI + "/book-service/books/1");
  Assert.assertEquals(HttpStatus.FOUND.value(), response.getStatusCode());
  Assert.assertEquals("http://localhost:8080/login", response.getHeader("Location"));
}

  ┌────────→ @SpringBootApplication      HTTP Request                        ºCLOUDº
beans to      ┌──like─  ─┐
  │           │  these   │         ┌───────────────────┐ ask config       ┌───────────────┐
 ┌────────────┴──┐       v         │ @RestController   ───────────────────→ Configuration │
 │@Configuration │   ┌────────┐    │                   │ config.propert.  │    Server     │
 │               │   │@Service│    │                   │←──────────────── └───────────────┘
 │               │   └──────┬─┘    │  @Autowired       │
 │ @Bean         │          └──┬────→ Service service; │register itself as service
 │ public MyBean │   ┌─────────┴┐  │                   ├──────────────────→──────────┐
 │ providerBean()│   │@Component│  │                   │ask for service   │  Service │
 │               │   └──────────┘  │  @RequestMapping  ├─────────────────→  Discovery│
 └───────────────┘                 │  public Map       │←──────────────── └──────────┘
                                   │   serverRequest() │ URL response

º@EnableConfigServerº  turns the app into a server that other apps can get
                       their configuration from.
                       Use spring.cloud.config.uri in the
                       client @SpringBootApplication
                       to point to the config server.

º@EnableEurekaServerº  turns your app into a Eureka discovery service

º@EnableDiscoveryClientº makes your app register in the service discovery
                        server and discover other services through it.

º@EnableCircuitBreakerº- configures Hystrix circuit breaker protocols.

º@HystrixCommand(fallbackMethod = "fallbackMethodName")º
  marks methods to fall back to another method if they cannot succeed normally.
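The fallback idea behind @HystrixCommand can be sketched in plain Java (no Hystrix dependency): run the primary call and switch to the named fallback on failure. A real circuit breaker additionally tracks failure rates and short-circuits calls; this sketch only shows the fallback wiring:

```java
import java.util.function.Supplier;

public class FallbackDemo {
    // ≈ @HystrixCommand(fallbackMethod = "..."): use fallback when the primary call fails
    static <T> T withFallback(Supplier<T> primary, Supplier<T> fallback) {
        try {
            return primary.get();
        } catch (RuntimeException e) {   // a real breaker also counts failures and opens the circuit
            return fallback.get();
        }
    }

    public static void main(String[] args) {
        String up   = withFallback(() -> "live data", () -> "cached data");
        String down = withFallback(() -> { throw new RuntimeException("service down"); },
                                   () -> "cached data");
        System.out.println(up + " / " + down);   // → live data / cached data
    }
}
```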
Spring: Non classified
Spring  Batch
FROM https://stackoverflow.com/questions/33188368/spring-batch-vs-quartz-jobs
Quartz is a scheduling framework, as in "execute something every hour
or every last Friday of the month".

Spring Batch is a framework that defines the "something" that will
be executed. You can define a job that consists of steps. Usually a
step consists of an item reader, an optional item processor and an
item writer, but you can also define a custom step. You can also
tell Spring Batch to commit every 10 items, and a lot of other
things. The Spring core framework can also schedule tasks on its
own (@Scheduled).
(See also https://jcp.org/en/jsr/detail?id=352, Batch applications 
  for the Java Platform)
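A reader/processor/writer step with a commit interval can be sketched in plain Java (no Spring Batch types; all names are illustrative). It shows why "commit every 10 items" means the writer receives whole chunks, not single items:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class ChunkStepDemo {
    // ≈ one chunk-oriented step: read → process → write, one "commit" per chunk
    static <I, O> List<List<O>> runStep(Iterator<I> reader,
                                        Function<I, O> processor,
                                        int commitInterval) {
        List<List<O>> committed = new ArrayList<>();   // stand-in for the item writer + TX log
        List<O> chunk = new ArrayList<>();
        while (reader.hasNext()) {
            O item = processor.apply(reader.next());
            if (item != null) chunk.add(item);         // null = item filtered out (Spring Batch convention)
            if (chunk.size() == commitInterval) {
                committed.add(List.copyOf(chunk));     // ≈ writer.write(chunk) inside one transaction
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) committed.add(List.copyOf(chunk)); // final partial chunk
        return committed;
    }

    public static void main(String[] args) {
        List<Integer> input = new ArrayList<>();
        for (int i = 1; i <= 25; i++) input.add(i);
        List<List<String>> chunks = runStep(input.iterator(), i -> "item-" + i, 10);
        System.out.println(chunks.size());             // → 3 commits (10 + 10 + 5 items)
    }
}
```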

scan config auto-detection
Spring vs Guice
Reactive (5.0+)
- Note: Servlet 3.1+ API for non-blocking I/O leads away from
  the rest of the Servlet API where contracts are synchronous
  (Filter, Servlet) or blocking (getParameter, getPart).
- fully non-blocking, handling concurrency with a small number of threads
- supports Reactive Streams non-blocking back pressure:
  In synch/imperative code, blocking calls serve as a natural form
  of back pressure that forces the caller to wait.
  In non-blocking code it becomes important to control the rate
   of events so that a fast producer does not overwhelm its destination.
  Reactive Streams is a small spec, also adopted in Java 9,
  that defines the interaction between asynchronous components
  with back pressure. Ex: a data repository (Publisher),
  produces data that an HTTP server (Subscriber), can then "forward"
  to the response. Main purpose of Reactive Streams is to allow
  the subscriber to control how fast or how slow the publisher
  will produce data.
  If a publisher can’t slow down then it has to decide whether
  to buffer, drop, or fail.
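The back-pressure contract ("the subscriber controls the rate") is exactly the java.util.concurrent.Flow API adopted in Java 9. A minimal, dependency-free sketch where the subscriber pulls one item at a time:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {
    // Publishes 1..n; the subscriber requests ONE item at a time (back pressure).
    static String collect(int n) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder received = new StringBuilder();
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;
                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                    // ← ask for a single item, not Long.MAX_VALUE
                }
                public void onNext(Integer item) {
                    received.append(item).append(' ');
                    subscription.request(1);         // ← pull the next item only when ready
                }
                public void onError(Throwable t) { done.countDown(); }
                public void onComplete()         { done.countDown(); }
            });
            for (int i = 1; i <= n; i++) publisher.submit(i); // submit() blocks if the buffer fills up
        }                                            // close() → onComplete after pending items
        done.await();
        return received.toString().trim();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collect(3));              // → 1 2 3
    }
}
```

If the publisher outpaces the subscriber, SubmissionPublisher buffers up to its capacity and then blocks submit(), which is the "buffer, drop, or fail" decision made concrete.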
- As a general rule WebFlux APIs accept a plain Publisher as input,
  adapt it to Reactor types internally, use those, and then return
  either Flux or Mono as output.
- runs on Netty, Undertow, Servlet 3.1+ containers
- TODO: WebClient
- TODO: WebTestClient
- TODO: WebSocket
- The spring-web module contains the reactive building blocks:
  HTTP abstractions, Reactive Streams server adapters, reactive codecs,
  and a core Web API.
- Public spring-web server support is organized in two layers:
  - HttpHandler and server adapters : the most basic, common API for HTTP
    request handling with Reactive Streams back pressure, running on
    different servers.
  - WebHandler API : slightly higher level but still general purpose server
    web API with exception handlers (WebExceptionHandler), filters (WebFilter),
    and a target handler (WebHandler)
    All components work on ServerWebExchange — a container for the HTTP
    request and response that also adds request attributes, session attributes,
    access to form data, multipart data, and more.
- Codecs: The spring-web module provides
  HttpMessageReader(DecoderHttpMessageReader) and
  HttpMessageWriter(EncoderHttpMessageWriter) for encoding and decoding the
  HTTP request and response body with Reactive Streams.
  Basic Encoder and Decoder implementations exist in spring-core but
  spring-web adds more for JSON, XML, and other formats.

- DispatcherHandler: the central controller.
- It discovers the delegate components it needs from Spring configuration.
  If declared with the bean name "webHandler" it is in turn
  discovered by WebHttpHandlerBuilder, which puts together a
  request processing chain as described in WebHandler API.
- typical WebFlux application Spring configuration:
  - DispatcherHandler named "webHandler"
  - WebFilters
  - WebExceptionHandlers
  - DispatcherHandler special beans
  - Others
- The configuration is given to WebHttpHandlerBuilder to
  build the processing chain:
 (The resulting HttpHandler is ready for use with a server adapter)
  ApplicationContext context = ...
  HttpHandler handler = WebHttpHandlerBuilder.applicationContext(context).build();
- "special beans":  Spring-managed instances implementing one of the contracts listed:

  Bean type            | Explanation
  HandlerMapping       | Map a request to a handler.
                       | mapping is based on some criteria
                       | the details of which vary by
                       | HandlerMapping implementation 
                       | (annotated controllers,
                       | simple URL pattern mappings,...)
  HandlerAdapter       | Helps the DispatcherHandler to
                       | invoke a handler mapped to a
                       | request regardless of how the
                       | handler is actually invoked.
                       | For example invoking an annotated
                       | controller requires resolving
                       | various annotations. The main
                       | purpose of a HandlerAdapter
                       | is to shield the DispatcherHandler
                       | from such details.
  HandlerResultHandler | Process the HandlerResult returned
                       | from a HandlerAdapter

- request flow:
  for map in HandlerMapping_list:
    // (continue if map doesn't match the request)
    handler = first handler in map matching request
    HandlerResult res = handler()

  1) Each HandlerMapping is asked to find a
     matching handler and the first match is used
  2) If a handler is found, it is executed through
     an appropriate HandlerAdapter, which exposes
     the return value from the execution as a HandlerResult.
  3) The HandlerResult is given to an appropriate
     HandlerResultHandler to complete processing
     by writing to the response directly or using
     a view to render.
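The three steps above can be sketched with plain Java stand-ins. The interfaces here are simplified hypothetical analogues, not Spring's actual HandlerMapping/HandlerAdapter/HandlerResultHandler contracts:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// Simplified sketch of the DispatcherHandler request flow described above.
// These types are hypothetical stand-ins, NOT Spring's actual interfaces.
public class DispatchSketch {
    /** Stand-in for HandlerMapping: may or may not resolve a handler. */
    interface HandlerMapping {
        Optional<Function<String, String>> getHandler(String request);
    }

    static String dispatch(List<HandlerMapping> mappings, String request) {
        // 1) ask each HandlerMapping in turn; first match wins
        for (HandlerMapping m : mappings) {
            Optional<Function<String, String>> handler = m.getHandler(request);
            if (handler.isPresent()) {
                // 2) "HandlerAdapter" step: invoke the handler uniformly
                String handlerResult = handler.get().apply(request);
                // 3) "HandlerResultHandler" step: write the result to the response
                return "response:" + handlerResult;
            }
        }
        return "404"; // no mapping matched
    }

    public static void main(String[] args) {
        HandlerMapping users = req -> req.startsWith("/users")
                ? Optional.<Function<String, String>>of(r -> "user-list")
                : Optional.empty();
        System.out.println(dispatch(List.of(users), "/users"));  // response:user-list
        System.out.println(dispatch(List.of(users), "/orders")); // 404
    }
}
```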

BºProcessing Chainº
- The processing chain can be put together with WebHttpHandlerBuilder which builds an
HttpHandler that in turn can be run with a server adapter.
To use the builder either add components individually or point to an ApplicationContext
to have the following detected:

 │Bean name             │Bean type            │Count│ Description
 │webHandler            │WebHandler           │1    │ Target handler after filters
 │"any"                 │WebFilter            │0..N │ Filters
 │"any"                 │WebExceptionHandler  │0..N │ Exception handlers after filter chain
 │webSessionManager     │WebSessionManager    │0..1 │ Custom session manager
 │                      │                     │     │ DefaultWebSessionManager by default
 │serverCodecConfigurer │ServerCodecConfigurer│0..1 │ Custom form and multipart data decoders
 │                      │                     │     │ ServerCodecConfigurer.create() by default
 │localeContextResolver │LocaleContextResolver│0..1 │ Custom resolver for LocaleContext;
 │                      │                     │     │ AcceptHeaderLocaleContextResolver by default

BºRequired dependenciesº
Server name     │  Group id              │ Artifact name      │  Code snippet
Reactor Netty   │ io.projectreactor.ipc  │ reactor-netty      │ HttpHandler handler = ...
                │                        │                    │ ReactorHttpHandlerAdapter adapter =
                │                        │                    │     new ReactorHttpHandlerAdapter(handler);
                │                        │                    │ HttpServer.create(host, port).
                │                        │                    │     newHandler(adapter).block();
Undertow        │ io.undertow            │ undertow-core      │ HttpHandler handler = ...
                │                        │                    │ UndertowHttpHandlerAdapter adapter =
                │                        │                    │      new UndertowHttpHandlerAdapter(handler);
                │                        │                    │ Undertow server = Undertow.builder().
                │                        │                    │      addHttpListener(port, host).
                │                        │                    │      setHandler(adapter).build();
                │                        │                    │ server.start();
Tomcat          │ org.apache.tomcat.embed│ tomcat-embed-core  │ HttpHandler handler = ...
                │                        │                    │ Servlet servlet = new
                │                        │                    │     TomcatHttpHandlerAdapter(handler);
                │                        │                    │
                │                        │                    │ Tomcat server = new Tomcat();
                │                        │                    │ File base = new File(
                │                        │                    │    System.getProperty("java.io.tmpdir"));
                │                        │                    │ Context rootContext = server.
                │                        │                    │    addContext("", base.getAbsolutePath());
                │                        │                    │ Tomcat.addServlet(rootContext, "main", servlet);
                │                        │                    │ rootContext.addServletMappingDecoded("/", "main");
                │                        │                    │ server.setHost(host);
                │                        │                    │ server.setPort(port);
                │                        │                    │ server.start();
Jetty           │ org.eclipse.jetty      │ jetty-server       │ HttpHandler handler = ...
                │                        │ jetty-servlet      │ Servlet servlet =
                │                        │                    │     new JettyHttpHandlerAdapter(handler);
                │                        │                    │
                │                        │                    │ Server server = new Server();
                │                        │                    │ ServletContextHandler contextHandler =
                │                        │                    │     new ServletContextHandler(server, "");
                │                        │                    │ contextHandler.addServlet(
                │                        │                    │     new ServletHolder(servlet), "/");
                │                        │                    │ contextHandler.start();
                │                        │                    │
                │                        │                    │ ServerConnector connector =
                │                        │                    │     new ServerConnector(server);
                │                        │                    │ connector.setHost(host);
                │                        │                    │ connector.setPort(port);
                │                        │                    │ server.addConnector(connector);
                │                        │                    │ server.start();

GraalVM issues
Working toward GraalVM native image support without requiring additional
configuration or workaround is one of the themes of upcoming Spring Framework
5.3. The main missing piece for considering GraalVM as a suitable deployment
target for Spring applications is providing custom GraalVM Feature
implementation at Spring Framework level to automatically register classes
used in the dependency mechanism or Spring factories, see the related issue #
22968 for more details.
STOMP: WebSockets
JHipster is a development platform to generate, develop and deploy
Spring Boot + Angular / React / Vue Web applications and Spring
microservices.
Async/Reactive Programming
Lambdas Intro
- The addition of lambda expressions in Java 8 provides for functional APIs in Java
  and simplifies development of non-blocking style APIs
  (low-level CompletableFuture or higher-level ReactiveX).
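A minimal stdlib-only sketch of that non-blocking style with CompletableFuture; `fetchPrice()` is a hypothetical async source:

```java
import java.util.concurrent.CompletableFuture;

// Lambdas + CompletableFuture: composing follow-up work as callbacks instead
// of blocking a thread. fetchPrice() is a hypothetical async source.
public class AsyncSketch {
    static CompletableFuture<Integer> fetchPrice() {
        return CompletableFuture.supplyAsync(() -> 42); // simulated async fetch
    }

    public static void main(String[] args) {
        CompletableFuture<String> result = fetchPrice()
                .thenApply(p -> p * 2)          // runs when the value arrives
                .thenApply(p -> "price=" + p);  // no thread is blocked meanwhile
        System.out.println(result.join());      // price=84
    }
}
```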

  The conventional computing model of a Turing machine takes for granted
  that data is already available to be processed.

  In the Internet era, data arrives at random and we don't want to block
  our CPU in an infinite loop waiting for such data to arrive.

  The conventional approach is to let the OS scheduler divide the CPU
  into threads or processes sharing the hardware at defined intervals.
  While this approach works well for standard load scenarios, it fails for
  "modern" workloads with thousands or tens of thousands of simultaneous
  clients accessing the server. Each new OS thread requires some extra
  memory in the OS kernel (several kilobytes per thread, and even more
  per process). Switching from thread to thread or process to process becomes
  expensive or prohibitive with that number of concurrent I/O flows.
  This is even worse when our server is virtualized with many other
  competing VMs running on the same physical server.

  Async programming tries to reuse the same thread for many different
  clients or flows of I/O data, providing a much better usage of hardware
  resources and avoiding unnecessary context switches between threads or
  processes.

  The term "reactive" refers to programming models that are built around
  reacting to change — network component reacting to I/O events, UI controller
  reacting to mouse events, etc. In that sense non-blocking is reactive because
  instead of being blocked we are now in the mode of reacting to notifications
  as operations complete or data becomes available.

  Reactive Streams is a small spec, also adopted in Java 9, that defines
  the interaction between asynchronous components with back pressure. For
  example a data repository — acting as Publisher, can produce data that an
  HTTP server — acting as Subscriber, can then write to the response. The main
  purpose of Reactive Streams is to allow the subscriber to control how fast or
  how slow the publisher will produce data.

  Reactive Streams is of interest to low-level reusable libraries, but
  final applications are better suited to a higher-level and richer
  (functional) API like the Java 8+ Stream API or, more generally, APIs
  like those provided by ReactiveX.

  Reactive programming can also be compared with the way data flows in Unix
  pipelines when handling text files. In the next Unix command there is a
  file input (it can be a real file in the hard-disk or a socket receiving
  data) and the different commands in the pipe consume STDIN and result to
  STDOUT for further processing.
  $ cat input.csv | grep "...." | sort | uniq | ... ˃  output.csv
  Reactive Java frameworks are usually much faster since everything executes
  in the same process (a Unix pipeline requires the help of the underlying
  OS to work), and the type of input/output data can be any sort of Java
  object (not just file text).
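The same pipeline shape can be written with the Java 8 Stream API, each stage consuming the previous stage's output inside the same process. A rough analogue, assuming the "grep" step is a simple contains-filter:

```java
import java.util.List;
import java.util.stream.Collectors;

// The Unix pipeline above rewritten as a Java 8 Stream pipeline: each stage
// consumes the previous stage's output inside the same process, no OS pipes.
public class PipelineSketch {
    static List<String> process(List<String> lines) {
        return lines.stream()
                .filter(l -> l.contains(","))   // ~ grep (keep matching lines)
                .sorted()                       // ~ sort
                .distinct()                     // ~ uniq
                .collect(Collectors.toList());  // ~ > output.csv
    }

    public static void main(String[] args) {
        System.out.println(process(List.of("b,2", "a,1", "b,2", "c,3")));
        // [a,1, b,2, c,3]
    }
}
```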
- JDK 1.9+
- Reactive Streams was adopted by the JDK in the form of the java.util.concurrent.Flow API.
- It allows two different libraries that support asynchronous streaming to connect to each other,
    with well specified semantics about how each should behave, so that backpressure, completion, cancellation
    and error handling is predictably propagated between the two libraries.
- There is a rich ecosystem of open source libraries that support Reactive Streams,
    and since its inclusion in JDK 9, there are a few in-development implementations
    targeting the JDK, including the incubating JDK 9 HTTP Client
    and the Asynchronous Database Adapter (ADBA)
    effort, which have also adopted it.
- (See also What can Reactive Streams offer to EE4J)
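A minimal sketch of the java.util.concurrent.Flow API using the JDK's stock SubmissionPublisher; the Subscriber drives back pressure by requesting one item at a time (class and helper names here are mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Minimal java.util.concurrent.Flow example (JDK 9+): the Subscriber drives
// back pressure by request()-ing one item at a time from the Publisher.
public class FlowSketch {
    static List<Integer> runDemo(int count) {
        List<Integer> received = Collections.synchronizedList(new ArrayList<>());
        CountDownLatch done = new CountDownLatch(1);

        Flow.Subscriber<Integer> subscriber = new Flow.Subscriber<>() {
            private Flow.Subscription subscription;
            public void onSubscribe(Flow.Subscription s) {
                subscription = s;
                s.request(1);                // ask for the first item only
            }
            public void onNext(Integer item) {
                received.add(item);
                subscription.request(1);     // pull the next item when ready
            }
            public void onError(Throwable t) { done.countDown(); }
            public void onComplete()         { done.countDown(); }
        };

        // SubmissionPublisher is the JDK's stock Flow.Publisher implementation.
        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(subscriber);
            for (int i = 1; i <= count; i++) publisher.submit(i);
        } // close() signals onComplete once pending items are delivered
        try { done.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(runDemo(3)); // [1, 2, 3]
    }
}
```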

ReactiveX provides a set of very-well thought-out cross-language abstraction to
implement the reactive patterns.

                                    unsubscribe  : - Observable can opt to stop
                                                   event-emission if no more clients
                                                    are subscribed
 Loop-of-Observable-emitted-events:                - unsubscription will cascade back
    Observable → ˂˂IObserver˃˃: onNext(event)      through the chain of operators
    observer → observer  : handle event            applying to associated Observable.
    also called
    "reactor"  ("reactor pattern")
 Observable → ˂˂handlerInstance˃˃: onCompleted()
 observer → observer  : handle event

 RºWARN:º There is no canonical naming standard in RxJava

 ºObservable˂T˃º → operator1 → ... → operatorN → ºObserverº
Oºpushesºobjects                                  Subscribes to
 (events) from                                    the observable
 any source                                       events
 (ddbb, csv,...)                                  .onNext()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^   .onCompleted()
            Learning which operators to use       .onError()
            for a situation and how to combine    .onSubscribe(Disposable d);
            them is the key to mastering RxJava   ^
                                                  │
├─onSubscribe(Disposable): provides the Disposable used to free any resources created by the RxJava pipeline.
├─onNext()     : passes each item, one at a time,   to the ºObserverº
├─onCompleted(): communicates a completion event    to the ºObserverº
└─onError()    : communicates an error up the chain to the ºObserverº
                 where the Observer typically defines how to handle it.
                 Unless retry() operator is used to intercept the error,
                 the Observable chain typically terminates, and no
                 more emissions will occur.
                 See also Gºoperators 'Catch' and 'Retry'º

By default, Observables execute work on the immediate thread,
which is the thread that declared the Observer and subscribed it.
Not all Observables fire on the immediate thread, though (Observable.interval(), ...).

ºcreating a source Observable:º
  Observable˂String˃ source00 = Observableº.justº("value1",...,"valueN");
  Observable˂Integer˃ source00b=Observableº.fromCallableº(() -˃ 1/0);
                                            ^^^^^^^^^^^^        ^^^
                                            Similar to .just() but errors
                                            are captured by the rxJava "pipeline"

  Observable˂String˃ source01 = Observableº.createº( emitter -˃ {
    try {
      emitter .onNext (myList.get(1));
      emitterº.onComplete()º; // ← Optional
    } catch(Throwable e) {
      emitter.onError(e);
    }
  } );
  Observable˂String˃  source02 = Observableº.fromIterableº(myIterableList);
  Observable˂Integer˃ source03 = Observableº.rangeº(1,10);
  Observable˂Long˃    source04 = Observableº.intervalº(1, TimeUnit.SECONDS);
                                             Since it operates on a timer →
                                             needs to run on separate thread
                                             and will run on the computation
                                             Scheduler by default

  Observable˂String˃  source05 = Observableº.fromFutureº(myFutureValue);
  Observable˂String˃  source06 = Observableº.emptyº();
                                             calls onComplete() and ends

  Observable˂Integer˃ source07 = Observableº.deferº( () -˃ Observable.range(start,count));
                                      Advanced factory pattern.
                                      allows a separate state for each observer
  Observable˂Integer˃ source08 = Observableº.fromCallableº( () -˃ someSlowMethod());
                                      (someSlowMethod() stands for any plain method
                                       returning a value, invoked lazily per observer)

ºcreating Single/Maybe/Completable "Utility" Observables:º
 │ Single.just("Hello")        │ Maybe.just("Hello")            │ Completable.fromRunnable(       │
 │ .subscribe(...);            │   .subscribe(...);             │   () -˃ runProcess() )          │
 │                             │                                │ .subscribe(...);                │
 │ Emits a single item         │ Emits (or not) a single item   │  does not receive any emissions │
 │ºSingleObserverº             │ ºMaybeObserverº                │ ºCompletableObserverº           │
 │ .onSubscribe(Disposable d); │  .onSubscribe(Disposable d);   │  .onSubscribe(Disposable d);    │
 │ .onSuccess(T value);        │  .onSuccess(T value);          │  .onComplete();                 │
 │ .onError(Throwable error);  │  .onError(Throwable error);    │  .onError(Throwable error);     │
 │                             │  .onComplete();                │                                 │

ºCreate Test-oriented observablesº

ºDerive Observables from source:º
Observable˂Integer˃ lengths  =  sourceº.mapº   (String::length);
Observable˂Integer˃ filtered = lengthsº.filterº(i -˃ i ˃= 5);

ºcreating an Observer:º
(Lambdas in the source Observable .subscribe can be used in place)
Observer˂Integer˃ myObserver = new Observer˂Integer˃() {
  @Override public void onSubscribe(Disposable d) { /* ... */ }
  @Override public void onNext(Integer value)     { log.debug("RECEIVED: " + value); }
  @Override public void onError(Throwable e)      { e.printStackTrace(); }
  @Override public void onComplete()              { log.debug("Done!"); }
};

BºCold/Hot Observablesº
  └ Cold: - Repeats the same content to different observers.
          - Represents sort-of immutable data.
          - A "cold" Observable waits until an observer subscribes to it,
            so an observer is guaranteed to see the whole sequence of events.
  └ Hot : - "Broadcasts" to all observers at the same time.
          - A "hot" Observable may begin emitting items as soon as it is created.
          - An observer connecting "later" will lose old emissions.
          - Represents ºreal-time eventsº. They are time-sensitive.
          - Emissions will start when the first observer calls connect().
          - A cold/hot observable can generate a new hot observable by
            calling publish(), which returns a hot ConnectableObservable.
            Helpful to avoid the replay of data to each subscribed Observer.
          - NOTE: A "Connectable" Observable does NOT begin emitting items
              until its connect() method is called, whether or not any observers
              have subscribed to it.
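The cold/hot distinction can be illustrated with a plain-Java analogy; this is not the RxJava API, and both "sources" here are hypothetical stand-ins:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Plain-Java analogy (NOT the RxJava API) of the cold/hot distinction:
// a cold source replays the full sequence to each subscriber, a hot
// source only delivers what is emitted after subscription.
public class ColdHotSketch {
    // Cold: each subscribe re-runs the whole emission from the start.
    static void coldSubscribe(Consumer<Integer> observer) {
        for (int i = 1; i <= 3; i++) observer.accept(i);
    }

    // Hot: emissions are broadcast to whoever is subscribed *right now*.
    static final List<Consumer<Integer>> hotObservers = new ArrayList<>();
    static void hotEmit(int value) { hotObservers.forEach(o -> o.accept(value)); }

    public static void main(String[] args) {
        List<Integer> late = new ArrayList<>();
        hotEmit(1);                   // nobody subscribed yet: emission is lost
        hotObservers.add(late::add);  // late subscriber
        hotEmit(2);
        hotEmit(3);
        System.out.println(late);     // [2, 3] - the late subscriber missed 1

        List<Integer> cold = new ArrayList<>();
        coldSubscribe(cold::add);     // cold source replays everything
        System.out.println(cold);     // [1, 2, 3]
    }
}
```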
BºAbout Nullº
 └ In RxJava 2.0, Observables ☞GºNO LONGER SUPPORT EMITTING null VALUESº☜ !!!

BºDecision Treeº (Choosing the right Operator for a task)
  (REF: @[http://reactivex.io/documentation/operators.html#tree])
  └ Alphabetical List of Observable Operators

BºCore APIº
  └ºrx.Observableº"==" [Java 8 Stream + CompletableFuture + "Back-pressure" measures ]
    @[http://reactivex.io/documentation/observable.html]    └───────────────┬──────┘
                                                            probably an intermediate
                                                            buffer for incoming/outgoing
                                                            messages that acts async
                                                            when not full, and sync when full.
    - ºrx.Singleº: specialized version emitting a single item
    - Compose Observables in a chain
    - gives the real "reactive" power
    - operators allow to transform, combine, manipulate, and work
      with the sequences of items emitted by Observables.
    - declarative programming
      - Most operators operate on an Observable and return an Observable.
        Each operator in the chain modifies the Observable that results
        from the operation of the previous operator. Order matters.
        (the Builder Pattern, also supported, is non-ordered)
    - A sort of "bridge or proxy" (Subject) is available in some implementations
      that acts both as an observer and as an Observable.
    - Needed when introducing multithreading into the
      cascade of Observable operators.
    - By default, the chain of Observables/operators
      will notify its observers ºon the same threadº
     ºon which its Subscribe method is calledº
    Operator|SubscribeOn         |ObserveOn
            |sets a Scheduler on |sets a Scheduler used
            |which the Observable|by the Observable to
            |should operate.     |send notifications to
            |                    |its observers.
    Scheduler "==" Thread

External links:
- Rx Workshop: Introduction
  @[https://channel9.msdn.com/Series/Rx-Workshop/Rx-Workshop-Introduction]
- Introduction to Rx: IObservable
  @[http://introtorx.com/Content/v1.0.10621.0/02_KeyTypes.html#IObservable]
- Mastering observables (from the Couchbase Server documentation)
  @[https://developer.couchbase.com/documentation/server/3.x/developer/java-2.0/observables.html]
- 2 minute introduction to Rx by Andre Staltz
  ("Think of an Observable as an asynchronous immutable array.")
  @[https://medium.com/@andrestaltz/2-minute-introduction-to-rx-24c8ca793877]
- Introducing the Observable by Jafar Husain (JavaScript Video Tutorial)
  @[https://egghead.io/lessons/rxjs-introducing-the-observable]
- Observable object (RxJS) by Dennis Stoyanov
  @[http://xgrommx.github.io/rx-book/content/observable/index.html]
- Turning a callback into an Rx Observable by @afterecho
  @[https://afterecho.uk/blog/turning-a-callback-into-an-rx-observable.html]
Ops.classification 1
ºOperators creating new Observablesº
@[http://reactivex.io/documentation/operators.html#creating]
ºCreate  º create an Observable from scratch, programmatically
ºDefer   º do not create the Observable until the observer subscribes,
           and create a fresh Observable for each observer
ºEmpty   º create Observables that have very precise and limited behavior
ºNever   º "
ºThrow   º "
ºFrom    º convert some other object or data structure into an Observable
ºIntervalº create an Observable that emits a sequence of integers spaced
           by a particular time interval
ºJust    º convert an object or a set of objects into an Observable that
           emits that or those objects
ºRange   º create an Observable that emits a range of sequential integers
ºRepeat  º create an Observable that emits a particular item or sequence
           of items repeatedly
ºStart   º create an Observable that emits the return value of a function
ºTimer   º create an Observable that emits a single item after a given delay

ºOperators Transforming Itemsº
@[http://reactivex.io/documentation/operators.html#transforming]
ºBuffer  º periodically gather items from input into bundles and emit these
           bundles rather than emitting the items one at a time
ºFlatMap º transform the items emitted by an Observable into Observables,
           then flatten the emissions from those into a single Observable
ºGroupBy º divide an Observable into a set of Observables that each emit a
           different group of items from the original Observable,
           Gºorganized by keyº
ºMap     º transform each input-item by applying a function
ºScan    º apply a function to each item emitted by an Observable,
           sequentially, and emit each successive value
ºWindow  º periodically subdivide items from an Observable into Observable
           windows and emit these windows rather than emitting the items
           one at a time

ºOperators selectively filtering emitted events from a source Observableº
@[http://reactivex.io/documentation/operators.html#filtering]
ºDebounce      º only emit an item from an Observable if a particular
                 timespan has passed without it emitting another item
ºDistinct      º suppress duplicate items emitted by an Observable
ºElementAt     º emit only item n emitted by an Observable
ºFilter        º emit only those items from an Observable that pass a
                 predicate test
ºFirst         º emit only the first item (or the first item that meets
                 a condition) from an Observable
ºIgnoreElementsº do not emit any items from an Observable but mirror its
                 termination notification
ºLast          º emit only the last item emitted by an Observable
ºSample        º emit the most recent item emitted by an Observable within
                 periodic time intervals
ºSkip          º suppress the first n items emitted by an Observable
ºSkipLast      º suppress the last n items emitted by an Observable
ºTake          º emit only the first n items emitted by an Observable
ºTakeLast      º emit only the last n items emitted by an Observable

ºOperators Combining multiple source Observables into a new single Observableº
@[http://reactivex.io/documentation/operators.html#combining]
ºAnd/Then/When º combine sets of items emitted by two or more Observables
                 by means of Pattern and Plan intermediaries
ºCombineLatest º when an item is emitted by either of two Observables,
                 combine the latest item emitted by each Observable via a
                 specified function and emit items based on the results
ºJoin          º combine items emitted by two Observables whenever an item
                 from one Observable is emitted during a time window defined
                 according to an item emitted by the other Observable
ºMerge         º combine multiple Observables into one by merging their
                 emissions
ºStartWith     º emit a specified sequence of items before beginning to
                 emit the items from the source Observable
ºSwitch        º convert an Observable that emits Observables into a single
                 Observable that emits the items emitted by the
                 most-recently-emitted of those Observables
ºZip           º combine multiple Observables' emissions together via a
                 function → emit single items for each input tuple

ºOperators handling Errors and helping to recover from error-notificationsº
@[http://reactivex.io/documentation/operators.html#error]
ºCatch º recover from an onError notification by continuing the sequence
         without error
ºRetry º if a source Observable sends an onError notification, resubscribe
         to it in the hope that it will complete without error

ºUtility Operators "toolbox"º
@[http://reactivex.io/documentation/operators.html#utility]
ºDelay        º shift the emissions from an Observable forward in time by
                a particular amount
ºDo           º register an action to take upon a variety of Observable
                lifecycle events
ºMaterialize  º represent both the items emitted and the notifications sent
ºDematerializeº as emitted items, or reverse this process
ºObserveOn    º specify the scheduler on which an observer will observe
                this Observable
ºSerialize    º force an Observable to make serialized calls and to be
                well-behaved
ºSubscribe    º operate upon the emissions and notifications from an
                Observable
ºSubscribeOn  º specify the scheduler an Observable should use when it is
                subscribed to
ºTimeInterval º convert an Observable that emits items into one that emits
                indications of the amount of time elapsed between emissions
ºTimeout      º mirror the source Observable, but issue an error
                notification if a particular period of time elapses without
                any emitted items
ºTimestamp    º attach a timestamp to each item emitted by an Observable
ºUsing        º create a disposable resource that has the same lifespan as
                the Observable

ºConditional and Boolean Operatorsº
evaluating one or more Observables or items emitted by Observables
@[http://reactivex.io/documentation/operators.html#conditional]
ºAll           º determine whether all items emitted by an Observable meet
                 some criteria
ºAmb           º given two or more source Observables, emit all of the items
                 from only the first of these Observables to emit an item
ºContains      º determine whether an Observable emits a particular item
                 or not
ºDefaultIfEmptyº emit items from the source Observable, or a default item
                 if the source Observable emits nothing
ºSequenceEqual º determine whether two Observables emit the same sequence
                 of items
ºSkipUntil     º discard items emitted by an Observable until a second
                 Observable emits an item
ºSkipWhile     º discard items emitted by an Observable until a specified
                 condition becomes false
ºTakeUntil     º discard items emitted by an Observable after a second
                 Observable emits an item or terminates
ºTakeWhile     º discard items emitted by an Observable after a specified
                 condition becomes false

ºMathematical and Aggregate Operatorsº
- Operators that operate on the entire sequence of items emitted by an Observable
@[http://reactivex.io/documentation/operators.html#mathematical]
ºAverageº calculate the average of numbers emitted by an Observable and
          emit this average
ºConcat º emit the emissions from two or more Observables without
          interleaving them
ºCount  º count the number of items emitted by the source Observable and
          emit only this value
ºMax    º determine, and emit, the maximum-valued item emitted by an
          Observable
ºMin    º determine, and emit, the minimum-valued item emitted by an
          Observable
ºReduce º apply a function to each item emitted by an Observable,
          sequentially, and emit the final value
ºSum    º calculate the sum of numbers emitted by an Observable and emit
          this sum

ºBackpressure Operatorsº
@[http://reactivex.io/documentation/operators/backpressure.html]
- a variety of operators that enforce particular flow-control policies:
  ºstrategiesº for coping with Observables that produce items more rapidly
  than their observers consume them

ºConnectable Observable Operatorsº
- Specialty Observables that have more precisely-controlled subscription dynamics
@[http://reactivex.io/documentation/operators.html#connectable]
ºConnect º instruct a connectable Observable to begin emitting items to its
           subscribers
ºPublish º convert an ordinary Observable into a connectable Observable
ºRefCountº make a Connectable Observable behave like an ordinary Observable
ºReplay  º ensure that all observers see the same sequence of emitted items,
           even if they subscribe after the Observable has begun emitting items

ºOperators to Convert Observablesº
ºToº convert an Observable into another object or data structure
Ops.classification 2
- Basic Operators:
  - Suppressing operators:
    - filter, take, skip, takeWhile/skipWhile, distinct, distinctUntilChanged
  - Transforming operators:
    - map, cast, startWith, defaultIfEmpty, switchIfEmpty, sorted, delay,
      repeat, scan
  - Reducing operators:
    - count, reduce, all, any, contains
  - Collection operators:
    - toList, toSortedList, toMap, toMultiMap, collect
  - Error recovery operators:
    - onErrorReturn, onErrorReturnItem, onErrorResumeNext, retry
  - Action ("stream life-cycle") operators:
    - doOnNext, doOnComplete, doOnError, doOnSubscribe, doOnDispose
- Combining Observables:
  - Merging: merge, mergeWith, flatMap
  - Concatenation: concat, concatWith, concatMap
  - Ambiguous: amb
  - Zipping
  - Combine Latest: withLatestFrom
  - Grouping: groupBy
- Multicasting, Replaying and Caching:
  (Multicasting is helpful in preventing redundant work being done by
   multiple Observers: instead it makes all Observers subscribe to a single
   stream, at least to the point where they have operations in common)
  - "Hot" operators. (TODO)
  - Automatic connection: autoConnect, refCount, share
  - replay
  - cache
- Subjects:
  - Just like mutable variables are necessary at times even though you
    should strive for immutability, Subjects are sometimes a necessary tool
    to reconcile imperative paradigms with reactive ones.
  - PublishSubject
  - Serializing Subject
  - BehaviorSubject
  - ReplaySubject
  - AsyncSubject
  - UnicastSubject
Custom Ops @[http://reactivex.io/documentation/implement-operator.html]
Awaitility(Async→Sync) Tests
- Awaitility: DSL allowing to express async results (test expectations) easily,
  removing the complexity of handling threads, timeouts, concurrency issues, ...
  that otherwise obscures test code.

- Ex 1:
  public void updatesCustomerStatus() {
    // Publish an (async) message to a message broker:
    ...
  Bºawait().atMost(5, SECONDS).until(customerStatusIsUpdated())º;
  }
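Under the hood, `await().atMost(...).until(...)` boils down to polling a condition until it holds or a deadline passes. A minimal plain-JDK sketch of that idea (class and method names here are made up for illustration; this is NOT the Awaitility API):

```java
import java.util.function.BooleanSupplier;

// Illustrative sketch of what an await().atMost(...).until(...) call does:
// poll a condition until it becomes true or a timeout elapses.
public class AwaitSketch {

    public static void awaitUntil(BooleanSupplier condition,
                                  long timeoutMillis,
                                  long pollIntervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline)
                throw new IllegalStateException(
                        "condition not met within " + timeoutMillis + " ms");
            try {
                Thread.sleep(pollIntervalMillis);   // wait between polls
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Condition becomes true ~200 ms after start:
        awaitUntil(() -> System.currentTimeMillis() - start >= 200, 5000, 10);
        System.out.println("condition met");
    }
}
```

The real library adds fluent time units, poll-interval strategies, and exception handling, but the polling loop above is the core mechanic.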
Spring Reactor
"""Why Reactor when there's already RxJava2?
   RxJava2 is java 6 while for Reactor the Spring team decided to go all in
   and focus only on Java 8. This means that you can make use of all the new
   and fancy Java 8 features.

   If you are going to use Spring 5, Reactor might be the better option.

   But if you are happy with your RxJava2, there is no direct need to migrate to Reactor."""
(Necessarily incomplete, but still quite pertinent, list of core developers and companies)
Tim Fox     :  Initiated VertX in 2012
Julien Viet :  Project lead (as of 2020), RedHat, Marseille
               He is also a core developer of Crash @[http://www.crashub.org/]

Julien Ponge:@[https://julien.ponge.org/]
               Author of VertX in Action
Many others :@[https://github.com/eclipse-vertx/vert.x/graphs/contributors]
Vert.X Summary
- Vert.X guide for java devs @[https://github.com/vert-x3/vertx-guide-for-java-devs]
- VertX maven starter        @[https://github.com/vert-x3/vertx-maven-starter]
- Examples for amqp-bridge,  @[https://github.com/vert-x3/vertx-examples]
  grpc, core, docker,
  gradle*/maven*, ignite, jca,
  jdbc.  kafka, kotlin, mail,
  metrics, mqtt, openshift3,
   redis, resteasy, rx,
  service-proxy, shell, spring,
  sync, unit, web/web-client ...
- Webºserver examplesº       @[https://github.com/vert-x3/vertx-examples/tree/master/web-examples/src/main/java/io/vertx/example/web]
  angular*, auth, authjdbc,
  blockinghandler, chat,
  cookie, cors,
  custom_authorisation, form,
  helloworld, http2, jwt,
  mongo, react, realtime, rest,
  sessions, staticsite,
  templating, upload, vertxbus
- Web/ºJDBCºserver examples @[https://github.com/vert-x3/vertx-examples/blob/master/web-examples/src/main/java/io/vertx/example/web/jdbc/Server.java]

REF: @[https://github.com/vert-x3/vertx-guide-for-java-devs/blob/3.8/intro/README.adoc]
- ºVerticleº: Bºreusable unit of deploymentº
                  - Can be passed some Gºconfigurationº like
                    credentials, network address,...
                  - can be deployed several times
                  - A verticle can deploy other verticles.

Oºverticleº1 ←────→ event─loop 1 ←──────→   1 Thread
  ^                 ^^^^^                     ^^^^^^
  │                 "input" event like        Must not handle I/O thread-blocking
  │                 network buffers,          or CPU intensive operations
  │                 timing events,            'executeBlocking' can be used
  │                 verticles messages, ...   to offload the blocking I/O operations
  │                                           from the event loop to a worker thread
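The diagram above can be sketched with plain JDK executors: one single-threaded "event loop" that must never block, plus an `executeBlocking`-style helper that offloads blocking work to a worker pool and posts the result back onto the loop. This is an illustrative sketch of the threading model, NOT Vert.x code (all names are made up):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of the verticle threading model: fast tasks run on a single
// event-loop thread; blocking work is offloaded to workers, and the
// result is delivered back on the event-loop thread.
public class EventLoopSketch {
    private final ExecutorService loop = Executors.newSingleThreadExecutor();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Run a (fast, non-blocking) task on the event loop.
    public void runOnLoop(Runnable task) { loop.submit(task); }

    // Offload blocking work, then deliver the result back on the loop thread.
    public <T> void executeBlocking(Supplier<T> blocking, Consumer<T> onResult) {
        workers.submit(() -> {
            T result = blocking.get();                  // may block: worker thread only
            loop.submit(() -> onResult.accept(result)); // back on the event loop
        });
    }

    public void shutdown() { workers.shutdown(); loop.shutdown(); }
}
```

The key invariant mirrored here is the one in the diagram: the loop thread only ever sees short, non-blocking callbacks.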
˂˂io.vertx.core.AbstractVerticle˃˃ Base Class
-Oº.start()º ← life-cycle sync/async method to be overridden
-Oº.stop ()º ← life-cycle sync/async method to be overridden
-Oº.vertxº   ← - Points to the BºVert.x environment where the verticle is deployedº
   ^^^^^^    · - provides methods to create HTTP/TCP/UDP/... servers/clients.
             ·   Ex:
             ·   │ io.vertx.core.http.HttpServer server  = thisOº.vertxº.createHttpServer();
             · - provides access to the event bus.
             ·   Ex:
             ·    ºSENDINGºVERTICLE:                        │ºRECEIVINGºVERTICLE:
             ·   ┌──────────────────────────────────────────┼────────────────────────────────────────
             ·   │ ...                                      │ ...
             ·   │Oºvertxº.eventBus()                       │ public void onMessage(
             ·   │   .request(wikiDbQueue,                  │          Message message)
             ·   │       jsonObject, options ,              │ {
             ·   │                   ^^^^^^^                │   String action = message.
             ·   │                   headers                │                headers().get("action");
             ·   │                  +payload codecs         │
             ·   │                  +timeouts               │   switch (action) {
             ·   │       ^^^^^^^^^^  ^^^^^^^                │     case "action1":
             ·   │// Usually jsonObject contains the data   │       ...
             ·   │// and an "action" header the action to   │       message.reply(
             ·   │// be executed by the receiving verticle  │           new JsonObject()
             ·   │       reply -˃ {                         │           .put("key1", value1));
             ·   │     if (reply.succeeded()) {             │       break;
             ·   │       ...                                │     case ...:
             ·   │     } else {                             │       ...
             ·   │       ...                                │     default:
             ·   │     }                                    │       message.fail(
             ·   │   });                                    │         ErrorCodes.BAD_ACTION.ordinal(),
             ·   │                                          │         "Bad action: " + action);
             ·   │                                          │   }
             ·   │                                          │ }

-Oº.config()º← - accessors to some deployment configuration to allow passing G*external configuration*
                 │ public static final String CONFIG_WIKIDB_QUEUE = "wikidb.queue";
                 │ ...
                 │ wikiDbQueue =Oºconfig()º.getString(CONFIG_WIKIDB_QUEUE, "wikidb.queue");
                 │                             ^^^^^^                      ^^^^^^^^^^^^^^
                                       or Integers, booleans               Default param
                                       complex JSON data, ...              if first is null

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Future;

public class MainVerticle extends AbstractVerticle {

  @Override
  public void start(Future˂Void˃ startFuture) {
    // async version; the sync variant takes no params: public void start()
    ...
  }
}

Event Bus
- main tool for communication between verticles using *messages*
  and one of:
  - point-to-point messaging
  - request-response messaging
  - publish / subscribe for broadcasting messages

   verticle 01                                      verticle 02
  (HTTP server)             event─bus              (DDBB client)
   │                          ║ ║                        │
   ├─── user 1234 ? ─────────→║ ║                        │
   │                          ║ ║─────── user 1234 ? ──→ │
   │                          ║ ║                        ├── ....─→
   │                          ║ ║                        │←─ ....
   │                          ║ ║←────── user 1234 ? ────┤
   │←── user 1234   ──────────║ ║                        │
        ^^^^^^^^^             ║ ║
  Messages are free-form
  strings. (JSON recommended     ^
  for multi─language support)    │

                  - It can be accessed through (simple)
                    TCP protocol for 3rd party apps
                    or exposed over general-purpose
                    messaging bridges (AMQP, Stomp,...)
                  - Supports clustering: messages can be sent to
                    verticles deployed&running in different
                    application nodes
                  - a O*SockJS* bridge allows web applications
                    to seamlessly communicate over the event bus
                    from JavaScript running in the browser by
                    receiving and publishing messages just like
                    any verticle would do.
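The two delivery styles above (point-to-point send vs publish/subscribe broadcast) can be illustrated with a toy in-JVM event bus. Plain JDK and illustrative only; the real Vert.x EventBus API differs (Message wrappers, reply handlers, codecs, clustering):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy in-JVM event bus: handlers register on a string address;
// publish() broadcasts to every handler, send() is point-to-point.
public class MiniEventBus {
    private final Map<String, List<Consumer<String>>> handlers =
            new ConcurrentHashMap<>();

    public void consumer(String address, Consumer<String> handler) {
        handlers.computeIfAbsent(address, a -> new CopyOnWriteArrayList<>())
                .add(handler);
    }

    // publish/subscribe: every registered handler receives the message
    public void publish(String address, String message) {
        handlers.getOrDefault(address, List.of())
                .forEach(h -> h.accept(message));
    }

    // point-to-point: only one handler (here: the first) receives it
    public void send(String address, String message) {
        List<Consumer<String>> list = handlers.get(address);
        if (list != null && !list.isEmpty()) list.get(0).accept(message);
    }
}
```

Vert.x additionally picks the point-to-point target in round-robin fashion and supports request-response via reply handlers, but the address-based dispatch is the same idea.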

threading conf

By default Vert.x creates 2 event-loop threads per CPU core.
VertX Threading Strategies:
   Incoming network data -˃ accepting thread "N"
   accepting thread "N"  -˃ event-loop thread: +event with data

When a verticle opens a network server and is deployed more than once,
incoming events are distributed to the verticle instances in a
round-robin fashion, which is very useful for maximizing CPU usage
with lots of concurrent networked requests.
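The round-robin distribution just described can be sketched as follows (plain JDK; in Vert.x this happens inside the server-socket accept path, not in user code):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Sketch of round-robin dispatch: N verticle instances share one
// entry point; each incoming request is handed to the next instance in turn.
public class RoundRobinDispatcher<T> {
    private final List<Consumer<T>> instances;
    private final AtomicLong counter = new AtomicLong();

    public RoundRobinDispatcher(List<Consumer<T>> instances) {
        this.instances = List.copyOf(instances);
    }

    public void dispatch(T request) {
        // pick instances in rotation: 0, 1, ..., n-1, 0, 1, ...
        int i = (int) (counter.getAndIncrement() % instances.size());
        instances.get(i).accept(request);
    }
}
```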

└ Ex.1:
  @RunWith(VertxUnitRunner.class)  ← annotation to JUnit tests to allow vertx-unit features
  public class SampleHttpServerTest {
    private BºVertx vertxº;
    @Before public void prepare() { Bºvertxº = Vertx.vertx(); }
    @After  public void finish(TestContext Oºcontextº) {
      vertx.close(Oºcontextº.asyncAssertSuccess());
    }

    @Test
    public void start_http_server(TestContext Oºcontextº) {
                // TestContext is provided by the runner.
                // It provides access to basic assertions,
                // a context to store data,
                // and several async-oriented helpers.
      Async Qºasyncº = Oºcontextº.async();
      vertx.createHttpServer()
        .requestHandler(req -> req.response().putHeader("Content-Type", "text/plain").end("Ok"))
        .listen(8080, Oºcontextº.asyncAssertSuccess(server -> {
          WebClient webClient = WebClient.create(vertx);
          webClient.get(8080, "localhost", "/").send(ar -> {
            if (ar.succeeded()) {
              HttpResponse˂Buffer˃ response = ar.result();
              Oºcontextº.assertEquals("text/plain", response.getHeader("Content-Type"));
              Oºcontextº.assertEquals("Ok", response.body().toString());
              Qºasyncº.complete();
            } else {
              Oºcontextº.fail(ar.cause());
            }
          });
        }));
    }
  }
└ Ex.2: check/test that a timer task has been called once,
        and that a periodic task has been called 3 times.

  public class WikiDatabaseVerticleTest {
    private Vertx vertx;
    @Before public void prepare(TestContext context) { vertx = ...       }
    @After  public void finish(TestContext context)  { vertx.close(...); }

    @Test /*(Bºtimeout=5000º)*/        ← optional per-test timeout (ms)
    public void async_behavior(TestContext context) {
      Async a1 = context.async();
      Async a2 = context.async(3);     ← works as a countdown that
                                         completes successfully after 3 calls.
      vertx.setTimer(100, n -> a1.complete());
      vertx.setPeriodic(100, n -> a2.countDown());
    }

    @Test
    public void crud_operations(TestContext context) {
      Async async = context.async();
      service.createPage(..., ...,
        context.asyncAssertSuccess(v1 -> {
            context.asyncAssertSuccess(json1 -> {
              async.complete();
      ...
Maven Bootstrap

Ex: minimally viable wiki written with Vert.x

   └ Features:
     - server-side rendering
     - data persistence through a JDBC connection
       and async ddbb access

   └ Dependencies:
     - Vert.x web: "elegant" APIs to deal with routing, request payloads, etc.
     - Vert.x JDBC client: asynchronous API over JDBC.
     - other libraries for HTML/md rendering

                                          ┌ o configured pom.xml:
 $º$ URL="https://github.com/vert-x3" º   │   - Maven Shade Plugin configured to create a single  
 $º$ URL="${URL}/vertx─maven─starter" º ←─┤     "fat" Jar archive with all required dependencies  
 $º$ git clone ${URL} project01       º   │   - Exec Maven Plugin to provide the exec:java goal   
 $º$ cd project01                     º   │     that in turns starts the application through the  
                                          │     Vert.x io.vertx.core.Launcher class.              
                                          │     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                          │     (equivalent to running using the vertx cli tool)
                                          │ o sample verticle
                                          │ o unit tests
                                          └ o redeploy.sh: auto compile+redeploy on code changes.
                                              (adjust $VERTICLE in script to match main verticle)
 $º$ mvn package exec:java           º ← check that maven install is correct

Tip: The SQL database modules supported by the Vert.x project do not currently
   offer anything beyond passing SQL queries (e.g., there is no
   object-relational mapper), as they focus on providing asynchronous access
   to databases. However, nothing forbids using more advanced modules from
   the community; we especially recommend checking out projects like the
   jOOQ generator for VertX.

$ mvn clean package
$ java -jar target/project01-SNAPSHOT-fat.jar
Create HttpServer

   import io.vertx.ext.web.Router;
   import io.vertx.ext.web.handler.BodyHandler;
   public class MainVerticle ºextends io.vertx.core.AbstractVerticleº {
      private io.vertx.core.http.HttpServer server;

      public void start(Future˂Void˃ startFuture) {
          Json.mapper.registerModule(new JavaTimeModule());
          FileSystem vertxFileSystem = vertx.fileSystem(); // vertx: set by the parent class before start() runs
          vertxFileSystem.readFile("swagger.json", readFile -> {
              if (readFile.succeeded()) {
                  // Swagger swagger = new SwaggerParser().parse(readFile.result().toString(Charset.forName("utf-8")));
                  // SwaggerManager.getInstance().setSwagger(swagger);
                  Router router = Router.router(vertx);
                  router.get  (ROUTE_ENTITY01+"/:id")                              .handler(this:: GetEntity01Handler);
                  router.post (ROUTE_ENTITY01       ).handler(BodyHandler.create()).handler(this::PostEntity01Handler);
                                                              └───────┬───────────┘         └───────────┬───────────┘
                                                decode POST req.body (forms,...) to         function signature:
                                                Vert.x buffer objects                       void functionName(ºRoutingContext contextº)
                  router.delete(ROUTE_ENTITY01+"/:id")                              .handler(this::DeleEntity01Handler);
                  log.info("Starting Server... Listening on "+RC.host+":"+RC.port);
                  server = vertx.createHttpServer();
                  server.requestHandler(router).listen(
                           8080,               ← Different deployments can share the port: Vert.x will round-robin
                           /* AsyncResult */ ar -> {
                            if (ar.succeeded()) {
                              LOGGER.info("HTTP server running on port 8080");
                              startFuture.complete();
                            } else {
                              LOGGER.error("Could not start a HTTP server", ar.cause());
                              startFuture.fail(ar.cause());
                            }
                          });
              } else {
                  startFuture.fail(readFile.cause());
              }
          });
      }
   }
Reusable Verticles

- resulting verticles will not have direct references to each other 
  as they will only agree on destination names in the event bus as well 
  as message formats.
- messages sent on the event bus will be encoded in JSON.

- Ex:
 MainVerticle.java      ← its sole purpose is to bootstrap the app
                          and deploy other verticles.
  package io.vertx.guides.wiki;

  public class MainVerticle extends AbstractVerticle {
    public void start(Promise˂Void˃ promise) {
      Promise˂String˃ promise01 = Promise.promise();
      vertx.deployVerticle(new WikiDatabaseVerticle(), promise01);
      promise01.future().compose(id -> {               // runs after 1st deploy succeeds
        Promise˂String˃ promise02 = Promise.promise();
        vertx.deployVerticle(
          "io.vertx.guides.wiki.HttpServerVerticle",   // deploy by class name
          new DeploymentOptions().setInstances(2),     // two instances of the verticle
          promise02);
        return promise02.future();
      }).setHandler(ar -> {
        if (ar.succeeded()) { promise.complete();       }
        else                { promise.fail(ar.cause()); }
      });
    }
  }
OpenAPI: contract-driven Dev
- "Contract Driven Development" (or API Design First approach) is a methodology
  that uses declarative API Contracts to enable developers to efficiently design,
  communicate, and evolve their HTTP APIs, while automating API implementation
  phases where possible.
VertX Cont
RxJava Integration
- SockJS:
  - Event-bus bridge allowing web apps to seamlessly communicate over the
    event bus from JavaScript running in the browser, receiving and
    publishing messages just like any verticle would do.
Angular Client
What's new
BºVert.X 4.0: (2020-12-09)º:
Other frameworks
- lightweight system for processing asynchronous jobs.
- Use Cases:
  - website that needs to run batch processes in the background
  - we receive a batch of inputs to be processed on a "best-effort"
    basis, but reply as-soon-as-possible to the client that the
    input batch is queued for processing.

- easy-to-use "wrapper" on top of Apache Artemis (ActiveMQ "next gen"):
  Async adds an abstraction layer based on the Command Pattern,
  which makes it trivial to add asynchronous processing.

- Embedded broker instance with reasonable defaults

- Ex:
  Asyncºasyncº= new Async(            ←BºCREATE one or more Queuesº
            "/opt/project1",          ← place to store persistent messages
            new QueueConfig(
               "MESSAGES_QUEUE",      ← queue 1 name
               new CommandListener(),
               5),                    ← number of listener threads
            new QueueConfig(
               "ERRORS_QUEUE",        ← queue 2 name (no limit on the number
               new ErrorListener(),     of queues)
               ...));

   public class HelloCommand             ←BºCreate a commandº
       extends Command {
       private String message;
       public HelloCommand(String message) {
         this.message = message;
       }
       public HelloCommand() {}          ← Rºdefault constructor required for
                                            de-serialization (so fields cannot be final)º
       public void execute() {
         System.out.println(message);
       }
   }

   for(int i = 0; i < 100; i++){         ←Bºsending commands asyncº
      ºasyncº.send("MESSAGES_QUEUE",
         new HelloCommand("Hello Number "+ i));
   }
   Output will be similar to
   → Hello Number 0
   → Hello Number 1
   → ...
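The Command-pattern idea behind this API can be sketched with plain JDK primitives: producers enqueue command objects, N listener threads dequeue and call `execute()`. No broker and no persistence here — just the shape of the abstraction; all names are illustrative, NOT the JavaLite API:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal command-queue sketch: send() enqueues a command and returns
// immediately; a pool of listener threads consumes and executes commands.
public class CommandQueueSketch {
    public interface Command { void execute(); }

    private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();
    private final ExecutorService listeners;

    public CommandQueueSketch(int listenerThreads) {
        listeners = Executors.newFixedThreadPool(listenerThreads);
        for (int i = 0; i < listenerThreads; i++) {
            listeners.submit(() -> {
                try {
                    while (true) queue.take().execute();   // consume commands
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();    // shutdown signal
                }
            });
        }
    }

    public void send(Command c) { queue.add(c); }   // async: returns at once

    public void shutdown() { listeners.shutdownNow(); }
}
```

JavaLite Async layers persistence (Artemis journal), XML serialization, and retry semantics on top of this same producer/listener shape.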

   List topCommands                     ←BºPeek (vs Consume) 3 "top"º
      = ºasyncº.getTopCommands(            commands from "ERRORS_QUEUE"
           3, "ERRORS_QUEUE");

- Commands can also be read and processed synchronously (one-by-one)
  from an individual queue, without a listener.
  Ex: Qºyou do not want to process errors automaticallyº. To do so:

  ErrorCommand errorCommand =           ←BºConsume one messageº
      (ErrorCommand)ºasyncº.receiveCommand("ERRORS_QUEUE");
  ... // Process manually

BºText vs Binary messagesº
  - To be compatible with JMS, the communication protocol is limited to:
    - javax.jms.TextMessage   ← Default mode.
    - javax.jms.BytesMessage  ← async.setBinaryMode(true);

    In both cases, the Rºserialization of a command is first done to XMLº
    with the use of XStream.
  - If a given command has a transient field that must NOT be serialized,
    use the field annotation @XStreamOmitField to ignore it.

  -RºWARN:º Do not switch between modes while having persistent
            messages stored in your queues.

BºCommands with DB accessº
  - If queue processing requires a database connection, DBCommandListener
    can be used:
    Async async = new Async(filePath, false, new 
        QueueConfig("MESSAGES_QUEUE", new 
        DBCommandListener(                 ← If JNDI connection is setup, the 
         "java:comp/env/jdbc/conn01"), 5)    listener will find and open it
   );                                        Use your tomcat/Jboss/... container
                                             documentation to set up it properly
Bº(Artemis) Config APIº
  - For complex app configuration, the underlying Artemis API
    can be used:
       artemisConfig = async.getConfig();

- See also filequeue in this map. It's faster, but doesn't support
  queueing to DDBBs.
- KISS alternative using MVStore
- All producers and consumers run within a JVM.
- H2 MVStore DB used for storage.
- Queue items are BºPOJOs serialized into Json using jacksonº.
-Gºfaster than JavaLite due to performance shortcutº:
  -BºFile Queue will transfer queued items directly to consumersº
   Bºwithout hitting the database provided there are consumers  º
   Bºavailable, otherwise, message will be persistedº
-RºDoesn't support persistence to JNDI DDBBº
- Fixed and exponential back-off retry strategies are available.

  - maven/gradle package dependency:

  - Implement POJO extending FileQueueItem
  - Implement consume(FileQueueItem) on ˂˂Consumer˃˃ to process items
  - Instantiate a FileQueue object and call config() to configure
  - Call startQueue() to start the queue
  - Call stopQueue() to stop the queue processing
  - Call FileQueue.destroy() to shutdown all static threads (optional)

BºExample Implementation:º
  └ Queue usage example:
    FileQueue queue = FileQueue.fileQueue();
    FileQueue.Config config = FileQueue.      // (config creation elided: it takes
          new TestConsumer()                  //  the queue name/path, the item class
                                              //  and the consumer; see the README)
        .maxQueueSize(MAXQUEUESIZE)           // ← queueItem will block until a slot becomes
                                              //    available or ExceptionTimeout thrown
        .maxRetries(0)                        // ← 0 == infinite retries
        .persistRetryDelay(...);              // ← delay between DDBB scans
    queue.startQueue(config);                 // ← start queue
    for (int i = 0; i < ROUNDS; i++)
      queue.queueItem(                        // ← submit items
        new TestFileQueueItem(i));
    queue.stopQueue();                        // ← stop queue

  └ Consumer implementation:
    static class TestConsumer implements Consumer {
        public TestConsumer() { }
        public Result consume(FileQueueItem item)
                throws InterruptedException {
            try {
                TestFileQueueItem retryFileQueueItem =
                    (TestFileQueueItem) item;
                if (retryFileQueueItem.getTryCount() == RETRIES)
                    return Result.SUCCESS;
                return Result.FAIL_REQUEUE;
            } catch (Exception e) {
                logger.error(e.getMessage(), e);
                return Result.FAIL_NOQUEUE;
            }
        }
    }

  └ FileQueueItem implementation:

    import com.stimulussoft.filequeue.*;
    static class TestFileQueueItem extends FileQueueItem {
      Integer id;
      public TestFileQueueItem() { super(); }
      private TestFileQueueItem(Integer id) {
          this.id = id;
      }
      public String toString() { return String.valueOf(id); }
      public Integer getId() { return id; }
      public void setId(Integer id) { this.id = id; }
    }

  └ File Caching:
    - If there is a need to cache a file to disk or perform resource
      availability checks before items are placed on the queue,
      implement availableSlot() on the QueueCallback interface. This method
      is called as soon as a slot becomes available, just before the item
      is placed on the queue. It may be used to cache a file to disk, or
      to perform resource availability pre-checks (e.g. disk space check).
Cost of software failures
$312 billion per year: global cost of software bugs (2013)
$300 billion dealing with the Y2K problem

$440 million lost by Knight Capital Group Inc. in 30 minutes, August 2012
$650 million lost by NASA Mars missions in 1999; unit-conversion bug
$500 million lost on Ariane 5's maiden flight in 1996; 64-bit to 16-bit conversion bug
"$Nightmare" billions: Boeing 737 MAX

2011: Software caused 25% of all medical device recalls.
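The Ariane 5 entry above is exactly the class of bug that stronger compile-time checking targets: a 64-bit value silently narrowed to 16 bits. In Java, the explicit cast compiles without complaint and the high-order bits are simply discarded:

```java
// The Ariane 5 class of bug in miniature: narrowing a 64-bit value
// to a 16-bit one. Java performs double -> int -> short conversion;
// values outside the short range wrap silently.
public class NarrowingDemo {
    public static short toSensorWord(double velocity) {
        return (short) velocity;   // 64-bit -> 16-bit, overflow is silent
    }

    public static void main(String[] args) {
        System.out.println(toSensorWord(30000.0));  // fits in a short: 30000
        System.out.println(toSensorWord(70000.0));  // wraps: 4464, not 70000
    }
}
```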
Checker framework
Java 8+
ºfix errors at compile timeº (vs later on at execution/runtime)

º COMPARED TO ALTERNATIVES (SpotBugs, Infer, Jlint, PMD, ...)º
                 ┌─────────────┬────────┬────────┐                 ┌──────────────────┬─────────────────────┐
                 │ Null Pointer│        │        │                 │ Verification     │ Bug─Finding         │
                 │    errors   │ False  │Annotat.│                 │ (ºChecker FWº,)  │ (Infer, SpotBugs,   │
                 │             │        │        │                 │                  │  SonarQube,...      │
                 │Found│ Missed│warnings│written │  ┌──────────────┼──────────────────┼─────────────────────┤
  ┌──────────────┼─────┼───────┼────────┼────────┤  │Goal          │ prove that       │ find some bugs      │
  │ºChecker FW.º │9    │ 9     │  4     │  35    │  │              │ no bug exists    │ at "low cost"       │
  ├──────────────┼─────┼───────┼────────┼────────┤  ├──────────────┼──────────────────┼─────────────────────┤
  │SpotBugs      │0    │ 9     │  1     │  0     │  │Check specific│ user provided    │ infer likely specs  │
  ├──────────────┼─────┼───────┼────────┼────────┤  │specifications│                  │                     │
  │Jlint         │0    │ 9     │  8     │  0     │  ├──────────────┼──────────────────┼─────────────────────┤
  ├──────────────┼─────┼───────┼────────┼────────┤  │False         │ None!!!          │ acceptable          │
  │PMD           │0    │ 9     │  0     │  0     │  │negatives     │                  │                     │
  ├──────────────┼─────┼───────┼────────┼────────┤  ├──────────────┼──────────────────┼─────────────────────┤
  │Eclipse 2017  │0    │ 9     │  8     │  0     │  │False         │ manually supress │ heuristics focus on │
  ├──────────────┼─────┼───────┼────────┼────────┤  │positives     │ warnings         │ most important bugs │
  │IntelliJ      │0    │ 9     │  1     │  0     │  ├──────────────┼──────────────────┼─────────────────────┤
  │+@NotNull 2017│3    │ 6     │  1     │ 925+8  │  │Downside      │ user burden      │ missed bugs         │
  └──────────────┴─────┴───────┴────────┴────────┘  └──────────────┴──────────────────┴─────────────────────┘

RºPROBLEM:º                                          │BºSOLUTION:º
   STANDARD JAVA TYPE SYSTEM IS NOT GOOD ENOUGH      │  Java 8+ allows to compile programs
   - Next examples compile but fail at runtime:      │  using Oº"PLUGGABLE TYPE SYSTEMs"º,
     Ex.1:                                           │  allowing to apply stricter checks
       System.console().readLine(); ←RºNullPointerº  │  than default ones in compiler like
     Ex.2:                                           │  Ex:
       Collections.emptyList()                       │  $ javac º-processor NullnessCheckerº MyFile.java
               .add("one"); ←RºUnsupported Operationº│
      Ex.3:                                           │   PLUGGABLE TYPE SYSTEM COMPILATION SCHEMA:
       Date key1 = new Date();                       │           (1)           No errors (2)
       myMap.put(key1, "now");                       │    Source ───→ Compiler ────┬───→ Executable
       myMap.get(key1);    ←  returns "now"          │      ^            │         │(2)       ^
       key1.setSeconds(0); ←RºMutate keyº            │      │            v         v          │
       myMap.get(key1);    ←Rºreturns nullº          │      │         Standard  OºOptionalº   │ Guaranteed
                                                     │      │         Compiler  OºType    º───┘ Behaviour
                                                     │      │         Errors    OºChecker º
                                                     │      │                      │
                                                     │      │                      v
                                                     │      └────────────────── Warnings :
                                                      │     (2) pluggable type system allows generation
                                                     │         of executable to allow CI continue the
                                                     │         pipeline with further tests (functional
                                                     │         testing, configuration testing, ...)
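Ex.3 above, made runnable: mutating an object after using it as a HashMap key changes its hashCode, so the entry can no longer be found. Plain javac accepts this without a murmur; the Checker Framework's @Immutable annotation (shown later) is what would reject the mutation at compile time:

```java
import java.util.Date;
import java.util.HashMap;
import java.util.Map;

// Runnable version of Ex.3: a mutable object used as a HashMap key.
// After mutation the key's hashCode no longer matches the stored hash,
// so lookups fail even though the key object is still in the map.
public class MutableKeyDemo {
    public static String lookupAfterMutation() {
        Map<Date, String> map = new HashMap<>();
        Date key = new Date(1_000_000_000_000L);
        map.put(key, "now");
        String before = map.get(key);     // found: "now"
        key.setTime(1_000_000_045_000L);  // mutate the key in place
        String after = map.get(key);      // null: hashCode changed
        return before + "/" + after;
    }

    public static void main(String[] args) {
        System.out.println(lookupAfterMutation());  // prints "now/null"
    }
}
```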

º ___ _   _ ____ _____  _    _     _        _  _____ ___ ___  _   _ º
º|_ _| \ | / ___|_   _|/ \  | |   | |      / \|_   _|_ _/ _ \| \ | |º
º | ||  \| \___ \ | | / _ \ | |   | |     / _ \ | |  | | | | |  \| |º
º | || |\  |___) || |/ ___ \| |___| |___ / ___ \| |  | | |_| | |\  |º
º|___|_| \_|____/ |_/_/   \_\_____|_____/_/   \_\_| |___\___/|_| \_|º

(See new releases/versions at

 ºSTEP 01:º                           │ ºSTEP 02:º
  Add next pom.xml dependencies like: │  tweak ºmaven-compiler-pluginº to use
  ˂dependency˃                        │  Checker Framework as a pluggable Type System:
      ˂groupId˃                       │  ˂plugin˃
        org.checkerframework          │    ˂artifactId˃ºmaven-compiler-pluginº˂/artifactId˃
      ˂/groupId˃                      │    ˂version˃3.6.1˂/version˃
      ˂artifactId˃                    │    ˂configuration˃
        checker-qual                  │      ˂source˃1.8˂/source˃
      ˂/artifactId˃                   │      ˂target˃1.8˂/target˃
      ˂version˃2.11.0˂/version˃       │      ˂compilerArguments˃
  ˂/dependency˃                       │        ˂Xmaxerrs˃10000˂/Xmaxerrs˃
  ˂dependency˃                        │        ˂Xmaxwarns˃10000˂/Xmaxwarns˃
      ˂groupId˃                       │      ˂/compilerArguments˃
        org.checkerframework          │     º˂annotationProcessors˃º ← "==" javac -processor ...
      ˂/groupId˃                      │        ˂annotationProcessor˃
      ˂artifactId˃                    │      org.checkerframework.checker.nullness.NullnessChecker
        checker˂                      │         ˂/annotationProcessor˃
      /artifactId˃                    │         ˂annotationProcessor˃
      ˂version˃2.11.0˂/version˃       │      org.checkerframework.checker.interning.InterningChecker
  ˂/dependency˃                       │         ˂/annotationProcessor˃
  ˂dependency˃                        │         ˂annotationProcessor˃
      ˂groupId˃                       │      org.checkerframework.checker.fenum.FenumChecker
        org.checkerframework          │         ˂/annotationProcessor˃
      ˂/groupId˃                      │         ˂annotationProcessor˃
      ˂artifactId˃                    │      org.checkerframework.checker.formatter.FormatterChecker
        jdk8                          │         ˂/annotationProcessor˃
      ˂/artifactId˃                   │     º˂/annotationProcessors˃º
      ˂version˃2.11.0˂/version˃       │      ˂compilerArgs˃
  ˂/dependency˃                       │        ˂arg˃-AprintErrorStack˂/arg˃
                                      │        ˂arg˃-Awarns˂/arg˃
                                      │      ˂/compilerArgs˃
                                      │    ˂/configuration˃
                                      │  ˂/plugin˃

(ºSTEP 03:º Manually add extended type annotations to your java code)

º _   _ ____ ____    _    ____ _____ º
º| | | / ___/ ___|  / \  / ___| ____|º
º| | | \___ \___ \ / _ \| |  _|  _|  º
º| |_| |___) |__) / ___ \ |_| | |___ º
º \___/|____/____/_/   \_\____|_____|º

- BºAvoiding Nullsº

 ºCHECKS  ON TYPESº                              │ºCHECKS ON FUNCTION DECLARATIONº
                                                 │                   ┌────┬────┬───────────────────────────┐
                                                 │                   │FUNC│FUNC│DESCRIPTION                │
  private static int func1                       │                   │PRE─│POST│                           │
    (º@NonNullº String[] args)                   │                   │COND│COND│                           │
  {                                              │ ┌─────────────────┼────┼────┼───────────────────────────┤
      return args.length;                        │ │@RequiresNonNull │X   │    │variables areºexpectedº to │
  }                                              │ │                 │    │    │be non─null when invoked.  │
                                                 │ ├─────────────────┼────┼────┼───────────────────────────┤
  public static void main                        │ │@EnsuresNonNull  │    │X   │variables areºguaranteedºto│
    (º@Nullableº String[] args) {                │ │                 │    │    │be non─null on return.     │
      ...                                        │ ├─────────────────┼────┼────┼───────────────────────────┤
      func1(args);                               │ │@EnsuresNonNullIf│    │X   │variables areºguaranteedºto│
  }         ^^^^                                 │ │                 │    │    │benon─null on ret.true/fals│
      [WARNING] ... [argument.type.incompatible] │ └─────────────────┴────┴────┴───────────────────────────┘
       incompatible types in argument.           │
       ºfound    : nullº                         │
       ºrequiredº: @Initializedº@NonNullº...     │

- BºConvert String constants into Safe Enum with Fenumº
                                                (Fake enum)
  static final @Fenum("country") String ITALY = "IT";
  static final @Fenum("country") String US = "US";
  static final @Fenum("planet") String MARS = "Mars";
  static final @Fenum("planet") String EARTH = "Earth";

  void function1(@Fenum("planet") String inputPlanet){
      System.out.println("Hello " + inputPlanet);

  public static void main(String[] args) {
      obj.function1(US);      ←----  [WARNING] ...
  }                                   incompatible types in argument.
                                       found   : @Fenum("country") String
                                       required: @Fenum("planet") String
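When you control the API, a real `enum` gives the same guarantee without any checker (@Fenum exists precisely for String constants you cannot change). A minimal sketch with invented names:

```java
public class EnumDemo {
    enum Planet  { MARS, EARTH }
    enum Country { IT, US }

    // The compiler itself rejects greet(Country.US): incompatible types.
    static String greet(Planet p) {
        return "Hello " + p;
    }

    public static void main(String[] args) {
        System.out.println(greet(Planet.MARS));   // Hello MARS
        // greet(Country.US);   // ← does not compile
    }
}
```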

- BºRegular Expressionsº
  @Regex(1) private static String FIND_NUMBERS = "\\d*";
  ^^^^^^^^^                                      ^^^^^^
  Force String variable                       [WARNING] ...
  to store a regex with                       incompatible types in assignment.
  at least one matching                         found   : @Regex String
  group                                         required: @Regex(1) String
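The warning above fires because "\\d*" contains zero capturing groups. A version that would satisfy @Regex(1), shown as plain JDK code (no checker dependency; `firstNumber` is a hypothetical helper):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    // Valid regex with at least one capturing group, as @Regex(1) requires:
    // group 1 captures the digits.
    static final String FIND_NUMBERS = "(\\d+)";

    static String firstNumber(String input) {
        Matcher m = Pattern.compile(FIND_NUMBERS).matcher(input);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(firstNumber("port 8080"));  // 8080
    }
}
```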

- BºValidating tainted (non-trusted) inputº

   String validate (String sqlInput) {
      // Do any suitable checks, throw on error
      @SuppressWarnings("tainting")      ← "swear" that developer got sure
      @Untainted String result = ...;       of input correctness
      return result;

  void execSQL(º@Untaintedº String sqlInput) {

  public static void main(String[] args) {
      obj.execSQL(args[0]);            ← warning at compile time
      obj.execSQL(validate(args[0]));  ← OK: validate un-taints the input
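A run-time analogue of what @Untainted enforces statically — the checks inside `validate` are whatever suits your domain; the whitelist regex below is just an illustrative assumption:

```java
public class TaintDemo {
    // Reject anything that could escape a SQL literal; only then does the
    // value count as "untainted". (Illustrative whitelist, not a real
    // SQL-injection defense — use prepared statements for that.)
    static String validate(String sqlInput) {
        if (!sqlInput.matches("[A-Za-z0-9_]+")) {
            throw new IllegalArgumentException("tainted input: " + sqlInput);
        }
        return sqlInput;
    }

    public static void main(String[] args) {
        System.out.println(validate("user_42"));
    }
}
```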

- BºMark as Immutableº
 º@ImmutableºDate date = new Date();
  date.setSeconds(0);   ← Rºcompile-time errorº

-ºAvoiding (certain) concurrency errorsº

  Lock Checker enforces a locking discipline:
  "which locks must be held when a given operation occurs"

                                              │                 ┌────┬────┬───────────────────────────┐
  º@GuardedBy("lockexpr1","lockexpr2",...)º   │                 │FUNC│FUNC│DESCRIPTION                │
             int var1 = ....;                 │                 │PRE─│POST│                           │
   ^^^^^^^^^^                                 │                 │COND│COND│                           │
  a thread may dereference the value referred │┌────────────────┼────┼────┼───────────────────────────┤
  to by var1 only when the thread holds all   ││@Holding        │X   │    │All the given lock exprs   │
  the locks that ["lockexpr1",...] currently  ││(String[] locks)│    │    │are held at method call    │
  evaluates to.                               │├────────────────┼────┼────┼───────────────────────────┤
                                              ││@EnsuresLockHeld│    │X   │Ensures locks are locked on│
                                              ││(String[] locks)│    │    │return, ex. lock acquired  │
                                              ││                │    │    │by ReentrantLock.lock().   │
                                              │├────────────────┼────┼────┼───────────────────────────┤
                                              ││@EnsuresLockHeld│    │X   │Ensures locks are locked on│
                                              ││If              │    │    │return if the method       │
                                              ││(String[] locks)│    │    │returns true|false, ex.    │
                                              ││                │    │    │lock conditionally acquired│
                                              ││                │    │    │by ReentrantLock.tryLock().│
                                              │└────────────────┴────┴────┴───────────────────────────┘
  │º@LockingFreeº      │method does NOT acquire│release locks:         │
  │                    │· it is not synchronized,                      │
  │                    │· it contains NO synchronized blocks           │
  │                    │· it contains no calls to lock│unlock methods  │
  │                    │· it contains no calls to methods that are not │
  │                    │  themselves @LockingFree                      │
  │                    │(@SideEffectFree implies @LockingFree)         │
  │º@ReleasesNoLocksº  │· method maintains a strictly                  │
  │                    │  nondecreasing lock hold count                │
  │                    │  on the current thread for any locks          │
  │                    │  held at method call.                         │
  │º@EnsuresLockHeldº  │method acquires new locks                      │
  │º@EnsuresLockHeldIfº│(default if no @LockingFree│@MayReleaseLocks│  │
  │                    │@SideEffectFree│@Pure used).                   │
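The discipline the Lock Checker verifies, sketched with plain `java.util.concurrent` (no checker dependency; `LockDemo` and its methods are invented for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;   // conceptually @GuardedBy("lock")

    // Analogue of a @Holding("lock") precondition: callers must already
    // hold the lock. (assert is a no-op unless -ea is set.)
    private void incrementLocked() {
        assert lock.isHeldByCurrentThread();
        counter++;
    }

    public int increment() {
        lock.lock();           // satisfies the precondition before the call
        try {
            incrementLocked();
            return counter;
        } finally {
            lock.unlock();     // non-decreasing hold count restored on exit
        }
    }
}
```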

-BºFormat String Checkerº
  - prevents use of incorrect format strings in System.out.printf,....

    void printFloatAndInt
         (º@Format({FLOAT, INT})º String Oºformatº)
      System.out.printf(Oºformatº, 3.1415, 42);
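The mismatch the checker catches at compile time otherwise surfaces at run time as an IllegalFormatException. A plain-Java sketch of a format string that matches º@Format({FLOAT, INT})º (no checker dependency; `label` is a hypothetical helper, Locale.ROOT pins the decimal separator):

```java
import java.util.Locale;

public class FormatDemo {
    // One %f conversion followed by one %d conversion,
    // i.e. what @Format({FLOAT, INT}) asserts about the string.
    static String label(double x, int n) {
        return String.format(Locale.ROOT, "%.2f / %d", x, n);
    }

    public static void main(String[] args) {
        System.out.println(label(3.1415, 42));  // 3.14 / 42
    }
}
```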
-ºI18n Format Checker examplesº
  MessageFormat.format("{0} {1}", 3.1415);
                              second argument missing
  MessageFormat.format("{0, time}", "my string");
                                    cannot be formatted
                                    as Time type.
  MessageFormat.format("{0, thyme}", new Date());
                            unknown format type

  MessageFormat.format("{0, number, #.#.#}", 3.1415);
                              subformat is invalid.
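For contrast with the broken calls above, a well-formed pattern (placeholder index matches the argument list, "#.##" is a valid number subformat; Locale.ROOT pins the decimal separator so the output is deterministic — `fmt` is an invented helper):

```java
import java.text.MessageFormat;
import java.util.Locale;

public class I18nDemo {
    static String fmt(double value) {
        MessageFormat mf = new MessageFormat("pi = {0,number,#.##}", Locale.ROOT);
        return mf.format(new Object[] { value });
    }

    public static void main(String[] args) {
        System.out.println(fmt(3.1415));  // pi = 3.14
    }
}
```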

-ºProperty File Checker!!!!º RºTODOº
  -ºIt ensures that used keys are found in the corresponding º
   ºproperty file or resource bundle.º

-ºGUI Effect Checkerº
  - It is difficult for a programmer to remember
    which methods may be called on which thread(s).
    (Main GUI thread or others)
   Checker types the method as if:
   - It accesses no UI elements (and may run on any thread);
   - It may access UI elements  (and must run on the UI thread)

-ºInternational System (SI) UNIT annotationsº:
  @Acceleration: meters per second squared @mPERs2
  @Angle       : Radians @radians
                 Degrees @degrees
  @Area        : square millimeters @mm2,
                 square meters @m2
                 square kilometers @km2
  @Current     : Ampere @A
  @Length      : Meters @m
                 millimeters @mm
                 kilometers @km
  @Luminance   : Candela @cd
  @Mass        : kilograms @kg
                     grams @g
  @Speed       : meters per second   @mPERs
                 kilometers per hour @kmPERh
  @Substance   : Mole @mol
  @Temperature : Kelvin @K
                 Celsius @C
  @Time        : seconds @s
                 minutes @min
                 hours @h

-º@Unsigned/@Signedº← guarantees values are not mixed

-ºtype alias or typedefº
  share same representation as another type
  but is conceptually distinct from it.
  Ex 1: make sure that Strings representing addresses
        and passwords are NOT mixed.
  Ex 2: make sure that integers used for meters are
        not mixed with integers used for centimeters.

  @NonNull List˂String˃
  List˂@NonNull String˃
  @Regex String validation = "(Java|JDK) [7,8]"


SPARTA (anti-malware)
- Aimed at preventing malware from appearing in an app store.
- provides an information-flow type-checker customized to Android
  but can also be applied to other domains.
  The paper "Collaborative verification of information flow for a
  high-assurance app store" appeared in CCS 2014.
Javadoc tags
-----------------------+-------------------------------------+--------------------------+-------
Tag ⅋ Parameter        | Usage                               | Applies to               | Since
-----------------------+-------------------------------------+--------------------------+-------
@author John Smith     | Describes an author.                | Class, Interface, Enum   |
-----------------------+-------------------------------------+--------------------------+-------
@version version       | Provides software version entry.    | Class, Interface, Enum   |
                       | Max one per Class or Interface.     |                          |
-----------------------+-------------------------------------+--------------------------+-------
@since since-text      | Describes when this functionality   | Class, Interface, Enum,  |
                       | has first existed.                  | Field, Method            |
-----------------------+-------------------------------------+--------------------------+-------
@see reference         | Provides a link to other element    | Class, Interface, Enum,  |
                       | of documentation.                   | Field, Method            |
-----------------------+-------------------------------------+--------------------------+-------
@param name descrip    | Describes a method parameter.       | Method                   |
-----------------------+-------------------------------------+--------------------------+-------
@return description    | Describes the return value.         | Method                   |
-----------------------+-------------------------------------+--------------------------+-------
@exception class desc  | Describes an exception that may     | Method                   |
@throws class desc     | be thrown from this method.         |                          |
-----------------------+-------------------------------------+--------------------------+-------
@deprecated descr      | Describes an outdated method.       | Class, Interface, Enum,  |
                       |                                     | Field, Method            |
-----------------------+-------------------------------------+--------------------------+-------
{@inheritDoc}          | Copies the description from the     | Overriding Method        | 1.4.0
                       | overridden method.                  |                          |
-----------------------+-------------------------------------+--------------------------+-------
{@link reference}      | Link to other symbol.               | Class, Interface, Enum,  |
                       |                                     | Field, Method            |
-----------------------+-------------------------------------+--------------------------+-------
{@value #STATIC_FIELD} | Return the value of a static field. | Static Field             | 1.4.0
-----------------------+-------------------------------------+--------------------------+-------
{@code literal}        | Formats literal text in the code    | Class, Interface, Enum,  | 1.5.0
                       | font. It is equivalent to           | Field, Method            |
                       | ˂code˃{@literal}˂/code˃.            |                          |
-----------------------+-------------------------------------+--------------------------+-------
{@literal literal}     | Denotes literal text. The enclosed  | Field, Method            | 1.5.0
                       | text is interpreted as not          |                          |
                       | containing HTML markup or nested    |                          |
                       | javadoc tags.                       |                          |
-----------------------+-------------------------------------+--------------------------+-------

Example:
/**
 * Short one line description.
 *
 * Longer description. ...here.
 *
 * And even more explanations to follow
 * in consecutive paragraphs.
 *
 * @author John Bla
 * @param variable Description ....
 * @return Description ....
 */
public int methodName (...) {
    // method body with a return statement
}


- alternatives to SonarQube include:
  - Facebook Infer @[http://fbinfer.com/]
    (Static analysis Java/C/...)
  - Scrutinizer:
  - SpotBugs:
  - Eclipse Static Code Analysis:
    Eclipse → Properties → Java → Compiler → Errors/Warnings → Null analysis:
      Null pointer access
      Potential null pointer access
      Redundant null check:
        x Include 'assert' in null analysis
        x Enable annotation-based null analysis
          Violation of null specification
          Conflict between null annotations and null inference
          Unchecked conversion from non-annotated type to @NonNull type
          Problems detected by pessimistic analysis for free type variables
          Unsafe "@NonNull" interpretation of the free type variable from library
          Redundant null annotation:
          "@NonNull" parameter not annotated in overriding method
          Missing "@NonNullByDefault" annotation on package
          x Use default annotations for null specifications (configure)
          x Inherit null annotations
          x Enable syntactic null analysis for fields
      x Treat above errors like fatal compile errors (make compiled code not executable)

 What Is It?
  JDepend traverses Java class and source file directories and
  generates ºdesign-quality metrics for each Java packageº
 ºin terms of its extensibility, reusability, and maintainabilityº
 ºto effectively manage and control package dependencies.º

AssertJ (Fluent Assertions)
- AssertJ is composed of several modules:
  - A core module to provide assertions for JDK types (String, Iterable, Stream, Path, File, Map…)
  - A Guava module to provide assertions for Guava types (Multimap, Optional…)
  - A Joda Time module to provide assertions for Joda Time types (DateTime, LocalDateTime)
  - A Neo4J module to provide assertions for Neo4J types (Path, Node, Relationship…)
  - A DB module to provide assertions for relational database types (Table, Row, Column…)
  - A Swing module providing a simple and intuitive API for functional testing of Swing user interfaces

  // entry point for all assertThat methods and utility methods (e.g. entry)
  import static org.assertj.core.api.Assertions.*;

  assertThat(frodo.getName()).isEqualTo("Frodo");  // ← basic assertions
  assertThat(frodo).isNotEqualTo(sauron);

  assertThat(frodo.getName())        // ← chaining string-specific assertions
      .startsWith("Fro")
      .endsWith("do")
      .isEqualToIgnoringCase("frodo");

  assertThat(fellowshipOfTheRingList)  // ← collection-specific assertions
      .hasSize(9)                      //   (there are plenty more)
      .contains(frodo, sam)
      .doesNotContain(sauron);

  assertThat(frodo.getAge())
      .as("check %s's age", frodo.getName())  // ← as() describes the test; it is
      .isEqualTo(33);                         //   shown before the error message

  assertThatThrownBy(() -˃ {           // ← exception assertion (standard style)
      throw new Exception("boom!");
  }).hasMessage("boom!");

  Throwable thrown = catchThrowable(() -˃ {  // ← exception assertion (BDD style)
      throw new Exception("boom!");
  });
  assertThat(thrown).hasMessageContaining("boom");

  assertThat(fellowshipOfTheRingList)
      .extracting(TolkienCharacter::getName)   // ← 'extracting' feature on Collection
      .doesNotContain("Sauron", "Elrond");

  assertThat(fellowshipOfTheRingList)
      .extracting("name", "age", "race.name")  // ← extract multiple values at once,
      .contains(                               //   grouped in tuples
          tuple("Boromir",   37, "Man"   ),
          tuple("Sam"    ,   38, "Hobbit"),
          tuple("Legolas", 1000, "Elf"   ));

  assertThat(fellowshipOfTheRingList)
      .filteredOn(fellow -˃ fellow.getName().contains("o"))  // ← filter before asserting
      .containsOnly(aragorn, frodo);

  assertThat(fellowshipOfTheRingList)
      .filteredOn(fellow -˃ fellow.getName().contains("o"))  // ← combine filtering
      .containsOnly(aragorn, frodo)                          //   and extraction
      .extracting(fellow -˃ fellow.getRace().getName())
      .contains("Hobbit", "Elf");

  // and many more assertions:
  // iterable, stream, array, map, dates, path, file, numbers, predicate, optional ...
Amazon CodeGuru
- Powered by AI (machine learning).
- CodeGuru consists of two components:
  - Amazon CodeGuru Profiler:
    helps developers find an application's most expensive lines
    of code, along with specific visualizations and recommendations
    on how to improve code to save money.
  - Amazon CodeGuru Reviewer:
    helps enhance the quality of code by scanning for critical issues,
    identifying bugs, and recommending how to remediate them.

  ┌→ Write Code
  |    |
  |    v
  |  Review Code  ← CodeGuru Reviewer
  |    |
  |    v
  |  Test App     ← CodeGuru Profiler
  |    |
  |    v
  |  Deploy App
  |    |
  |    v
  |  Run App      ← CodeGuru Profiler
  |    |

- Profiler supports applications written in
  Java virtual machine (JVM) languages such as Clojure,
  JRuby, Jython, Groovy, Kotlin, Scala, and Java.
- Reviewer’s bug-fixing recommendations currently support
Java code stored in GitHub, AWS CodeCommit, or Bitbucket. 
- (compiler) checked vs unchecked (unchecked: Error, RuntimeException and their subclasses).
- Checked: All except Error, RuntimeException and their subclasses
- Error: Exceptional conditions external to the application.
└─ java.lang.Throwable   ← Only instances of this (sub/)class are thrown
│                       in JVM, can be thrown in throw statement or can
│                       be an argument in catch clause.
├─   java.lang.Exception
│    │
│    ├─Oºjava.lang.RuntimeExceptionº(non─checked)  ← Most common error raised by
│    │                                               developer code
│    │
│    └─  java.lang.*Exception       (checked)       ←RºDon't useº: checked exceptions end up
│                                                     being converted to RuntimeException
│                                                     and bloat the code.
└─   java.lang.Error                (non─checked)  ← serious problems that app code
                                                    should not try to catch.
                                                    ThreadDeath error, though a "normal" condition,
                                                    is also a subclass of Error because most apps
                                                    should not try to catch it.
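The hierarchy above in code: an unchecked exception needs no `throws` clause but can still be caught explicitly. A minimal sketch (invented class and method names):

```java
public class ExceptionDemo {
    // Unchecked: NumberFormatException extends RuntimeException,
    // so no throws clause is required.
    static int parse(String s) {
        return Integer.parseInt(s);
    }

    static int parseOrDefault(String s, int dflt) {
        try {
            return parse(s);
        } catch (NumberFormatException e) {  // caught even though unchecked
            return dflt;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrDefault("41", 0) + 1);  // 42
    }
}
```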

ºCapture an Exception stack trace as a String:º
StringWriter writer = new StringWriter();
PrintWriter printWriter = new PrintWriter( writer );
e.printStackTrace( printWriter );
String trace = writer.toString();  // (plain e.printStackTrace() dumps to STDERR)

"Optional": Avoid Nulls
import java.util.Optional;
Optional˂String˃ optional = Optional.ofNullable(a);  // ← Create an optional
optional.map ( s -˃ "RebelLabs:" + s);               // ← Process the optional
optional.flatMap( s -˃ Optional.ofNullable(s));      // ← map a function that returns Optional
optional.ifPresent(System.out::println);             // ← run if the value is there

optional.get();                                      // ← Alt 1: get the value or throw an exception
optional.orElse("Hello world!");                     // ← Alt 2: get the value or default

optional.filter( s -˃ s.startsWith("RebelLabs"));    // ← return empty Optional if not satisfied
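The calls above, assembled into one runnable example (the `findUser`/`greet` names are invented for illustration):

```java
import java.util.Optional;

public class OptionalDemo {
    // Hypothetical lookup that may return null.
    static String findUser(String id) {
        return "42".equals(id) ? "Alice" : null;
    }

    static String greet(String id) {
        return Optional.ofNullable(findUser(id))   // wrap possibly-null value
                       .map(name -> "Hello " + name)  // applied only if present
                       .orElse("Hello stranger");     // default for the empty case
    }

    public static void main(String[] args) {
        System.out.println(greet("42"));  // Hello Alice
        System.out.println(greet("7"));   // Hello stranger
    }
}
```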
JSR Annotations
 for RºDefect Detectionº
Type Annotations
TODO: compare how this overlaps with the Checker Framework

 º@NonNullº     compiler can determine cases where a      │º@(Un)Taintedº         Identity types of data that should
                code path might receive a null value,     │                       not be used together, such as remote
                without ever having to debug a            │                       user input being used in system
                NullPointerException. The compiler        │                       commands, or sensitive information in
                just print a warning, but it              │                       log streams
                continues to compile!!!                   │
                                                          │º@mº                   Units of measure ensures that numbers
 º@ReadOnlyº    compiler will flag any attempt to         │                       used for measuring objects are used
                change the object. This is similar to     │                       and compared correctly, or have
                Collections.unmodifiableList, but         │                       undergone the proper unit
                more general and verified at compile time.│                       conversion.
 º@Regexº       Provides compile-time verification        │º@FunctionalInterfaceº indicates that the type declaration
                that a String intended to be used as      │                       is intended to be a functional
                a regular expression is a properly        │                       interface, as defined by the Java
                formatted regular expression.             │                       Language Spec.
└ ºExamplesº:
  @NonNull List˂String˃                              ← A non-null list of Strings.
  List<@NonNull String>                              ← A list of non-null Strings.
  @Regex String validation = "(Java|JDK) [7,8]"      ← Check at compile time that this String is a valid regular expression.
  private String getInput(String parameterName){     ← The object assigned to retval is tainted and not for use in sensitive operations.
    final String retval =
      @Tainted request.getParameter(parameterName);
    return retval;

  private void runCommand(@Untainted String… commands){            Each command must be untainted. For example, the previously
    ProcessBuilder processBuilder = new ProcessBuilder(commands);  tainted String must be validated before being passed in here.
    Process process = processBuilder.start();
Annotation processors(Data Binding,Lombok,...)
See JSR 269 
Dependency injection
See JSR 330
A: It's important to realize that Dagger was created after Guice, by one
of Guice's creators ("Crazy Bob" Lee) after his move to Square:
- Spring was originally released in October 2002.
- Google originally publicly released Guice in March 2007.
- JSR-330 formalized javax.inject annotations in October 2009,
with heavy input from Google (Bob Lee), Spring, and other industry
- Square originally released Dagger 1 publicly in May 2013.
- Google originally released Dagger 2 publicly in April 2015.
- Square marked Dagger 1 as deprecated on September 15, 2016.
 JBehave is a framework for Behaviour-Driven Development (BDD). BDD 
is an evolution of test-driven development (TDD) and acceptance-test 
driven design, and is intended to make these practices more 
accessible and intuitive to newcomers and experts alike. It shifts 
the vocabulary from being test-based to behaviour-based, and 
positions itself as a design philosophy.

1 Write story
  Scenario: A trader is alerted of status
  Given a stock and a threshold of 15.0
  When stock is traded at 5.0
  Then the alert status should be OFF
  When stock is traded at 16.0
  Then the alert status should be ON

2 Map to java

3 Configure Stories

4 Run Stories
 Lint4j ("Lint for Java") is a static Java source and byte code
analyzer that detects locking and threading issues, performance and
scalability problems, and checks complex contracts such as Java
serialization by performing type, data flow, and lock graph analysis.
External Links
- PUBLIC REPOSITORY:      @[https://mvnrepository.com/]
- Artifact Search Engine: @[http://search.maven.org/]
- Online doc from Maven : @[https://www.mvndoc.com]
Ex: Show Bouncy Castle doc by:
 - full index   : @[https://www.mvndoc.com/c/org.bouncycastle/bcpg-jdk15/index-all.html]
 - by package   : @[https://www.mvndoc.com/c/org.bouncycastle/bcpg-jdk15/index.html]


- Goal: the basic unit of work in Maven.
- A goal accepts configuration properties (parameters) to customize
its run-time behavior
- Ex: Compiler:compile defines a set of parameters to specify target
JDK version or switching on/off compiler optimizations
- An ºordered listº of goals can be attached to a Bºlifecycle phaseº.
                             Executes next goals in order
mvn Bºpackageº  | resources:resources →
  ^Lifecycle  | compiler:compile →
              | resources:testResources →
              | compiler:testCompile →
              | surefire:test jar:jar

Q: What exactly is a Maven Snapshot and why do we need it?
A:☞A snapshot version is one that has not been released (Oºfuture releaseº).
 The idea is that ºbeforeº a "1.0" release is done, there exists
 a 1.0Oº-SNAPSHOTº. That version is what might become 1.0. It's
 basically Oº"1.0 under development"º. This might be close to a real
 1.0 release, or pretty far (right after the 0.9 release, for ex.)

 The difference between a "real" version and a snapshot version is
 that ºsnapshots might get updatesº.  That means that downloading
 1.0-SNAPSHOT today might give a different file than downloading it
 yesterday or tomorrow.
 In contrast, OºReleased versions are immutableº:
 updates to "1.0.0" require a new version "1.0.1".

 Snapshot dependencies should only exist during development.
ºReleased (i.e. non-snapshot) versions should NEVER have aº
ºdependency on snapshotsº


BºStandard Lifecyclesº
| process-resources         | TODO            | TODO
|  compile
|   process-classes
|    process-test-resources
|     test-compile
|      test
|       prepare-package
|        package
|         install

BºCompiling & Installingº
Command                           Description
$ mvn clean                         Remove the (current_working_directory/)target folder
$ mvn compile                       (goal list: clean→compile)
$ mvn clean package                 Compiles and generates the (JAR/WAR/...) package
$ mvn clean install                 (goal list: clean→compile→test→package→install_local )
$ mvn clean install \               (goal list: clean→compile→     package→install_local )
    -DskipTests=true                # ← skip tests
$ mvn clean deploy                  Compile... and installs into remote ("corporate") server
                                ( clean→compile→test→package→install_local→install_pub )
Common options:
-U                   Force library (download) update
-P myProfileX        Execute profile myProfileX
-o                   Offline mode: search deps in local repo.
-Dgenerate.pom=true  Generates the pom locally for an artifact when installing
                     and compiling. Very useful to make offline mode work properly.
help:active-profiles   : List (project|user|global)
                    active profile for the build
help:effective-pom     : Displays effective POM for
                      current build
help:effective-settings: Prints calculated settings
help:describe groupId artifactId: Describes plugin attributes

ºRun a main class (exec plugin):º

$ mvn ºexec:javaº -Dexec.mainClass="com.example.Main" -Dexec.args="arg0 ..."
ºReuse constants:º
${project.groupId}         ← Reuse project groupId in POM
${project.artifactId}      ← Reuse project artifactId in POM
${project.version}         ← Reuse project version in POM

º${my.property.name}º ← Reuse properties in POM

˂project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"˃
º˂!-- project coordinates { --˃º
º˂!-- packaging value := jar|war|ear|pom|maven-plugin|ejb|rar|par|aar|apklib|... --˃º
º˂!-- See @[https://semver.org/] (Semantic Versioning) for more info                     º
º  Version X.W.Z                                                                         º
º   X MAJOR  Must differ for non-compatible API changes,                                 º
º   W MINOR  W increase with new functionality backwards-compatible with X.(W-1) version º
º   Z PATCH  backwards-compatible bug fixes that does NOT add new functionality          º
˂version˃1.0-SNAPSHOT˂/version˃ º˂!-- Remember: SNAPSHOT refers to development/future release --˃º
º˂!-- } --˃º

º˂!-- Typical dependencies for in-memory data-collection handling { --˃º
º˂!-- } --˃º

º˂!-- Typical dependencies for logging { --˃º
        ˂!-- Avoid problem:
        SLF4J: Class path contains multiple SLF4J bindings.
        SLF4J: Found binding in [jar:file:~/.m2/repository/org/slf4j/slf4j-jdk14/1.7.21/slf4j-jdk14-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
        SLF4J: Found binding in [jar:file:~/.m2/repository/ch/qos/logback/logback-classic/1.1.7/logback-classic-1.1.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
º˂!-- } --˃º

º˂!-- Typical dependencies for JSON handling { --˃º



º˂!-- } --˃º

º˂!-- test dependencies (Not included in production system) { --˃º
º˂!-- Unit test dependencies { --˃º
º˂!-- } --˃º

º˂!--  REST API (Functional) test dependencies { --˃º
º˂!-- } --˃º
º˂ } --˃º


Reactor (POM aggregate vs inherit)
- Maven supports both project inheritance (set a parent project)
  and aggregation (reactor mechanism).
- The MVN reactor allows executing a goal (build, ...) over a set of
  projects. The reactor determines the build order according to the
  dependencies defined in each pom.
- MVN 2 improved the reactor, making it transparent to users. A plugin
  exists anyway to customize the interaction with the reactor:
  maven-reactor-plugin.
- If we launch compilation in a multi-module project (like AbsisParentPom)
  and something goes wrong, we can always restart from the last failed
  module with the '--resume-from' option:
  $ mvn --resume-from=com.myCompany.myModule:MyArtifact clean install -P myProfile -DskipTests=true
Dependency Management
mvn dependency:analyze
lists two things:
- Dependencies used but not declared.
  If found in the parent pom, there is no
  problem when compiling, but they must be
  included at runtime on the server.
- Dependencies declared but not used for
  the scope provided (compile, provided…).
  They can be in the parent pom too.
  Nonetheless, they can be needed at runtime.

mvn dependency:tree
Ex. Usage:
$ mvn dependency:tree -Dscope=compile
                     do not show test/provided/... dependencies
→ ...
→ [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ tamperproof ---
→ [INFO]ºcom.myCompany:myProject:jar:1.0-SNAPSHOTº
→ [INFO] +- org.web3j:core:jar:4.3.0:compile
→ [INFO] |  +- org.web3j:abi:jar:4.3.0:compile
→ [INFO] |  |  \- org.web3j:utils:jar:4.3.0:compile
→ [INFO] |  |     \- org.bouncycastle:bcprov-jdk15on:jar:1.60:compile
→ [INFO] |  +- org.web3j:crypto:jar:4.3.0:compile
→ [INFO] |  |  \- org.web3j:rlp:jar:4.3.0:compile
→ [INFO] |  +- org.web3j:tuples:jar:4.3.0:compile
→ [INFO] |  +- com.github.jnr:jnr-unixsocket:jar:0.21:compile
→ [INFO] |  |  +- com.github.jnr:jnr-ffi:jar:2.1.9:compile
→ [INFO] |  |  |  +- com.github.jnr:jffi:jar:1.2.17:compile
→ [INFO] |  |  |  +- org.ow2.asm:asm:jar:5.0.3:compile
→ [INFO] |  |  |  +- org.ow2.asm:asm-commons:jar:5.0.3:compile
→ [INFO] |  |  |  +- org.ow2.asm:asm-analysis:jar:5.0.3:compile
→ [INFO] |  |  |  +- org.ow2.asm:asm-tree:jar:5.0.3:compile
→ [INFO] |  |  |  +- org.ow2.asm:asm-util:jar:5.0.3:compile
→ [INFO] |  |  |  +- com.github.jnr:jnr-a64asm:jar:1.0.0:compile
→ [INFO] |  |  |  \- com.github.jnr:jnr-x86asm:jar:1.0.2:compile
→ [INFO] |  |  +- com.github.jnr:jnr-constants:jar:0.9.11:compile
→ [INFO] |  |  +- com.github.jnr:jnr-enxio:jar:0.19:compile
→ [INFO] |  |  \- com.github.jnr:jnr-posix:jar:3.0.47:compile
→ ...


  REF: @[http://geertschuring.wordpress.com/2011/02/23/maven-best-practices/]

- update pom dependency to latest version
  Ex. Update org.checkerframework.*:
$º$ mvn versions:use-latest-versions -Dincludes="org.checkerframework:*"º

- Replace references like ${artifactId} or ${pom.artifactId} with new
  º${project.artifactId}ºsyntax. This syntax follows the XML document
  structure, making it easy to remember and predict the value that the
  reference will result in.

- Try to avoid using inherited properties. Developers can easily forget
  that a certain property is used by a child POM and change the value,
  breaking the build in an unexpected place. Secondly, it's quite annoying
  not to be able to easily look up a property without having to find
  and examine the parent POM.

- Use the dependency management section of the parent pom to define
  all dependency versions, but do not set a scope here so that all
  dependencies have scope compile by default.

- Use properties to define the dependency versions. This way you can
  get an overview of all versions quickly.

- Use the pluginManagement section of the parent pom to define versions for
  ºallº plugins that your build uses, even standard maven plugins like
  maven-compiler-plugin and maven-source-plugin. This way your build will
  not suddenly behave differently when a new version of a plugin is released.

- When using a parent POM not located in the directory directly above
  the current POM define an empty relativePath element in your parent section.

- Use the dependency plugin to check your project for both unnecessary
  dependencies and undeclared-but-used-nonetheless dependencies.
  The goal is called ‘analyze’:
$º$ mvn dependency:analyze  º

- Make sure the pom files contain all the repository references needed
  to download all dependencies. If you want to use a local repository
  instead of downloading straight from the internet then use the maven
  settings file to define mirrors for the individual repositories that
  are defined in the poms.
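  For that local-repository setup, a settings.xml mirror entry might look
  like this sketch (the id, name and URL are placeholders):

```xml
<!-- ~/.m2/settings.xml (sketch; id/url are placeholders) -->
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <name>Local mirror of Maven Central</name>
      <url>http://repo.example.internal/maven2</url>
      <mirrorOf>central</mirrorOf>  <!-- only mirror the 'central' repository id -->
    </mirror>
  </mirrors>
</settings>
```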

- If you use Nexus, then do not create repository groups containing both
  hosted and proxied repositories. This will dramatically reduce the
  responsiveness, because Nexus will check the remote locations of the
  proxied repositories even if a hosted repository contains the requested
  artifact.

$º$ mvn archetype:generate -DgroupId=my.groupId \        º
$º$   -DartifactId=myArtifact \                          º
$º$   -DarchetypeArtifactId=maven-archetype-quickstart \ º
$º$   -DinteractiveMode=false                            º

   (Very useful to skip slow/non-important goals like doc, style-checks,...)
$º$ mvn fr.jcgay.maven.plugins:buildplan-maven-plugin:list \ º
$º  -Dbuildplan.tasks=install º

BºQUICK LOCAL INSTALLº (bypass tests/style-checks/...)
$º$ mvn resources:resources compiler:compile \        º
$º      jar:jar install:install                       º

  $ mvn help:evaluate -q -DforceStdout -Dexpression=project.artifactId  ← Extract Artifact ID
  $ mvn help:evaluate -q -DforceStdout -Dexpression=project.groupId     ← Extract Group ID
  $ mvn help:evaluate -q -DforceStdout -Dexpression=project.version     ← Extract Version

  Useful when some tool auto-generates code outside the standard src/main/java directory.
BºADD LOCAL JAR DEPENDENCYº (vs. maven central repository)
  RºWARNº: discouraged, but sometimes needed
    ˂dependency˃
      ˂groupId˃   ...˂/groupId˃
      ˂artifactId˃...˂/artifactId˃
      ˂version˃   ...˂/version˃
      ˂scope˃system˂/scope˃                              ← "system" scope is required with systemPath
      ˂systemPath˃/local/path/in/my/file/system/myjar.jar˂/systemPath˃ ←
    ˂/dependency˃

BºGENERATE FAT JAR (jar with all dependencies included)º
  └ STEP 1: Add next profile lines to pom.xml with a fatjar profile containing 
            the custom options for the maven-assembly-plugin:
       º    ˂profile˃ ˂id˃fatjar˂/id˃ ... ˂/profile˃º  ← "-P fatjar" selects the profile by id
                   º˂execution˃               º
                   º    ˂id˃make-assembly˂/id˃º˂!-- this is used for inheritance merges --˃
                   º    ˂phase˃package˂/phase˃º˂!-- bind to the packaging phase --˃
                   º    ˂goals˃               º
                   º        ˂goal˃single˂/goal˃º
                   º    ˂/goals˃              º
                   º˂/execution˃              º
  └ STEP 2: Package with new profile active like:
  $º$ mvn clean compile package -Dmaven.test.skip=true -P fatjarº
                                                       active profile
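
  A complete fatjar profile along the lines of STEP 1 might look like this
  sketch, using the standard maven-assembly-plugin "jar-with-dependencies"
  descriptor (pin the plugin version as per the pluginManagement advice):

```xml
<profiles>
  <profile>
    <id>fatjar</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <configuration>
            <descriptorRefs>
              <!-- built-in descriptor: bundle all deps in one jar -->
              <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
          </configuration>
          <executions>
            <execution>
              <id>make-assembly</id>  <!-- this is used for inheritance merges -->
              <phase>package</phase>  <!-- bind to the packaging phase -->
              <goals>
                <goal>single</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>
```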
set Parent+Child poms
- Allows to inherit project dependency in children projects

  .../parent/pom.xml              │ .../parent/child1/pom.xml
  ─────────────────────────────── │ ──────────────────────────────────
  ˂modelVersion˃4.0.0             │ ˂parent˃
  ˂/modelVersion˃                 │   ˂groupId˃...˂/groupId˃
  ˂groupId˃....˂/groupId˃         │   ˂artifactId˃parent˂/artifactId˃
  ˂artifactId˃parent˂/artifactId˃ │   ˂version˃1˂/version˃
  ˂version˃0.1.0˂/version˃        │   ˂relativePath˃
  ˂packaging˃pom˂/packaging˃      │      ../pom.xml˂/relativePath˃
                                  │ ˂/parent˃
                                  │ ˂dependencies˃
  ˂modules˃                       │   ˂dependency˃
  ˂module˃./child1˂/module˃       │     ˂groupId˃...˂/groupId˃
  ˂module˃./child2˂/module˃       │     ˂artifactId˃...˂/artifactId˃
  ˂/modules˃                      │   ˂/dependency˃...
                                  │ ˂/dependencies˃
        ˂version˃X.Y.Z˂/version˃    ← NOTE: no need to repeat version/scope in children
        ˂scope˃compile˂/scope˃      ←

Install non-mavenized jar

$º$ mvn install:install-file -Dfile=path_to_local_file -DgroupId=˂groupId˃ \ º
$º  -DartifactId=˂artifactId˃ -Dversion=˂version˃ -Dpackaging=˂packaging˃    º
Maven Central
Publish artifacts to MVN Central
  - @[http://maven.apache.org/repository/guide-central-repository-upload.html]
  - @[http://central.sonatype.org/pages/working-with-pgp-signatures.html]
  - For a quick guide of OpenPGP How-To with GNU Privacy Guard check:
- Requirements @[http://central.sonatype.org/pages/requirements.html]
  Prepare pom.xml properly:
  ˂?xml version="1.0" encoding="UTF-8"?˃
  ˂project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="... http://.../maven-v4_0_0.xsd"˃
    ˂groupId˃com.u1.training˂/groupId˃     ← Prepare valid coordinates
    ˂version˃1.0˂/version˃                 ← Bºsnapshots not allowedº (Recheck)
        ˂name˃Apache Software License, Version 2.0˂/name˃
        ˂name˃First_Name Second_Name˂/name˃
        ˂organization˃Mock Corp˂/organization˃
    """ we discourage the usage of ˂repositories˃ and         ← Oº[QA]º
        ˂pluginRepositories˃ and instead publish any required
        components to the Central Repository """

Bºrequired files:º
  cat artifact01-1.4.7.pom            | gpg2 -ab -o artifact01─1.4.7.pom.asc         *2
  cat artifact01-1.4.7.jar            | gpg2 -ab -o artifact01─1.4.7.jar.asc         *2
  cat artifact01-1.4.7-sources.jar *1 | gpg2 -ab -o artifact01─1.4.7─sources.jar.asc *2
  cat artifact01-1.4.7-javadoc.jar *1 | gpg2 -ab -o artifact01─1.4.7─javadoc.jar.asc *2
  cat └──┬─────┘ └─┬─┘              ^               └──────────────┬───────────────┘  ^
      artifactId version            │                      GPG signatures *.asc       │
                                    │                                                 │
                                   *1: required except for pom (vs jar) packages      │
                                   *2: Verify sign. like $º$ gpg2 --verify ...asc ────┘

- Build tool integration

Publish Best patterns
- Use approved repository hosting location:
Apache Software Foundation (for all Apache projects)
FuseSource Forge (focused on FUSE related projects)

- Use automatic publication in "forges" that provide hosting services

- OSS Repository Hosting
- Approved repository provided by Sonatype for OSS Project that want to
get their artifacts into Central Repository.
- Open an account as explained at

Post-namespace registration
(e-mail received after correct namespace registration)
[ Issue@Jira ]
Thad Watson resolved OSSRH-39644: Resolution: Fixed

Configuration has been prepared, now you can:
→ Deploy snapshot artifacts into repository

→ Deploy release artifacts into the staging repository
→ Promote staged artifacts into repository 'Releases'
→ Download snapshot and release artifacts from group

→ Download snapshot, release and staged artifacts from
staging group
ºplease comment on this ticket when you promote º
ºyour first release, thanks                     º

Test settings
- Make sure there are ºno dependencies on snapshotsº in the POMs to be released.
However, the project you want to stage must be a SNAPSHOT version
- Check that your POMs will not lose content when they are rewritten
during the release process:
- Verify that all pom.xml files have an SCM definition
- Do a dryRun release: Oºmvn release:prepare -DdryRun=trueº
Postscript: You may also wish to pass Oº-DautoVersionSubmodules=trueº
as this will save you time if your project is multi-moduled.
- Diff the original file pom.xml with the one called pom.xml.tag to
see if the license or any other info has been removed. This has been known
to happen if the starting ˂project˃ tag is ºNOTº on a single
line. The only things that should be different between these files are the
˂version˃ and ˂scm˃ elements. Any other changes you must
backport yourself to the original pom.xml file and commit before
proceeding with the release.

Deploy snapshot
mvn deploy
[INFO] [deploy:deploy]
[INFO] Retrieving previous build number from apache.snapshots.https

Prepare release
mvn release:clean
mvn release:prepare # creates new tag in SVN, automatically checking in.

Stage release for a vote
mvn release:perform
# release will be automatically inserted in a temp staging dir

pom utils

Utilities to clean, organize, and restructure Maven POMs.

BºPOM Cleanerº:
  "Cleans up" a single POM, normalizing plugin and dependency specifications,
  converting hardcoded versions to properties, consistently ordering top-level
  elements, and pretty-printing the output.
  There is also a version of this tool that runs as a web-app.

BºVersion Updater:º
  Updates the version for a set of POMs, either to a specified version or the
  next sequential version.

BºDependency Check:º
  Examines a project to find dependencies that are specified but unused, and
  those that are used but unspecified (i.e., transitive dependencies that
  should be direct).

MVND(aemon)
  @[https://www.infoq.com/news/2020/12/mvnd-mavens-speed-daemon/]
- Study driven by Gradle shows Maven as being Rºup to 100 times slower than gradle buildsº.
- JIT compiled classes are cached.
- multi process if needed.
- pretty small: ~4060 lines of Java code.
- mvnd speed gains:
  - 1/2 modules : ~ x7/x10 faster
  - big projects: ~ x6 faster (ex: Camel Quarkus, 1242 modules)
- Who-is-Who:
  - Guillaume Nodet (project creator)
  - Peter Palaga: main contributor
  Dockerfile "pipeline": mvn to Container
FROM maven:3.6-jdk-12-alpine as build
WORKDIR /builder
ADD pom.xml /builder/pom.xml
ADD src /builder/src

RUN mvn install -DskipTests=true

FROM openjdk:11-jre
ARG APP_NAME_ARG=middleware-0.0.1-SNAPSHOT.jar
ENV APP_NAME=$APP_NAME_ARG                            # make the build arg visible at runtime (CMD)
COPY --from=build /builder/target/$APP_NAME /app/$APP_NAME
COPY --from=build /builder/src/main/resources /app/src/main/resources
WORKDIR /app
CMD java -Dspring.profiles.active=$APP_PROFILE -jar $APP_NAME

Jib: Image Builder Docker+mvn/gradle integration:
  @[https://www.infoq.com/news/2018/08/jib]
  @[https://github.com/GoogleContainerTools/jib]
- Build Java Containers without Docker/Dockerfile.
- Jib's build approach separates the Java application into multiple
  layers, so when there are any code changes, only those changes are
  rebuilt, rather than the entire application.
- These layers are layered on top of a distroless base image,
  containing only the developer's application and its runtime deps.

ºDocker build flow:º
  │JAR│ ← (build) ← │Project   │                                                  │Container │
                    │          ├→│Build Context│→(build)→│Container Image│→(push)→│Image     │
                    │Dockerfile│                         │(docker cache) │        │(registry)│
ºJib Build Flow:º
                                                                                  │Container │
  │Project│ ─────────────────────────────(Jib)──────────────────────────────────→ │Image     │
                                                                                  │(registry)│

- Ex: Creating images from command line:
  Once jib is installed and added to PATH, to create a new image do
  something like:
$º$ /opt/jib/bin/jib \                        º
$º  --insecure \                              º ← allow conn. to HTTP (non TLS) dev.registries
$º  build \                                   º ← build image
$º  --registry \                              º ← Push to registry
$º  gcr.io/distroless/java \                  º ← Base image (busybox, nginx, gcr.io/distroless/java,...)
$º  \                                         º ← Destination registry / image
$º  --entrypoint "java,-cp,/app/lib/*,\       º
$º  com.google.cloud.tools.jib.cli.JibCli" \  º
$º  build/install/jib/lib,/app/lib            º

  Other options include: (See jib build --help for more options)
  p=perms                   set file and directory permissions:
                            actual : use actual values in file-system
                            fff:ddd: octal file and directory
                            (Default to 644 for files and 755 for dir.)
  ts=timestamp              set last-modified timestamps:
                            actual : use actual values in file-system
                            "seconds since Unix epoch"
                            "date-time in ISO8601 format"
                            (Default to 1970-01-01 00:00:01 UTC).
  -a, --arguments=arg       container entrypoint's default arguments
  -c, --creation-time=time  Set image creation time º(default: 1970-01-01T00:00:00Z)º
  -l, --label=key=val[,key=val...]
  -p, --port=port[,port...] Expose port/type (ex: 25 or 25/tcp)
  -u, --user=user           Set user for execution (uid or existing user id)
  -V, --volume=path1,path2...
                            Configure specified paths as volumes

- Ex pom.xml to create tomcat container with war:
  REF: @[https://stackoverflow.com/questions/63657172/option-to-auto-generate-dockerfile-and-other-deployment-tooling-in-intellij]
$º$ mvn clean package jib:dockerBuild         º
$º$ docker run --rm -p 8082:8080 \            º
$º  registry.localhost/hello-world:latest     º

  ˂?xml version="1.0" encoding="UTF-8"?˃
  ˂project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                          http://maven.apache.org/xsd/maven-4.0.0.xsd"˃
    ˂modelVersion˃4.0.0˂/modelVersion˃
    ˂groupId˃org.example˂/groupId˃
    ˂artifactId˃mvn-jib-example˂/artifactId˃
    ˂version˃1.0˂/version˃
    ˂packaging˃war˂/packaging˃
    ˂properties˃
      ˂project.build.sourceEncoding˃UTF-8˂/project.build.sourceEncoding˃
      ˂failOnMissingWebXml˃false˂/failOnMissingWebXml˃
    ˂/properties˃
    ˂dependencies˃
      ˂dependency˃
        ˂groupId˃javax.servlet˂/groupId˃
        ˂artifactId˃javax.servlet-api˂/artifactId˃
        ˂version˃4.0.1˂/version˃
        ˂scope˃provided˂/scope˃
      ˂/dependency˃
    ˂/dependencies˃
    ˂build˃
      ˂finalName˃servlet-hello-world˂/finalName˃
      ˂plugins˃
        ˂plugin˃
          ˂groupId˃org.apache.maven.plugins˂/groupId˃
          ˂artifactId˃maven-compiler-plugin˂/artifactId˃
          ˂version˃3.8.1˂/version˃
          ˂configuration˃
            ˂source˃1.8˂/source˃
            ˂target˃1.8˂/target˃
          ˂/configuration˃
        ˂/plugin˃
        ˂plugin˃
          ˂groupId˃com.google.cloud.tools˂/groupId˃
          ˂artifactId˃jib-maven-plugin˂/artifactId˃
          ˂version˃2.5.0˂/version˃
          ˂configuration˃
            ˂allowInsecureRegistries˃true˂/allowInsecureRegistries˃
            ˂from˃
              ˂image˃tomcat:9.0.36-jdk8-openjdk˂/image˃
            ˂/from˃
            ˂to˃
              ˂image˃registry.localhost/hello-world˂/image˃
              ˂auth˃
                ˂username˃...˂/username˃
                ˂password˃...˂/password˃
              ˂/auth˃
              ˂tags˃
                ˂tag˃latest˂/tag˃
              ˂/tags˃
            ˂/to˃
            ˂container˃
              ˂appRoot˃/usr/local/tomcat/webapps/ROOT˂/appRoot˃
            ˂/container˃
            ˂extraDirectories˃
              ˂paths˃
                ˂path˃
                  ˂from˃./src/main/resources/extra-stuff˂/from˃
                  ˂into˃/path/in/docker/image/extra-stuff˂/into˃
                ˂/path˃
                ˂path˃
                  ˂from˃/absolute/path/to/other/stuff˂/from˃
                  ˂into˃/path/in/docker/image/other-stuff˂/into˃
                ˂/path˃
              ˂/paths˃
            ˂/extraDirectories˃
          ˂/configuration˃
        ˂/plugin˃
      ˂/plugins˃
    ˂/build˃
  ˂/project˃

  See also: jKube [[jkube?]]
REF: @[https://docs.gradle.org/current/userguide/gradle_wrapper.html]
- recommended way to execute any Gradle build
- invokes gradle with a declared version (vs. a randomly installed one
  in the OS), making builds more robust.

- Workflow:
- set up a new Gradle project
- add Wrapper to new project
  (a gradle runtime must be installed)
  $ gradle wrapper \
    --gradle-version 5.1 \                  ← optional
    --distribution-type bin \               ← optional
    --gradle-distribution-url ... \         ← optional
    --gradle-distribution-sha256-sum ...    ← optional: SHA256 hash sum used for
                                              verifying the downloaded Gradle
                                              distribution

  → Task :wrapper
  → 1 actionable task: 1 executed
  ├── build.gradle
  ├── settings.gradle
  ├──ºgradle                           º←  generated dir. to be added to git
  │  º└── wrapper                      º
  │  º    ├── gradle-wrapper.jar       º ← code for downloading the distro
  │  º    └── gradle-wrapper.propertiesº
  ├──ºgradlew    º   ← once generated, use it like $ ./gradlew build
  gradle/wrapper/gradle-wrapper.properties  is generated
  with the information about the Gradle distribution:
   - server hosting the Gradle dist. Ex:
   - type of Gradle dist.
     (default to -bin dist with only runtime -no sample code,docs,...)
   - The Gradle version used for executing the build.
     (default to local installed one)

- Commit the generated wrapper files to git,
  including the (small) jar file.
- run a project with provided Wrapper
- upgrade the Wrapper to new Gradle version when desired.

- ºCustomizing the wrapperº
  - built-in wrapper task exposes numerous options
    to bend the runtime behavior to your needs. Ex:
    tasks.wrapper {
      distributionType = Wrapper.DistributionType.ALL
    }

- HTTP Basic Authentication (RºWARN: use only with TLS connectionsº)
  alt 1: ENV.VARS:
  alt 2: gradle/wrapper/gradle-wrapper.properties

- ºVerifying downloadº
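
  A sketch of both options above, using the property names from the Gradle
  wrapper documentation (all values are placeholders):

```properties
# gradle.properties (sketch; credentials are placeholders)
systemProp.gradle.wrapperUser=username
systemProp.gradle.wrapperPassword=password

# gradle/wrapper/gradle-wrapper.properties (sketch)
distributionUrl=https\://server.example/distributions/gradle-5.1-bin.zip
# checksum of the distribution zip, used to verify the download:
distributionSha256Sum=<sha256-of-the-distribution-zip>
```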

Multi-module Deployer
  (Java Example project available in github)
- library built to speed up the deployment of microservice based applications.
- build and run each application module
- configure deployment dependencies between modules
  by just creating and running a simple application.

└ Installation
  1) Add to your build.gradle the following function:
   def downloadLibFromUrl(String libSaveDir, String libName, String libUrl) {
       def folder = new File(libSaveDir)
       if (!folder.exists()) {
           folder.mkdirs()                    // create the cache dir on first run
       }
       def file = new File("$libSaveDir/$libName")
       if (!file.exists()) {
           ant.get(src: libUrl, dest: file)   // download only if missing
       }
       getDependencies().add('compile', fileTree(dir: libSaveDir, include: libName))
   }
  2) Add the following code to your dependencies declaration:
   dependencies {
       /* ... */
       def libSaveDir = "${System.properties['user.home']}/.gradle/caches/modules-2/files-2.1"
       def version = '1.1.1'
       def libName = "multi-module-deployer-${version}.jar"
       def url = "https://github.com/FlamingTuri/multi-module-deployer/releases/download/v$version/$libName"
       downloadLibFromUrl(libSaveDir, libName, url)
   }

└ Usage example

  import multi.module.deployer.MultiModuleDeployer;
  import multi.module.deployer.moduleconfig.ModuleConfig;
  import multi.module.deployer.moduleconfig.ModuleConfigFactory;
  public class App {
    public static void main(String[] args) {
        MultiModuleDeployer multiModuleDeployer = new MultiModuleDeployer();
        // commands to run the first module
        String   linuxCmd = "linux commands to deploy first module";
        String windowsCmd = "windows commands to deploy first module";
        ModuleConfig firstModuleConfig =
          ModuleConfigFactory.httpModuleConfig(linuxCmd, windowsCmd, 8080, "localhost", "/api/...");
        // adds the first configuration to the deployment list
        // commands to run the second module
        linuxCmd = "linux commands to deploy second module";
        windowsCmd = "windows commands to deploy second module";
        ModuleConfig secondModuleConfig =
          ModuleConfigFactory.httpModuleConfig(linuxCmd, windowsCmd, 3000, "localhost", "/api/...");
        // adds the second configuration to the deployment list
        // (it will be started only after the first one has "ended")
        // deploys the modules
    }
  }
What's New
- Gradle v6: 
 SQL made simple

- vertx-jooq
  jOOQ-CodeGenerator to create vertxified DAOs and POJOs.
  Now with JDBC, async and reactive support!

- Hibernate Gotchas:
  hibernate, joins, and max results: a match made in hell

- Common Hibernate Exceptions Every Developer Must Know
(by Goldman Sachs)
enterprise-grade object-relational mapping (ORM) framework for Java with the following enterprise features:

- Strongly typed Bºcompile-timeº checked query language
- Bi-temporal chaining
- Transparent multi-schema support
- Full support for unit-testable code

c3p0 is an easy-to-use library for making traditional JDBC drivers
"enterprise-ready" by augmenting them with functionality defined by the jdbc3
spec and the optional extensions to jdbc2. As of version 0.9.5, c3p0 fully
supports the jdbc4 spec.

In particular, c3p0 provides several useful services:

- Classes which adapt traditional DriverManager-based JDBC drivers to the
  newer javax.sql.DataSource scheme for acquiring database Connections.
- Transparent pooling of Connection and PreparedStatements behind DataSources
  which can "wrap" around traditional drivers or arbitrary unpooled DataSources.

The library tries hard to get the details right:
- c3p0 DataSources are both Referenceable and Serializable, and are thus
  suitable for binding to a wide-variety of JNDI-based naming services.
- Statement and ResultSets are carefully cleaned up when pooled Connections
  and Statements are checked in, to prevent resource exhaustion when clients use
  the lazy but common resource-management strategy of only cleaning up their
  Connections. (Don't be naughty.)
- The library adopts the approach defined by the JDBC 2 and 3 specification
  (even where these conflict with the library author's preferences). DataSources
  are written in the JavaBean style, offering all the required and most of the
  optional properties (as well as some non-standard ones), and no-arg
  constructors. All JDBC-defined internal interfaces are implemented
  (ConnectionPoolDataSource, PooledConnection, ConnectionEvent-generating
  Connections, etc.) You can mix c3p0 classes with compliant third-party
  implementations (although not all c3p0 features will work with external
  implementations of ConnectionPoolDataSource).
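
A minimal pooled-DataSource setup in the style of c3p0's quickstart might
look like the sketch below; the driver class, JDBC URL and credentials are
placeholders, and the c3p0 jar (and a JDBC driver) must be on the classpath:

```java
import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        ComboPooledDataSource cpds = new ComboPooledDataSource();
        cpds.setDriverClass("org.postgresql.Driver");          // placeholder driver
        cpds.setJdbcUrl("jdbc:postgresql://localhost/testdb"); // placeholder URL
        cpds.setUser("dbuser");                                // placeholder credentials
        cpds.setPassword("dbpassword");
        // optional pool sizing (c3p0 has sensible defaults):
        cpds.setMinPoolSize(5);
        cpds.setAcquireIncrement(5);
        cpds.setMaxPoolSize(20);

        try (Connection con = cpds.getConnection()) {
            // use the pooled Connection; close() returns it to the pool
        }
        cpds.close();  // shut the pool down when the application exits
    }
}
```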
Snappy Fast de/compressor
- Java port of the snappy http://code.google.com/p/snappy/
- Map-Like API optimized for caching.
- 1.0 drawbacks:
  - No async operations.
- Implemented by Hazelcast and others
- TODO: Patterns of JSON Matching:
  Streaming based, binding based, expression based.
- REF:JSON processing public review

Public review of JSR 374:
- Java API for JSON Processing (JSON-P) version 1.1 is now open.
  This version is expected to be included in the release of Java EE 8
  and keeps JSON-P current with JSON IETF standards. It includes 
  support for:
    - JSON Pointer
    - JSON Patch
    - JSON Merge Patch
    - Query and transformation operations
    - Java 8 streams and lambdas
- JSON-P was introduced in 2013 with the release of Java EE 7, as
  an alternative to Gson and Jackson. It was designed to parse, generate,
  and query standard JSON documents.
  JSR 367: Java API for JSON Binding (JSON-B), will also be included in the
  release of Java EE 8.
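
A small sketch of the JSON Pointer support listed above, using the
javax.json 1.1 API (assumes a JSON-P 1.1 implementation jar on the
classpath; the document content is illustrative):

```java
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonPointer;
import javax.json.JsonValue;

public class PointerExample {
    public static void main(String[] args) {
        JsonObject doc = Json.createReader(
            new StringReader("{\"user\":{\"roles\":[\"admin\",\"dev\"]}}")).readObject();
        // RFC 6901 JSON Pointer: /user/roles/0 → first element of the roles array
        JsonPointer pointer = Json.createPointer("/user/roles/0");
        JsonValue value = pointer.getValue(doc);
        System.out.println(value);  // the JsonString holding "admin"
    }
}
```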

- Example:
  package com.mycomp.project1;
  import java.io.BufferedReader;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.security.KeyManagementException;
  import java.security.NoSuchAlgorithmException;
  import javax.net.ssl.HttpsURLConnection;
  import javax.net.ssl.SSLContext;
  import javax.net.ssl.SSLSocketFactory;
  import javax.net.ssl.HostnameVerifier;
  import javax.net.ssl.SSLSession;
  import javax.net.ssl.TrustManager;
  import javax.net.ssl.X509TrustManager;
  import org.json.JSONObject;
  import java.security.cert.X509Certificate;
  import java.util.Date;
  import java.util.Scanner;
  public class TestAPI {
      static String userpass = "operator1:ecllqy";
      private static SSLSocketFactory sslSocketFactory = null;

      private JSONObject sendPost(String url, String post_body, String token) throws Exception {
          URL obj = new URL(url);
          String basicAuth = "Basic " +
              Base64.getEncoder().encodeToString(userpass.getBytes());
          HttpsURLConnection con = (HttpsURLConnection) obj.openConnection();
          setAcceptAllVerifier(con); // TODO: WARN Add certificate validation.
          con.setRequestMethod("POST"); // add request header
          con.setRequestProperty("Content-Type", "application/json");
          con.setRequestProperty("Cache-Control", "no-cache");
          if (token.isEmpty()) { con.setRequestProperty("Authorization", basicAuth);
          } else               { con.setRequestProperty("Authorization", "Bearer "+token); }
          con.setDoOutput(true); // required before writing the POST body
          DataOutputStream wr = new DataOutputStream(con.getOutputStream());
          wr.writeBytes(post_body);
          wr.close();
          int responseCode = con.getResponseCode();
          BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()));
          StringBuffer response = new StringBuffer();
          String inputLine; while ((inputLine = in.readLine()) != null) { response.append(inputLine); }
          in.close();
          return new JSONObject(response.toString());
      }

      /*
       * Overrides the SSL TrustManager and HostnameVerifier to allow
       * all certs and hostnames.
       * WARNING: This should only be used for testing, or in a "safe" (i.e. firewalled)
       * environment.
       * @throws NoSuchAlgorithmException
       * @throws KeyManagementException
       */
      protected static void setAcceptAllVerifier(HttpsURLConnection connection) throws NoSuchAlgorithmException, KeyManagementException {
          // Create the socket factory.
          // Reusing the same socket factory allows sockets to be
          // reused, supporting persistent connections.
          if( null == sslSocketFactory) {
              SSLContext sc = SSLContext.getInstance("SSL");
              sc.init(null, ALL_TRUSTING_TRUST_MANAGER, new java.security.SecureRandom());
              sslSocketFactory = sc.getSocketFactory();
          }
          connection.setSSLSocketFactory(sslSocketFactory);
          // Since we may be using a cert with a different name, we need to ignore
          // the hostname as well.
          connection.setHostnameVerifier(ALL_TRUSTING_HOSTNAME_VERIFIER);
      }

      private static final TrustManager[] ALL_TRUSTING_TRUST_MANAGER = new TrustManager[] {
          new X509TrustManager() {
              public X509Certificate[] getAcceptedIssuers() {
                  return null;
              }
              public void checkClientTrusted(X509Certificate[] certs, String authType) {}
              public void checkServerTrusted(X509Certificate[] certs, String authType) {}
          }
      };

      private static final HostnameVerifier ALL_TRUSTING_HOSTNAME_VERIFIER  = new HostnameVerifier() {
          public boolean verify(String hostname, SSLSession session) {
              return true;
          }
      };
  }
RESTAssured: REST API testing
ºFULL JOURNEY == Simulate full (REST) API in expected orderº
└ Pre-Setup:

└ Usage Example:
  package com.mycompany.myproject.mymodule;
  import static junit.framework.TestCase.assertTrue;
  import static org.hamcrest.Matchers.*;
  import static io.restassured.RestAssured.given;
  import io.restassured.RestAssured;
  import io.restassured.config.HttpClientConfig;
  import io.restassured.path.json.JsonPath;
  import io.restassured.response.Response;
  import io.restassured.specification.RequestSpecification;
  import junit.framework.TestCase;
  import org.apache.http.client.HttpClient;
  import org.apache.http.impl.client.SystemDefaultHttpClient;
  import org.apache.http.params.HttpConnectionParams;
  import org.apache.http.params.HttpParams;
  import org.eclipse.jetty.http.HttpStatus;
  import org.junit.Ignore;
  import org.junit.Test;
  import org.junit.BeforeClass;
  import org.hamcrest.BaseMatcher;
  import org.hamcrest.Description;
  import java.util.Base64;
  import java.util.Map;
  public class FullJourneyTest {
      // Custom regex matcher for RestAssured Framework
      public static classBºRegexMatcherºextendsºBaseMatcher˂Object˃º{
          private final String regex;
          public BºRegexMatcherº(String regex){ this.regex = regex; }
         º@Overrideºpublic booleanºmatchesº(Object o){ return ((String)o).matches(regex); }
         º@Overrideºpublic voidºdescribeToº(Description description){
              description.appendText("matches regex=" + regex);
          }
          public staticBºRegexMatcherº matches(String regex){ return newBºRegexMatcherº(regex); }
      }
      public static classGºBase64MatcherºextendsºBaseMatcher˂Object˃º{
          public Base64Matcher(){}
         º@Overrideºpublic booleanºmatchesº(Object o){
              try {
                  Base64.getDecoder().decode((String) o); // throws if not valid Base64
                  return true;
              }catch (Exception e){
                  return false;
              }
          }
         º@Overrideºpublic voidºdescribeToº(Description description){
              description.appendText("can be parsed as Base64");
          }
          public static Base64Matcher isBase64Encoded(){
              return new Base64Matcher();
          }
      }
      private static final String AUTH_HEADER_VALUE = "Bearer " + ServerConfig.apiKey;
      protected static RequestSpecification setupCommonHeaders() {
          return given().header("Authorization", AUTH_HEADER_VALUE)
                        .header("Accept"       , "application/json")
                        .header("content-type" , "application/json;charset=utf-8");
      }
      final String
          NAME="COMMUNITY_1", SYMBOL="SY1";
      Response response;

      @BeforeClass
      public static void setup() {
          RestAssured.port     = ServerConfig.serverPort;
          RestAssured.basePath = "/";
          RestAssured.baseURI  = "http://localhost";
          HttpClientConfig clientConfig = RestAssured.config().getHttpClientConfig();
          clientConfig = clientConfig.httpClientFactory(new HttpClientConfig.HttpClientFactory() {
              public HttpClient createHttpClient() {
                  HttpClient rv =  new SystemDefaultHttpClient();
                  HttpParams httpParams = rv.getParams();
                  //  Wait 5s max for a connection
                  HttpConnectionParams.setConnectionTimeout(httpParams, 5 * 1000);
                  // Default session is 60s
                  HttpConnectionParams.setSoTimeout(httpParams, 60 * 1000);
                  return rv;
              }
          });
          // This is necessary to ensure that the client is reused.
          clientConfig = clientConfig.reuseHttpClientInstance();
          RestAssured.config = RestAssured.config().httpClient(clientConfig);
      }

      @Test
      public void A010_PutNewCommunityAndNewUserForPendingToMineCommunity() {
          String jsonBody =
              "{ " +
                  " \"name\": \""+NAME+"\", " +
                  " \"symbol\": \"" + SYMBOL + "\"" +
              " }";
          response = setupCommonHeaders().body(jsonBody).when().ºpost("/Route/To/REST/API/01")º;
          response.then()
              /* ºmake sure the JSON serializer does not include extra (maybe sensitive) infoº */
              .body("size()", Oºis(5)                          º)
              .body("id"    , Oºnot(isEmptyString())           º)
              .body("pubkey", Oºnot(isEmptyString())           º)
              .body("pubkey", BºRegexMatcherºOº.matches("^.{65}$")º)
              .body("pubkey", OºBase64Matcher.isBase64Encoded()º)
              .body("name"  , OºequalTo(NAME)                  º)
              .body("symbol", OºequalTo(SYMBOL)                º);
          String BºNEW_ID = response.getBody().jsonPath().get("id")º;
          // Next related test to execute synchronously after fetching NEW_ID
          jsonBody =
              "{ " +
                  Bº" \"FK_ID\": \""+NEW_ID+"\", " +º
                  /* ... */
              " }";
          response = setupCommonHeaders().body(jsonBody).when().ºpost("/Route/To/REST/API/02")º;
      }
  }
BDD Serenity Testing

- Serenity BDD is an open source library that aims to make the idea of living
  documentation a reality.

- Lets you write cleaner and more maintainable automated acceptance and
  regression tests, faster. Serenity also uses the test results to
  produce illustrated, narrative reports that document and describe 
  what your application does and how it works. Serenity tells you not 
  only what tests have been executed, but more importantly, what 
  requirements have been tested.

- One key advantage of using Serenity BDD is that you do not have to invest time
  in building and maintaining your own automation framework.

- Serenity BDD provides strong support for different types of automated acceptance testing, including:
  - Rich built-in support for web testing with Selenium.
  - REST API testing with RestAssured.
  - Highly readable, maintainable and scalable automated testing with the
    Screenplay pattern.

- The aim of Serenity is to make it easy to quickly write well-structured,
  maintainable automated acceptance criteria, using your favourite BDD or
  conventional testing library. You can work with Behaviour-Driven-Development
  tools like Cucumber or JBehave, or simply use JUnit. You can integrate with
  requirements stored in an external source (such as JIRA or any other test cases
  management tool), or just use a simple directory-based approach to organise
  your requirements.
- framework for Behaviour-Driven Development (BDD). BDD is an 
  evolution of test-driven development (TDD) and acceptance-test driven 
  design, and is intended to make these practices more accessible and 
  intuitive to newcomers and experts alike. It shifts the vocabulary 
  from being test-based to behaviour-based, and positions itself as a 
  design philosophy.

1 Write story
Scenario: A trader is alerted of status
Given a stock and a threshold of 15.0
When stock is traded at 5.0
Then the alert status should be OFF
When stock is traded at 16.0
Then the alert status should be ON

2 Map to java
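
Before wiring the story to JBehave/Cucumber step annotations, the scenario's
domain logic can be sketched in plain Java; the class and method names below
are illustrative, not from any framework:

```java
// Hypothetical domain class behind the stock-alert scenario above.
public class Stock {
    private final double threshold;
    private String alertStatus = "OFF";

    public Stock(double threshold) { this.threshold = threshold; }

    // Each "When stock is traded at X" step calls this method.
    public void tradeAt(double price) {
        alertStatus = price > threshold ? "ON" : "OFF";
    }

    // Each "Then the alert status should be ..." step asserts on this.
    public String alertStatus() { return alertStatus; }

    public static void main(String[] args) {
        Stock stock = new Stock(15.0);           // Given a stock and a threshold of 15.0
        stock.tradeAt(5.0);                      // When stock is traded at 5.0
        System.out.println(stock.alertStatus()); // prints OFF
        stock.tradeAt(16.0);                     // When stock is traded at 16.0
        System.out.println(stock.alertStatus()); // prints ON
    }
}
```

Step-definition methods in JBehave or Cucumber would then delegate to this
class, keeping the mapping layer thin.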

3 Configure Stories

4 Run Stories
GraalVM Summary

- Graal: How to Use the New JVM JIT Compiler in Real Life

- GraalVM Native Image
º"native-image"º utility:
 - ahead-of-time compiler to a Bºstandalone executableº.
 - JVM is replaced with the necessary components (memory manager,
   thread scheduler) in the "Substrate VM" runtime:
   Substrate VM is actually the name for the runtime components
   (like the deoptimizer, garbage collector, thread scheduling etc.).
 - Result has faster startup time and lower runtime memory use.
 - It statically analyses which classes and methods are reachable
   and used during application execution and passes all this
   reachable code as the input to the GraalVM compiler for
   ahead-of-time compilation into the native executable.
Ex. Usage:
  # tested with graalvm 19.3.1
  ./gradlew spotlessApply
  ./gradlew build
  ./gradlew shadowJar  // ← create fat JARs, relocate packages for apps/libs
  cd "build/libs" || exit
  native-image \
     -cp svm-1.0-SNAPSHOT-all.jar \
     org.web3j.svm.MainKt \
     --no-fallback \
     --enable-https

Extracted from "Hibernate with Panache" by Emmanuel Bernard.
""" Quarkus is Supersonic Subatomic Java: extremely fast with low memory footprint """.
Hibernate ORM is the de facto JPA implementation and offers you the full
breadth of an Object Relational Mapper. It makes complex mappings possible,
but it does not make simple and common mappings trivial. Hibernate ORM with
Panache focuses on making your entities trivial and fun to write in Quarkus.
Tech Radar
LMAX Disruptor: High Perf Inter-Thread Messaging Library

See also:

LMAX Exchange Getting Up To 50% Improvement in Latency From Azul's Zing JVM
Interesting points about GC tuning.
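The core Disruptor idea (a pre-allocated ring buffer coordinated by sequence
counters instead of locks) can be sketched in a few lines. This is an
illustrative toy, not the real LMAX API — the actual library adds batching,
wait strategies, cache-line padding, and multi-consumer support:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal single-producer/single-consumer ring buffer in the spirit of
// the Disruptor: slots are claimed and released via sequence counters.
public class MiniRing {
    private final long[] buffer;
    private final int mask;                                  // size must be a power of two
    private final AtomicLong published = new AtomicLong(-1); // last slot written
    private final AtomicLong consumed  = new AtomicLong(-1); // last slot read

    public MiniRing(int sizePow2) {
        buffer = new long[sizePow2];
        mask = sizePow2 - 1;
    }

    public void publish(long value) {
        long seq = published.get() + 1;
        // Busy-spin until the consumer has freed the slot we want to reuse.
        while (seq - consumed.get() > buffer.length) Thread.onSpinWait();
        buffer[(int) (seq & mask)] = value;
        published.set(seq); // makes the slot visible to the consumer
    }

    public long take() {
        long seq = consumed.get() + 1;
        while (published.get() < seq) Thread.onSpinWait(); // wait for data
        long v = buffer[(int) (seq & mask)];
        consumed.set(seq);
        return v;
    }

    public static void main(String[] args) throws Exception {
        MiniRing ring = new MiniRing(8);
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= 100; i++) ring.publish(i);
        });
        producer.start();
        long sum = 0;
        for (int i = 0; i < 100; i++) sum += ring.take();
        producer.join();
        System.out.println(sum); // 5050
    }
}
```

Note how no indirection or per-message allocation happens on the hot path —
that is precisely the cache-friendliness the Disruptor exploits.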
Java Value Types proposal
            │ "NOW" (2019─12)            │ "FUTURE"
            │ + primitive types          │ + primitive types
            │ + object references:       │ + object references:
            │ ─ NO low─level mem.control │ + Value types
            │   (deliberately)           │
PROS:       │ ─ Great simplicity         │ ─ Memory layout efficiency
            │                            │ ─ removes the need for
            │                            │   a full object header
            │                            │   for each item of composite data.
            │                            │ ─ header removal
            │                            │   =→ instance metadata removal
CONS:       │ performance penalties      │ ─ Higher complexity
            │ ─ indirections in arrays   │ ─ object's identity is lost
            │ ─ cache misses.            │ ─ new bytecodes needs to be introduced
Tribe: reliable multicast
REF: @[http://tribe.ow2.org/]
- Unlike JGroups, Tribe only targets reliable multicast
  (no probabilistic delivery) and is optimized for cluster
  communications.
JGroups multicast
- toolkit for reliable multicast communication.
- point-to-point FIFO communication channels (basically TCP)
- Targets high performance cluster environments.

Apache MINA:Netty Alt.
Apache MINA vs Netty: https://www.youtube.com/watch?v=A2pWsxPWJuc

Apache MINA is a network application framework which helps users develop high
performance and high scalability network applications easily. It provides an
abstract event-driven asynchronous API over various transports such as TCP/IP
and UDP/IP via Java NIO.

Apache MINA is often called:
- NIO framework library
- client server framework library, or
- a networking socket library

Apache MINA comes with many subprojects :
- Asyncweb : An HTTP server built on top of the MINA asynchronous framework
- FtpServer : An FTP server
- SSHd : A Java library supporting the SSH protocol
- Vysper : An XMPP server
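The kind of event-driven, selector-based NIO plumbing that MINA abstracts away
can be sketched with plain java.nio (using an in-process Pipe instead of a
network socket, so the example is self-contained):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.charset.StandardCharsets;

public class NioEventDemo {
    // One readiness-selection round-trip: write to a pipe, wait for the
    // selector to report the source end readable, then read the message.
    public static String demo() throws Exception {
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);
        try (Selector selector = Selector.open()) {
            pipe.source().register(selector, SelectionKey.OP_READ);
            pipe.sink().write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));
            selector.select();                 // block until a channel is ready
            ByteBuffer buf = ByteBuffer.allocate(16);
            pipe.source().read(buf);
            buf.flip();
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // ping
    }
}
```

MINA (and Netty) wrap exactly this select/dispatch loop behind a higher-level
event-handler API, so application code never touches the Selector directly.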
- @[http://osv.io/]
  versatile modular unikernel designed to run unmodified Linux
  applications securely on micro-VMs in the cloud. Built from the ground up for
  effortless deployment and management of micro-services and serverless apps,
  with superior performance. (Includes CRaSH shell)
AsyncAPI
- Building the future of event-driven architectures.
- Open source tools to easily build and maintain your event-driven architecture.
- All powered by the AsyncAPI specification, the industry standard for defining
  asynchronous APIs.

Google Guava

VisibleForTesting [qa]
- REF: @[https://stackoverflow.com/questions/6913325/annotation-to-make-a-private-method-public-only-for-test-classes]
- Documentation: @[https://guava.dev/releases/19.0/api/docs/com/google/common/annotations/VisibleForTesting.html]
JS transpilers
- JSweet.org: Java to Javascript transpiler:

  TeaVM is an ahead-of-time compiler for Java bytecode that emits JavaScript
  and WebAssembly that runs in a browser. Its close relative is the well-known
  GWT. The main difference is that TeaVM does not require source code, only
  compiled class files. Moreover, the source code is not required to be Java,
  so TeaVM successfully compiles Kotlin and Scala.

- @[https://www.infoq.com/news/2019/05/j2cl-java-javascript-transpiler/]
"""Main JVM bytecode to JavaScript compilers are TeaVM,[20] the compiler
contained in Dragome Web SDK,[21] Bck2Brwsr,[22] and j2js-compiler.[23]"""
- See also:
  Bouncy Castle FIPS JCA provider
Javalin: Kiss Kotlin/Java web framework
- Inspired by Javascript KOA.js framework

- Ex: Declare server and API in the same place
  | import io.javalin.ApiBuilder.*;
  | import io.javalin.Javalin;
  | Javalin app = Javalin.create(config -˃ {
  |     config.defaultContentType = "application/json";
  |     config.addStaticFiles("/public");
  |     config.enableCorsForAllOrigins();
  | }).routes(() -˃ {
  |     path("users", () -˃ {
  |         get(UserController::getAll);
  |         post(UserController::create);
  |         path(":user-id", () -˃ {
  |             get(UserController::getOne);
  |             patch(UserController::update);
  |             delete(UserController::delete);
  |         });
  |         ws("events", userController::webSocketEvents);
  |     });
  | }).start(port);
JNR(JNI/UNIX friendly)
( used by Netty and others...)
 - load native libraries without writing JNI code by hand, or using tools such as SWIG.
 - jnr-unixsocket: UNIX domain sockets (AF_UNIX) for Java
 - Java Native Runtime Enhanced X-platform I/O
 - Pure java x86 and x86_64 assembler
 - AArch64 assembler for the Java Native Runtime
 - A ProcessBuilder look-alike based entirely on native POSIX APIs
 Java Platform Module System (JPMS) (1.9+) 
- JSR 379: JAVA SE 9
By Paul Deitel

- higher level of aggregation above packages.
-ºuniquely named, reusable group of related packages and resources.º

- module descriptor: (compiled version of module-info.java )
  /module-info.class  ( @ module root's folder)
  - name
  - dependencies (modules)
  - packages explicitly marked as available to other modules 
    (by default  implicitly unavailable / strong encapsulation)
  - services offered
  - services consumed
  - list of modules allowed reflective access

- Rules:
  - Each module must explicitly state its dependencies.
  - provides explicit mechanism to  declare dependencies between
    modules in a manner that’s recognized both at Bºcompile timeº
    and Bºexecution timeº.

- The java platform is now modularized into ~ 95 modules
$º$ java --list-modulesº ←  List modules in SE, JDK, Oracle, ...
  ( custom runtimes can be created )

BºModule Declarationsº
  cat module-info.java:
  module java.desktop { ← body can be empty 
     requires modulename;  ← add the 'static' modifier to require it just at compile time.
     requires transitive java.xml; ← if a java.desktop method returns a type
                                     from the java.xml module, code using
                                     (reading) java.desktop become dependent
                                     on java.xml. Without 'transitive' compilation
                                     will fail.
     exports ...    ← declares module’s packages whose public types 
                      (and their nested public and protected types) 
                      are accessible to code in all other modules. 
     exports to ... ← fine grained export
     uses           ← specifies a service used by this module
                      (making our module a service consumer).
                      → modules implements/extends the interface/abstract class

     provides...with ← specifies that a module provides a service implementation

     open 'package'  ← Specifies object introspection scope 
     opens ... to

Java Erasure
- Type Erasure is a technique employed by the Java compiler to support the use of Generics.
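A quick demo of erasure in action — after compilation both parameterizations
share one and the same runtime class:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String>  strings = new ArrayList<>();
        List<Integer> ints    = new ArrayList<>();
        // The type parameters are erased; at runtime there is only ArrayList.
        System.out.println(strings.getClass() == ints.getClass()); // true
        System.out.println(strings.getClass().getName()); // java.util.ArrayList
    }
}
```

This is also why `instanceof List<String>` and `new T()` are rejected by the
compiler: the parameter is simply not there at runtime.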
JAAS
- Sort of Pluggable Authentication, similar to the UNIX PAM.
Eclipse Microprofile

- launched at JavaOne 2016 to address the shortcomings in the Enterprise Java microservices space.

- MicroProfile specifies a collection of Java EE APIs and technologies which together
  form a core baseline microservice that aims to deliver application portability across multiple runtimes.

- MicroProfile 1.0 spec includes a subset of the 30+ Java Enterprise specifications:
  - JAX-RS 2.0 for RESTful endpoints
  - CDI 1.1 for extensions and dependency injection
  - JSON-P 1.0 for processing JSON messages.

- MicroProfile 1.2  (September 2017) include:
  - Configuration 1.1
  - Fault Tolerance
  - JWT
  - Metrics
  - Health Check

- MicroProfile 2.0 (Future). It is expected it will align all APIs to Java EE 8.

- vendors runtime support:
  - WebSphere Liberty IBM
  - TomEE from Tomitribe
  - Payara
  - RedHat's WildFly Swarm
  - KumuluzEE.

- Community support:
  - London Java Community
  - SOUJava
  - ...

- key code sample consists of four microservices and a front-end application.
  Vendor            |     JAR |      StartUp
                    | size/Mb | Time in Secs
  WebSphere Liberty |   35    |            7
  WildFly Swarm     |   65    |            6
  Payara            |   33    |            5
  TomEE             |   35    |            3
  KumuluzEE*        |   11    |            2

- CDI-Centric Programming Model
  - Context and Dependency Injection specification
  - Two of its most powerful features are interceptors and observers.
    - Interceptors perform cross-cutting tasks that are orthogonal to business logic
      such as auditing, logging, and security
    - The baked-in event notification model implements the observer
      pattern to provide a powerful and lightweight event notification system
      that can be leveraged system-wide.
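The observer pattern the CDI event model bakes in can be sketched in plain
Java (real CDI uses Event˂T˃.fire() and @Observes instead of this hypothetical
EventBus class):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Plain-Java sketch of the observer pattern behind CDI's event notification.
public class EventBus<T> {
    private final List<Consumer<T>> observers = new ArrayList<>();

    // Analogous to declaring an @Observes method.
    public void observe(Consumer<T> observer) { observers.add(observer); }

    // Analogous to Event<T>.fire(event): every observer is notified.
    public void fire(T event) { observers.forEach(o -> o.accept(event)); }

    public static void main(String[] args) {
        EventBus<String> bus = new EventBus<>();
        bus.observe(e -> System.out.println("audit: " + e)); // cross-cutting concern
        bus.observe(e -> System.out.println("log:   " + e)); // another one
        bus.fire("user-created");
    }
}
```

The point of the CDI version is that producer and observers never reference
each other directly — the container does the wiring system-wide.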
Concurrency Classes Video
Avian Embedded JVM
https://readytalk.github.io/avian/  ("Embedded java")
- lightweight JVM+class library designed to provide a useful subset 
  of Java’s features, suitable for building self-contained applications.

From Mike's blog (https://blog.plan99.net/kotlin-native-310ffac94af2):
""" Avian is a lightweight virtual machine and class library designed 
    to provide a useful subset of Java’s features, suitable for 
    building self-contained applications.
     So says the website. They aren’t joking. The example app demos use 
    of the native UI toolkit on Windows, MacOS X or Linux. It’s not a 
    trivial Hello World app at all, yet it’s a standalone 
    self-contained binary that clocks in at only one megabyte. In 
    contrast, "Hello World" in Go generates a binary that is 1.1mb in 
    size, despite doing much less.
     Avian can get these tiny sizes because it’s fully focused on doing 
    so: it implements optimisations and features the standard HotSpot JVM 
    lacks, like the use of LZMA compression and ProGuard to strip the 
    standard libraries. Yet it still provides a garbage collector and a 
    JIT compiler. """

  Experimental Reactive Relational Database Connectivity Driver, R2DBC, Announced at SpringOne

    .flatMapMany ( conn -˃
       conn.createStatement ( "SELECT value FROM test" )
            .execute ()
            .flatMap (result -˃
              result.map((row, metadata) -˃ row.get("value"))))
- Non-Java foreign-function and data interfaces, including 
  native function calling from JVM (C, C++), and native data access 
  from JVM or inside JVM heap
- Nailgun is a client, protocol, and server for running Java programs
  from the command line without incurring the JVM startup overhead.

- Programs run in the server (which is implemented in Java), and are
  triggered by the client (written in C), which handles all I/O.

- static Java source and byte code analyzer that detects locking and 
  threading issues, performance and scalability problems, and checks 
  complex contracts such as Java serialization by performing type, data 
  flow, and lock graph analysis.
9 Profiling tools
jLine: GNU/readline alike library for JAVA:
- Builtin support for console variables, scripts, custom pipes, widgets and object printing.
- Autosuggestions
- Language REPL Support

PicoCli
@[https://picocli.info/]
- Picocli is a one-file framework for creating Java command line applications
  with almost zero code. It supports a variety of command line syntax styles
  including POSIX, GNU, MS-DOS and more. It generates highly customizable
  usage help messages that use ANSI colors and styles to contrast important
  elements and reduce the cognitive load on the user.
A Year with Java 11 in Production!
Andrzej Grzesik talks about Revolut’s experience in running Java 11 in production for over a year. He talks about the doubts they had, some pain points and gains, as well as some surprises along the way. He discusses tools, alternative JVM languages, and some 3rd party products.
Java Poet

Java API for generating .java source files.

- useful for:
  - transpiling: Custom language to java.
  - annotation processing
  - interacting with metadata files (database schemas, protocol formats,...).

  Avoid boilerplate while also keeping a ºsingle source of truthº.

 jenv: command line tool to help you forget how to set the JAVA_HOME environment variable:

 $ jenv add /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
   oracle64- added
 $ jenv add /Library/Java/JavaVirtualMachines/jdk17011.jdk/Contents/Home
   oracle64- added

  List managed JDKs

  $ jenv versions
    * oracle64- (set by /Users/hikage/.jenv/version)

  $ jenv global oracle64- Configure global version
  $ jenv local oracle64- Configure local version (per directory)
  $ jenv shell oracle64- Configure shell instance version
SmallRye Mutiny

SmallRye Mutiny is a reactive programming library. Wait? Another one?  Yes!

Mutiny is designed after having experienced many issues with other 
Reactive programming libraries and having seen many developers lost 
in an endless sequence of flatMap. Mutiny takes a different approach. 
First, Mutiny does not provide as many operators as the other famous 
libraries, focusing instead on the most used operators. Furthermore, 
Mutiny provides a more guided API, which avoids having classes with 
hundreds of methods that cause trouble for even the smartest IDE. 
Finally, Mutiny has built-in converters from and to other reactive 
programing libraries, so you can always pivot. (sic: programming)
Obevo: DDBB deployment tool handling enterprise-scale schemas and complexity.

""" Deploying tables for a new application?
  Or looking to improve the DB Deployment of a years-old system with 
  hundreds (or thousands) of tables, views, stored procedures, and 
  other objects?

  Obevo has your use case covered.

  Supported platforms: DB2, H2, HSQLDB, Microsoft SQL Server, MongoDB, 
  Oracle, PostgreSQL, Redshift (from Amazon), Sybase ASE, Sybase IQ """
Immutable Objects are faster
- One Framework to rule them all by Norman Maurer
SwarmCache
SwarmCache is a simple but effective distributed cache. It uses IP 
multicast to efficiently communicate with any number of hosts on a 
LAN. It is specifically designed for use by clustered, 
database-driven web applications. Such applications typically have 
many more read operations than write operations, which allows 
SwarmCache to deliver the greatest performance gains. SwarmCache uses 
JavaGroups internally to manage the membership and communications of 
its distributed cache.

Wrappers have been written that allow SwarmCache to be used with the 
Hibernate and JPOX persistence engines.
Strong Typing in Java
 Strong Typing in Java: a religious argument
bytes java
- utility library that makes it easy to create, parse, transform, 
  validate and convert byte arrays in Java
String.valueOf(Object) vs....
Common Memory Leak pitfalls
- Ref: http://java.jiderhamn.se/2012/02/26/classloader-leaks-v-common-mistakes-and-known-offenders/
  Logging frameworks such as Apache Commons Logging (ACL) – 
  formerly Jakarta Commons Logging (JCL) – log4j and 
  java.util.logging (JUL) will cause classloader leaks under some 
  circumstances.

  Apache Commons Logging will cause trouble if the logging framework 
  is supplied outside of the web application, such as within the 
  Application Server. In such a case, you need to add a bit of cleanup 
  code to the ServletContextListener we’ve talked about, e.g.:
  | LogFactory.release(Thread.currentThread().getContextClassLoader());

  There is an article about this on the Apache Commons Wiki. It is also mentioned in the guide and FAQ.
3 NIO ways to read files 
- read small file using ByteBuffer and RandomAccessFile
- FileChannel and ByteBuffer to read large files
- Example 3: Reading a file using memory-mapped files in Java
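Example 3 (memory-mapped read) can be sketched with plain NIO; the helper
class below is illustrative:

```java
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapRead {
    // Map the whole file into memory and decode it as UTF-8.
    public static String readAll(Path p) throws Exception {
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello nio"); // Java 11+
        System.out.println(readAll(tmp));    // hello nio
        Files.delete(tmp);
    }
}
```

Memory mapping shines for large files read repeatedly: the OS page cache does
the buffering, and no user-space copy loop is needed.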
- JUnit rule for comparing tables and Spark module for comparing large data sets
- You can use the jlink tool to assemble and optimize a set of modules 
  and their dependencies into a custom runtime image

JSR-330: Provider˂MyTargetBean˃

FROM https://github.com/google/guice/wiki/JSR330
- JSR-330 standardizes annotations like @Inject and the Provider 
  interfaces for Java platforms.
- It doesn't currently specify how applications are configured, so it 
  has no analog to Guice's modules.
High Perf Persistence
Running *.java (11)
PKCS#11 Ref.guide
Loading properties
NOTE: Probably is better to use ENV.VARs to simplify compatibility
      with container deployments.
Config properties files located in .../src/main/resources/db_config.properties

InputStream is = getClass().getResourceAsStream("/db_config.properties");
Properties props = new Properties();
props.load(is);

└ How to add comments to properties file: lines starting with '#' or '!' are comments.
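Runnable sketch putting it together ('#' and '!' lines are ignored by
Properties.load; the keys are illustrative):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {
    // Parse properties from a string (same format as a .properties file).
    static Properties parse(String content) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(content));
        return props;
    }

    public static void main(String[] args) throws Exception {
        String content = String.join("\n",
                "# a comment",
                "! also a comment",
                "db.host=localhost",
                "db.port=5432");
        Properties props = parse(content);
        System.out.println(props.getProperty("db.host")); // localhost
        System.out.println(props.size());                 // 2
    }
}
```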
XML Stream parsing
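A minimal StAX (javax.xml.stream, in the JDK) pull-parsing sketch; the
document and helper method are illustrative:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class StaxDemo {
    // Pull events one by one, collecting the 'name' attribute of every <user>.
    static List<String> userNames(String xml) throws Exception {
        List<String> names = new ArrayList<>();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            if (r.next() == XMLStreamConstants.START_ELEMENT
                    && r.getLocalName().equals("user")) {
                names.add(r.getAttributeValue(null, "name"));
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<users><user name=\"ana\"/><user name=\"bob\"/></users>";
        System.out.println(userNames(xml)); // [ana, bob]
    }
}
```

Unlike DOM, the stream is never fully materialized, so this scales to
arbitrarily large documents with constant memory.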
CGLIB library
- CGLIB library: Used for bytecode generation/method injection (Used by 
  Spring Framework for example)
Debugger Architecture
Reactive Spring with Vert.x
Reactive Spring Boot programming with Vert.x
The latest bundle of Red Hat supported Spring Boot starters was recently
released. In addition to supporting the popular Red Hat products for our Spring
Boot customers, the Red Hat Spring Boot team was also busy creating new ones.
The most recent technical preview added is a group of Eclipse Vert.x Spring
Boot starters, which provide a Spring-native vocabulary for the popular JVM
reactive toolkit.
5 Not So Obvious Things About RxJava
- https://medium.com/@jagsaund/5-not-so-obvious-things-about-rxjava-c388bd19efbc

- Error control

- Dealing with RxJava's never-ending Observables
Example JVM config.
Server version:        Apache Tomcat/8.x
Server built:          unknown
Server number:         8.0.x
OS Name:               Linux
OS Version:            3.10.0-1062.9.1.el7.x86_64
Architecture:          amd64
Java Home:             /ec/local/appserver/u000/app/java/jdk1.8.0_121-strong/jre
JVM Version:           1.8.0_121-b13
JVM Vendor:            Oracle Corporation
Command line argument: -Djava.util.logging.config.file=.../logging.properties
Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
Command line argument: -Xms1536m
Command line argument: -Xmx1536m
Command line argument: -XX:MetaspaceSize=512m
Command line argument: -XX:MaxMetaspaceSize=512m
Command line argument: -XX:MaxDirectMemorySize=1G
Command line argument: -XX:+UseParallelGC
Command line argument: -XX:ParallelGCThreads=4
Command line argument: -XX:+UseParallelOldGC
Command line argument: -XX:LargePageSizeInBytes=4m
Command line argument: -XX:-BindGCTaskThreadsToCPUs
Command line argument: -Djava.awt.headless=true
Command line argument: -Dsun.net.inetaddr.ttl=60
Command line argument: -Dsun.net.inetaddr.negative.ttl=5
Command line argument: -Djava.io.tmpdir=/_tmp
Command line argument: -Dfile.encoding=UTF-8
Command line argument: -XX:ErrorFile=./logs/fatal_error/hs_err_pid%p.log
Command line argument: -Dcom.sun.management.jmxremote=true
Command line argument: -Dcom.sun.management.jmxremote.login.config=Tomcat
Command line argument: -Djava.security.auth.login.config=..../login.config
Command line argument: -Dcom.sun.management.jmxremote.access.file=.../jmxremote.access
Command line argument: -Dcom.sun.management.jmxremote.ssl=false
Command line argument: -Djava.rmi.server.hostname=tcsn0201.cc.cec.eu.int
Command line argument: -Dcom.sun.management.jmxremote.ssl.need.client.auth=false
Command line argument: -verbose:gc
Command line argument: -Xloggc:....API_TEST-gc.log
Command line argument: -XX:+PrintGCDetails
Command line argument: -XX:+PrintGCTimeStamps
Command line argument: -XX:+PrintTenuringDistribution
Command line argument: -XX:+PrintGCApplicationConcurrentTime
Command line argument: -XX:+PrintGCApplicationStoppedTime
Command line argument: -Djava.endorsed.dirs=.../tomcat8/endorsed
Command line argument: -Dcatalina.base=...
Command line argument: -Dcatalina.home=...
Command line argument: -Djava.io.tmpdir=.../temp
mvn: Default Wrapper in 3.7
TLS enh. Backported to 1.8
Java on VC.Studio
Vert.x 3.9 Fluent API Query
Red Hat build of Eclipse Vert.x 3.9 brings Fluent API Query
You use the jhsdb tool to attach to a Java process or to a core dump 
from a crashed Java Virtual Machine (JVM).

jhsdb is a Serviceability Agent (SA) tool. Serviceability Agent (SA) 
is a JDK component used to provide snapshot debugging, performance 
analysis and to get an in-depth understanding of the Hotspot JVM and 
the Java application executed by the Hotspot JVM.

Even though native debuggers like gdb are available for examining the 
JVM, unlike jhsdb, these native debuggers do not have an inbuilt 
understanding of the data structures in Hotspot and hence, are not 
able to throw insights on the Java application being executed. jhsdb 
knows about the locations and address ranges of crucial components of 
the JVM like the Java heap, heap generations, regions, code cache, etc.
JVM Troubleshooting and Monitoring
- Flight Recorder:

- Built-in tools in JDK:           - Docker commands:
  - jstat                            - stats
  - jcmd                             - inspect
  - jmap (Not recommended)           - top
  - jhat ...                       
                                   - Container-aware tools:
- Expose JMX port                    - ctop
  - VisualVM                         - dstat
  - jConsole                       
                                   - cAdvisor
- Micrometer                       - Prometheus
- Others: New Relic, Stackify,     - Docker EE, Datadog, Sysdig, ...
  AppDynamics, Dynatrace, ...
GraalVM Native Image

| FROM oracle/graalvm-ce:20.0.0-java11 as builder
| WORKDIR /app
| COPY . /app
| RUN gu install native-image
| # Build the app (via Maven, Gradle, etc) and create the native image
| FROM scratch
| COPY --from=builder /app/target/my-native-image /my-native-image
| ENTRYPOINT ["/my-native-image"]

- to build a statically linked native image:
  ...Luckily GraalVM has a way to also include the necessary system
  libraries in the static native image with musl libc:
  - In your Dockerfile download the musl bundle for GraalVM:
| RUN curl -L -o musl.tar.gz \
|     https://github.com/gradinac/musl-bundle-example/releases/download/v1.0/musl.tar.gz ⅋⅋ \
|     tar -xvzf musl.tar.gz
  And then add a native-image parameter that points to the extracted location of the bundle, like:
  Now your native image will include the standard library system calls that are needed!

- If AOT compilation fails, native-image will fall back to just running
  the app in the JVM.
  To avoid it running on the JVM, pass the --no-fallback flag.

- FAIL-FAST: Don't Defer Problems to Runtime
  - make sure native-image is NOT being run with any of these params:

- Reflection Woes:
  - reflection happens at runtime, making it hard for an AOT complier.
  - you can tell GraalVM about what needs reflection access,
    but this can quickly get a bit out-of-hand, hard to derive and maintain.
  - Micronaut and Quarkus do a pretty good job generating the reflection
    configuration at compile time but you might need to augment the
    generated config. (tricky with shaded transitive dependencies).

  - To reliably generate a reflection config you need to exercise as many
    execution code paths as possible, ideally by running unit/integration tests.
  - GraalVM has a way to keep track of reflection and output the configuration.
    - Run the app on GraalVM and use a special Java agent that will be able to
      see the reflective calls.
      - grab GraalVM Community Edition:
      - set JAVA_HOME and PATH.
      - from release assets grab the right native-image-installable-svm-BLAH.jar file
        and extract it in the root of your GraalVM JAVA_HOME directory.
      - run tests with the tracing-agent parameter:
        -agentlib:native-image-agent=config-output-dir=...
        (This will generate the reflection config, and possibly other
         configs for dynamic proxies, etc.)
      - tell native-image about those configs, like:

   - For Quarkus ⅋ Micronaut see their docs (Quarkus / Micronaut) for details on
     how to add your own reflection config files.

- SpotBugs is a program which uses static analysis to look for bugs in 
  Java code. It is free software, distributed under the terms of the 
  GNU Lesser General Public License.

- SpotBugs is the spiritual successor of FindBugs, carrying on from the 
 point where it left off with support of its community. Please check 
 official manual site for details.

- SpotBugs requires JRE (or JDK) 1.8.0 or later to run. However, it 
  can analyze programs compiled for any version of Java, from 1.0 to 
  the latest.

- SpotBugs checks for more than 400 bug patterns. 

SpotBugs can be used standalone and through several integrations, including:
- Ant
- Maven
- Gradle
- Eclipse
Kryo serialization lib
- Object graph serialization library:
 Kryo is a fast and efficient binary object graph serialization 
framework for Java. The goals of the project are high speed, low 
size, and an easy to use API. The project is useful any time objects 
need to be persisted, whether to a file, database, or over the 
network.

 Kryo can also perform automatic deep and shallow copying/cloning. 
This is direct copying from object to object, not object to bytes to 
object.
Text file to String
_ https://howtodoinjava.com/java/io/java-read-file-to-string-examples/
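With Java 11+ the stdlib one-liner is Files.readString; small runnable sketch
(temp file and contents are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadFileDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello file");      // Java 11+
        System.out.println(Files.readString(tmp)); // hello file
        Files.delete(tmp);
    }
}
```

On older JDKs the equivalent is `new String(Files.readAllBytes(path), StandardCharsets.UTF_8)`.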
Async Servlets 3.0+:
Real-World Java 9

Real-World Java 9:

Trisha Gee shows via live coding how we can use the new Flow API to 
utilize Reactive Programming, how the improvements to the Streams API 
make it easier to control real-time streaming data and how the 
Collections convenience methods simplify code. She talks about other 
Java 9 features, including some of the additions to interfaces and 
changes to deprecation.
Collectors (1.8+)
Optional (1.8+)
- Three of the new classes introduced in JDK 8 are 
  DoubleSummaryStatistics, IntSummaryStatistics, 
  and LongSummaryStatistics of the java.util package. These classes make 
  quick and easy work of calculating the total number of elements, minimum 
  value of elements, maximum value of elements, average value of 
  elements, and the sum of elements in a collection of doubles, 
  integers, or longs. Each class's class-level Javadoc documentation 
  begins with the same single sentence that succinctly articulates 
  this, describing each as "A state object for collecting statistics 
  such as count, min, max, sum, and average."
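Quick example of IntSummaryStatistics collected from a stream:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class StatsDemo {
    public static void main(String[] args) {
        // One pass over the data computes all five statistics at once.
        IntSummaryStatistics s = IntStream.of(3, 1, 4, 1, 5).summaryStatistics();
        System.out.println(s.getCount());   // 5
        System.out.println(s.getMin());     // 1
        System.out.println(s.getMax());     // 5
        System.out.println(s.getSum());     // 14
        System.out.println(s.getAverage()); // 2.8
    }
}
```

The same object also works as a Collector target:
`stream.collect(Collectors.summarizingInt(x -> x))`.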
Netty: One FW to rule them all 
 by Norman Maurer
Immutable Objects are faster
(and safer)
JKube: WARs in containers cloud

In this article, you will learn how to deploy a Java web application (WAR)
into a Kubernetes cluster using Eclipse JKube.

- The JKube Maven plugin converts a WAR (dependent on a container) into a cloud-native app.

  - pom.xml:
    ˂!-- ... --˃
    ˂!-- ... --˃
      ˂failOnMissingWebXml˃false˂/failOnMissingWebXml˃  ← configure maven-war-plugin so 
                                                          that it won't fail due
      ˂!-- ... --˃                                        to a missing web.xml file.

      configure JKube to create service-resource manifest using NodePort as the spec.type.

          ˂artifactId˃kubernetes-maven-plugin˂/artifactId˃ ← Alt: openshift-maven-plugin.
        ˂!-- ... --˃

Java classes in the example project

example project contains three Java classes:
- ExampleInitializer: replaces standard WEB-INF/web.xml

  -  register Spring's DispatcherServlet without any 
     additional XML configuration:

     final AnnotationConfigWebApplicationContext context
            = new AnnotationConfigWebApplicationContext();
     final ServletRegistration.Dynamic dsr
            = servletContext.addServlet("dispatcher",
              new DispatcherServlet(context));

  - ExampleConfiguration: Spring-specific config enabling Spring MVC.

  - ExampleResource: standard Spring @RestController.

- Deploy to Kubernetes:

    $ mvn clean package    ← generate war in target/
    $ mvn k8s:build        ← Build docker image  (webapp/example:latest)
                             (Using jkube/jkube-tomcat9-binary-s2i by default)
                             Alternatives like Jetty can be used

    $ mvn k8s:resource     ← create required cluster config resource manifests 
    $ mvn k8s:apply        ← apply to the kubectl-configured cluster
    $ kubectl get pod      ← Verify that app is running
    $ mvn k8s:log          ← Retrieve app Logs
JBang: Simplified Java
TODO: Recipient exceptions: Add CalledFunctionException _
"CalledFunctionException" is an exception that is supposed to be
handled by the calling function. It is actually a CheckedException, but more pedagogic.
It is not really an Exception and must be used just for long-lasting
(I/O- or CPU-intensive) functions.
Today we’re announcing a new beta release of Conclave, a platform 
that makes it easy to use secure hardware enclaves with Java. You can 
use enclaves to:
- Solve complex multi-party data problems, by running programs on a 
  computer that prevents the hardware owner from seeing the 
  data being processed.
- Protect sensitive data from the cloud.
- Make your hosted service auditable and trustworthy.
- Upgrade privacy on distributed ledger platforms like Corda.
Checkpointing from outside of Java
When OpenJDK's Java virtual machine (JVM) runs a Java application, 
it loads a dozen or so classes before it starts the main class. 

- Standard JVM bootstrap: It runs a method several hundred times 
  before it invokes the optimizing compiler on that method.
  This preparation is a critical component of Java’s 
  "write once, run anywhere" power, but it comes at the 
  cost of long startup times.

- new approach: warm up JIT compiler, and then checkpoint 
  application. Later on restore the checkpointed app.

BºWith these changes, we have seen applications that tookº
Bºseconds to start come up warm in milliseconds.º

- HOW-TO from your Java code:
  A Java Native Interface (JNI) library allows you to checkpoint/restore
  a JVM app from inside your Java code.

  - JNI Checkpoint Restore library is based on Linux Checkpoint/Restore
    in Userspace (CRIU).
- JUnit extension for asserting JDK Flight Recorder events
  emitted by an application  identifying performance regressions
  (e.g. increased latencies, reduced throughput).

- JfrUnit supports assertions not on metrics like latency/throughput
  themselves, but on indirect metrics which may impact those.
  - memory allocation, 
  - database IO
  - number of executed SQL statements
  - ...

- JfrUnit provides means of identifying and analyzing such issues in 
  a reliable, environment-independent way in standard JUnit tests, 
  before they manifest as performance regressions in production.