Azure (v 1.0)    [Azure Portal], [Full REST API], [Public Template Collection], [Online PowerShell Console], [CLI Get Started]
(Necessarily incomplete but still quite pertinent list of core people and companies)
REF: @[${wikipedia}/Microsoft_Azure#Key_people]
- Mark Russinovich: CTO, Microsoft Azure[84] @[${wikipedia}/Mark_Russinovich]
- Scott Guthrie   : Microsoft Executive Vice President of
                    Cloud & AI group
- Jason Zander    : Executive Vice President, Microsoft Azure
- Julia White     : Corporate Vice President, Microsoft Azure

- Michael Collier, Robin Shahan: Authors of "Fundamentals of Azure"
Mind-map Services
Data Centers
└─ growing (>28) regions worldwide

GENERAL             COMPUTE                         NETWORKING                      STORAGE
├─ Dashboard        ├─ Virt. machines               ├─ Virtual networks             ├─ basics:blobs,tables,
├─ resource group   ├─ Virt. machines (classic)     ├─ Virtual networks (Classic)   │  queues,file-shares
├─ all resources    ├─ Virt. machines scale sets    ├─ load balancers               ├─ storage accounts
├─ subscriptions    ├─ container services           ├─ application gateways         ├─ storage accounts (classic)
├─ cost mgn + bill. ├─ batch accounts               ├─ Virtual network gateways     ├─ data lake store
├─ reservations     ├─ cloud services (classic)     ├─ local   network gateways     ├─ storsimple device managers
└─ help + support   ├─ remoteapp collections        ├─ dns zones                    ├─ recovery services vaults
                    ├─ container registries         ├─ route tables                 ├─ backup vaults (classic)
                    ├─ availability sets            ├─ cdn profiles                 ├─ site recovery vaults (classic)
                    ├─ disks                        ├─ traffic manager profiles     └─ import/export jobs
                    ├─ snapshots                    ├─ expressroute circuits
                    ├─ images                       ├─ network security groups           SOLUTIONS
                    ├─ disks (classic)              ├─ network security groups (classic) ├─ Power BI
                    ├─ vm images (classic)          ├─ network interfaces                │  └─ Simplified display
                    ├─ citrix xendesktop essentials ├─ public ip addresses               │     of data and charts
                    ├─ citrix xenapp essentials     ├─ reserved ip addresses (classic)   ├─ Office 365
                    └─ function apps                └─ connections                       └─ Microsoft Dynamics
                                                                                            └─ Planning Software

WEB-MOBILE                        DATABASES                             INTELLIGENCE + ANALYTICS
├─ app services                   ├─ sql ddbbs                          ├─ hdinsight clusters
├─ logic apps                     ├─ sql data-warehouses ddbbs          ├─ machine learning studio workspaces
├─ cdn profiles                   ├─ sql server stretch  ddbbs          ├─ stream analytics jobs
├─ media services                 ├─ azure cosmos db                    ├─ cognitive services
├─ search services                ├─ redis caches                       ├─ data lake analytics
├─ mobile engagement              ├─ data factories                     ├─ data factories
├─ api management services        ├─ azure ddbb for mysql servers       ├─ power bi workspace collection
├─ notification hubs              ├─ azure ddbb for postgresql servers  ├─ analysis services
├─ notification hubs namespaces   ├─ sql elastic pools                  ├─ data catalog
├─ integration accounts           └─ sql servers                        ├─ customer insight
├─ app service plans                                                    ├─ log analysis
├─ app service environments                                             ├─ machine learning studio web servic. plans
├─ api connections                                                      ├─ machine learning studio web services
├─ app service certificates                                             ├─ machine learning experimentation
├─ function apps                                                        └─ machine learning model management
└─ app service domains

Data+Analysis                                                                      App Development Tools
├─ SQL DDBB                             Monitoring+Management                      ├─ Team service accounts
├─ DocumentDB                           ├─ monitor                                 ├─ team projects
├─ Data Factoring                       ├─ application insights                    ├─ devtest labs
└─ Machine Learning                     ├─ log analytics                           ├─ application insights
IoT                                     ├─ automation accounts                     ├─ api management services
├─ IoT Hub                              ├─ recovery service vaults                 ├─ Visual Studio
├─ Event hubs                           ├─ backup vaults (classic)                 └─ Notification Hubs
├─ Stream Analytics jobs                ├─ site recovery vaults (classic)
├─ Machine learning studio workspaces   ├─ scheduler job collections
├─ notification hubs                    ├─ traffic manager profiles                Other
├─ notification hubs namespaces         ├─ advisor                                 ├─ azure ad domain ser.
├─ mach. learn. studio web srv plans    ├─ intune                                  ├─ azure ddbb migration services
├─ mach. learn. studio web srv          ├─ intune app protection                   ├─ azure databricks
├─ function apps                        ├─ activity log                            ├─ batch ai
├─ machine learning experimentation     ├─ metrics                                 ├─ bot services
└─ machine learning model management    ├─ diagnostic settings                     ├─ classic dev srvs.
                                        ├─ alerts                                  ├─ cloudamqp
Enterprise Integrations                 ├─ solutions                               ├─ container services (Kubernetes "AKS", ...)
├─ Logic Apps                           └─ free services                           ├─ crypteron
├─ integration accounts                                                            ├─ device provisioning srvs.
├─ biztalk services                     Add-ons                                    ├─ devops projects
├─ service bus                          ├─ new relic accounts                      ├─ event grid subscriptions
├─ api management services              ├─ mysql databases                         ├─ event grid topics
├─ storsimple device managers           ├─ mysql databases clusters                ├─ genomics accounts
├─ sql server stretch databases         ├─ sendgrid accounts                       ├─ location based services accounts
├─ data factories                       ├─ appdynamics                             ├─ logic apps custom connector
└─ relays                               ├─ aspera server on demand                 ├─ mailjet email service
                                        ├─ bing maps api for enterprise            ├─ managed applications
Security + Identity                     ├─ cloudmonix                              ├─ marketplace
├─ security center                      ├─ content moderator                       ├─ migration projects
├─ key vaults                           ├─ hive streaming                          ├─ nosql (document db) accounts
├─ azure active directory               ├─ livearena broadcast                     ├─ nuubit cdn
├─ azure ad b2c                         ├─ mycloudit - azure desktop hosting       ├─ on-premises data gateways
├─ multi-factor authentication (mfa)    ├─ myget-hosted nuget, npm, bower...       ├─ operation log (classic)
├─ user and groups                      ├─ pokitdok platform                       ├─ policy
├─ enterprise applications              ├─ ravenhq                                 ├─ power bi embedded
├─ app registrations                    ├─ raygun                                  ├─ recent
├─ azure ad connect health              ├─ revapm cdn                              ├─ resource explorer
├─ azure ad cloud app discovery         ├─ signiant flight                         ├─ route filters
├─ azure ad privileged identity mgmt    ├─ sparkpost                               ├─ service catalog mnged appl. definitions
├─ azure ad identity protection         ├─ stackify                                ├─ service health
├─ azure information protection         ├─ deep security saas                      ├─ storage sync services
├─ rights management (rms)              ├─ the identity hub                        ├─ tags
├─ access reviews                       └─ marketplace add-ons                     ├─ templates
└─ app service certificates                                                        ├─ time series insights environments
                                                                                   ├─ time series insights event sources
                                                                                   ├─ time series insights reference data sets
                                                                                   ├─ ...
                                                                                   └─ whats new
External Links
[Pricing  Calculator] @[]
[Azure Free Accounts] @[]
                      - 12 months free service
                      - €170 credits to test "anything" for 30 days
                      - 25 services free "forever"
[Visual Studio Down.] @[]

[Azure JAVA SDK API ] @[]
Always-Free Products!!
- Cosmos DB (DDBB)
  5 GB
  400 request units

- App Service (Compute)
  10 web, mobile or API apps

- Functions (Compute)
  1.000.000 requests per month

- Event Grid (Integration)
  100.000 operations/month

- AKS (Compute)

- DevTest Labs

- Active Directory B2C (Identity)
  50.000 Authentications per month

- Service Fabric (Containers)

- Azure DevOps
  5 users (with unlimited private Git repos)

- Security Center (Security)
  Policy assessment and recommendations

- Advisor (Management and Governance)

- Load Balancer (Networking)
 Public load balanced IP (VIP)

- Data Factory (DDBB)
  4 activities low frequency

- Search (Containers)
  10.000 documents

- Notification Hubs (Containers)
  1.000.000 Push Notifications

- Batch (Compute)

- Automation: (management and governance)
  500 minutes of job runtime

- Data Catalog (Analytics)

- Virtual Network
  50 virtual networks.

- Inter-VNET data transfer
  Inbound only

- Bandwidth (Data Transfer)
  5 GB outbound

- Visual Studio Code

- Machine Learning Server
  Develop and run R and Python models
  on your platform of choice

- SQL Server 2017 Developer Edition

Governance Hierarchy
- @[]
- @[]
- @[]

│  ─ Set up Azure AD ("OK" if already subscribed to Office 365).
│    This allows existing AD IDs to be used for Enterprise enrollment,
│    subscriptions, RBAC permissions, ...
│  ─ Optional. Enable (on─premises) Win.Server AD ←→ Azure AD  identity_synchronization
│    (probably using Azure AD Connect) to centralize:
│    ─ identity management.
│    ─ federation.
│    ─ºsingle sign─onº
│  ºAzure Enrollment Hierarchy: (Defined in Enterprise Portal)º
│                     FUNCTIONAL         BUSINESS     GEOGRAPHIC    RESPONSIBLE
│                                        UNIT                       PARTIES
│   ───────────────────────────────────────────────────────────────────────────────────
└── ENTERPRISE Gº*1º   Enterprise        Enterprise   Enterprise  ─ 1+ Enter.Admins
                       Enrollment        Enrollment   Enrollment  - Create Dpt&Acc.Adms
                            │                │         │          - Read-only Admins
                            │                │         │            for acc. billing
                            │                │         │            purposes
    ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ┼  ─ ─ ─ ─ ─ ─ ─ ┼ ─ ─ ─ ──│─ ─       ─ ─ ─ ─ ─ ─ ─
    DEPARTMENTS        ┌────┴────┐       ┌───┴─...   ┌─┴─...
                    Accounting  IT    Consumer        US           ← Department Admins
                                      Electronics    East
                       │         │       │             │
    ─ ─ ─ ─ ─ ─ ─ ─ ─ ─┼─ ─ ─  ─ ┼ ─ ─ ─ ┼ ─ ─ ─ ─ ─ ─ ┼ ─ ─       ─ ─ ─ ─ ─ ── ─
    ACCOUNTS           │         │       │             │
              Gº*2º Account   Account  Account      Account        ← 1+ admins
    manages          Owner     Owner    Owner        Owner           linked to 1 email
    subscriptions      │         │       │             │            Belong to Azure AD
                       │         │       │             │            account or Microsoft
                       │         │       │             │            account (personal use)
                       │         │       │             │            Owns 1+ subscriptions
                       │         │       │             │           ºDefault service adminº
                       │         │       │             │           ºfor its subscriptionsº
    ─ ─ ─ ─ ─ ─ ─ ─ ─ ─│─ ─ ─  ─ ┼ ─ ─ ─ ┼ ─ ─ ─ ─  ─ ─│─ ─       ─ ─ ─ ─ ─ ─ ─ ─
                      ┌┴───┐     │      ┌┴...         ...    Subscription service admin:
  OºSUBSCRIPTIONSº  App1 App1 Product. App1                - Manage resources⅋resource governance.
                    Dev  Test Apps                         - A singleºservice administratorºis
                                                             defined through the Azure Account
                                                            - owners    can be defined in Azure Portal
                                                           - co-admins can be added in Classic Portal
                                                             (automatically added as subs. owners
                                                              in Azure Portal)
    AZURE   PORTAL  @[]
                    - manage a single resource at a time, or
                    - manage multiple resources within Resource groups
  - Fine-grained RBAC:
    - OWNER: Can modify resources in subscription.
    - Administrative: Co-administrator of the subscription ("billing stuff"?)
                      Cannot change the service administrator or add/remove other co-administrators.

    -ºsubscription N → trusts ONLY ONE → Az.ADº
    -ºEach new subscription is provided with an existing/new AD to manageº
     ºadmins/co-admins of such subscription.º
    -º"SILO" for billing, key limits and security boundaryº
    - Azure limits (number of cores/resources, etc.).
    - Contains and organizes all resources and establishes
      governance principles over them.
    -ºVNet-Peeringº allows forºnetwork links between subscriptionsº
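  As a hedged sketch, such a cross-subscription VNet-peering could be created
  with the CLI (all names and the remote VNet resource id below are
  illustrative placeholders; the peering must be created in both directions):

  $ az network vnet peering create \
      --name vnet1-to-vnet2 \
      --resource-group rg01 \
      --vnet-name vnet1 \
      --remote-vnet "/subscriptions/<subs2-id>/resourceGroups/rg02/providers/Microsoft.Network/virtualNetworks/vnet2" \
      --allow-vnet-access            ← permit traffic from the remote VNet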

Gº*2º:@[]. Account Portal

└─ºPOLICIESº(define key limits: number of VMs, storage accounts,...)
    ├─ Resource Groups
    │  │- @[]
    │  │- logical "container" for resources sharing
    │  │Bºcommon lifecycleº (web + ddbb + balancer +...)
    │  │  ^^^^^^^^^^^^^^^^
    │  │  Also used to group common policies, and access control
    │  │ (provision/update/decommission)
    │  │- Applications are commonly segregated into
    │  │  resource groups because they share a common
    │  │  lifecycle. Ex.: Blockchain Workbench resource-group
    │  │    @[]
    │  │- Maps roles/users to resources.
    │  │- can contain resourcesºlocated in different regionsº.
    │  │  (but the group itself belongs to a well-defined region)
    │  │- can be used to scope access control for admin. actions.
    │  │
    │  ├─ resource_1 ← │resource│ºN←─→1º│resource group│
    │  ├─ resource_2   └────────┘  🖒    └──────────────┘
    │  ├─ ...          Resources can be added/removed/moved to/from
    │  └─ resource_n   a resource_group at any time
    │     ^^^^^^^^^^
    │     they can interact with other resources in different resource groups.
    ├─ºRBACº: fine-grained access
    │  - can be assigned to a subscription and
    │    inherited by resource groups.
    │  - Admins can assign roles to users and groups at a specified
    │    scope.(subscription, resource group, resource)
    │    Ex:
    │    - user1 allowed to manage Virt.Mach. in subcription
    │      user2 allowed to manage DDBB       in subcription
    │  -ºRBAC flavorsº:
    │    ├ Owner      : Full access + grant/delegate to other users
    │    ├ Contributor: Full access
    │    └ Reader     : view resources
    │   º>30 pre-built rolesº(Network|DB|Virt.Mach  Contributor, ...)
    ├─ºResource Locksº
    │  - Extra protection designed to
    │   ºprevent(accidental, non-desired) removalº of
    │    existing resources ("storage account", ...)
    │  - Apply to any scope
    ├─ºAzure Automationº
    └─ºAzure Security Centerº
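  RBAC role assignment and resource locks from the tree above can be sketched
  with the CLI (user, scope and lock names are illustrative placeholders):

  $ az role assignment create \                 ← grant a pre-built role at a scope
      --assignee user1@contoso.com \
      --role "Virtual Machine Contributor" \
      --scope /subscriptions/<subs-id>/resourceGroups/rg01

  $ az lock create \                            ← protect against accidental removal
      --name lock-rg01 \
      --lock-type CanNotDelete \                ← or ReadOnly
      --resource-group rg01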
Resources 101
ºWebsitesº         - deployment - monitoring - worker roles
ºVirtual Machinesº - create - configure
ºStorageº          - blobs - tables - queues - file shares
ºVirtual Networksº - create virtual network
                   - site-to-site and point-to-site networking
                   - ExpressRoute
ºDatabasesº        - Azure SQL Database - SQL Server in VM
ºManagement Toolsº - Visual Studio 2013 + Azure SDK
                   - Azure PowerShell cmdlets
                   - Cross-Platform Command-Line Interface

                  ┌Azure Monitor────────────────────────────────────────────────────┐
                  │               ⎧ ┌──────────────────────────────────────────────┐│
                  │               │ │Insights                                      ││
                  │               │ │Application   Container   VM   Monit.Solutions││
                  │               │ └──────────────────────────────────────────────┘│
                  │               │ ┌──────────────────────────────────────────────┐│
                  │               │ │Visualize                                     ││
Application    ─┐ │ ┌─────────┐   │ │Dashboards    Views      PowerBI  Workbooks   ││
OS              │ │ │┌───────┐│   │ └──────────────────────────────────────────────┘│
Azure Resource  │ │ ││Metrics││   │ ┌──────────────────────────────────────────────┐│
Subscription    ├──→│└───────┘├──→┤ │Analyze                                       ││
Azure Tenant    │ │ │┌───────┐│   │ │Metric Analytics     Log Analysis             ││
Custom Source  ─┘ │ ││ Logs  ││   │ └──────────────────────────────────────────────┘│
                  │ │└───────┘│   │ ┌──────────────────────────────────────────────┐│
                  │ └─────────┘   │ │Respond                                       ││
                  │               │ │Alerts         Autoscale                      ││
                  │               │ └──────────────────────────────────────────────┘│
                  │               │ ┌──────────────────────────────────────────────┐│
                  │               │ │Integrate                                     ││
                  │               │ │Logic Apps     Export_APIs                    ││
                  │               ⎩ └──────────────────────────────────────────────┘│
ºAzure Advisorº
- Automated consulting resource that examines current configurations
  and makes practical recommendations in the following areas:
  - High availability (HA)
  - Security
  - Performance
  - Cost: potential cost savings such as underutilized VMs.
ARM Policies:
BºARM custom policiesº
- explicit BºDENY mechanismº preventing users from
Bºbreaking organization standardsº when accessing resources.
- Commonly used to:
  - enforceBºnaming conventionsº,
  - control resource-types that can be provisioned.
  - require resource tags
  - restrict provisioning locations.

- To create a custom ARM policy:
  - Create a policy definition (JSON file) describing the
    actions and/or resources that are specifically denied.
In PowerShell (note: PowerShell uses the backtick, not backslash, for
line continuation):
New-AzureRmPolicyDefinition `
   -Name ServerNamingPolicyDefinition `
   -Description "Policy to enforce server naming convention" `
   -Policy "C:\json\policyServerNaming.json"
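As a hedged sketch, the referenced policyServerNaming.json could contain a
rule like the following (classic policy-definition syntax; the "dev*" name
prefix is a made-up example, not from the source):

  {
    "if":   { "not": { "field": "name", "like": "dev*" } },
    "then": { "effect": "deny" }
  }

Any resource whose name does not start with "dev" would then be denied at
provisioning time.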
Security Center
- best practice analysis and security policy management
  for all subscription (or resource group) resources.
- target: security teams, risk officers

- Application of advanced analytics (AI, behavioral analysis, ...)
  leveraging global threat intelligence from Microsoft products
  and services, Microsoft Digital Crimes Unit (DCU), and Microsoft
  Security Response Center (MSRC), and external feeds.
provision VMs
BºCreation checklistº
  └ start with the virtual networks (vnets)
  └ Choose a good naming convention. Ex:
  · ENV  REGION  SEQ#    PRODUCT  ROLE
  · dev  uw      01      product1 SQL
  · pro  ue      02      product2 web
  · qa   ...                  ...
  · ex: devusc-webvm01 (first dev web server hosted in us south central)
  · RºWARNº: up to 15 chars (win) | 64 chars (linux)
  └ set geographic region: west us, north europe, southeast asia, ...
    RºWARNº: price differs between locations
    - Full list of Azure locations (2020-04)
      $ az account list-locations -o table
        eastasia        japanwest           uksouth            uaecentral
        southeastasia   japaneast           ukwest             uaenorth
        centralus       brazilsouth         westcentralus      southafricanorth
        eastus          australiaeast       westus2            southafricawest
        eastus2         australiasoutheast  koreacentral       switzerlandnorth
        westus          southindia          koreasouth         switzerlandwest
        northcentralus  centralindia        francecentral      germanynorth
        southcentralus  westindia           francesouth        germanywestcentral
        northeurope     canadacentral       australiacentral   norwaywest
        westeurope      canadaeast          australiacentral2  norwayeast

-Bºworkload(VM size) classificationº
   -Bºgeneral purpose   º: balanced cpu─to─memory ratio.
   -Bºcompute optimized º: high cpu─to─memory ratio.
   -Bºmemory optimized  º: high memory─to─cpu ratio.
   -Bºstorage optimized º: high disk throughput and io.
   -Bºgpu               º: targeted for heavy graphics/video editing.
                           model training and inferencing (deep learn)
   -Bºhigh performance  º: fastest cpu VMs with optional high─throughput
    Bºcompute           º: network interfaces.

    ☞ note: VM size can be up/down-graded while the VM is running
            but resizing will Rºautomatically rebootº the machine and
          RºIP settings can changeº after reboot.
            - portal will filter out choices non-compatible with current
              underlying hardware.
            - With stop/deallocate portal will allow any size for
              site/region, since VMs are removed and restarted in
              same/different server.
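    The sizes offered per region, and a resize of a running VM, can be
    sketched with the CLI (resource-group, VM and size names below are
    illustrative placeholders):

    $ az vm list-sizes --location eastus2 -o table   ← sizes offered in a region
    $ az vm resize \                                 ← triggers a reboot (see note)
        --resource-group rg01 \
        --name vm01 \
        --size Standard_DS3_v2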

    └ separate costs for:
      · (windows OS licence is included in price)
      · └ºpay-as-you-goº:
      ·   priced per-hour,  billed per-minute
      · └ºreserved VM instancesº:
      ·   -  upfront commitment for 1/3 years in location
      ·   -Bºup to 72% price savingsº(vs pay-as-you-go)
      └ storage:
        ·  └Oºprovides access to subscription objects in A.Storage º
        ·  ·
        ·  └OºVMs always have 1+ storage accounts holding each VM's disksº
        ·  ·
        ·  └ Usually linked to a resource-group. When the group is destroyed,
        ·  · so is the storage account.
        ·  · $ az storage account create \
        ·  ·   --resource-group rg01     \  ← link to this
        ·  ·   --name st01               \
        ·  ·   --location eastus2        \
        ·  ·   --sku standard_lrs
        ·  └Bºstorage account types:º
        ·  · -Bºstandardº: magnetic drives, 500 I/O ops/sec
        ·  · -Bºpremiumº : solid-state drives (ssds)
        ·  └ fixed-rate limit ofº20,000 I/O operations/secº per STORAGE ACCOUNT
        ·                       º^^^^^^^^^^^^^^^^^^^^^^^^^º
        ·                       ºor up to 40 standard VHDº
        ·                        ^^^^^^^^^^^^^^^^^^^^^^^^
        ·                        use more STORAGE ACCOUNTS for
        ·                        extra I/O | disks
        └ RºWARNº: storage resources still billed when VMs are stopped/deallocated.
        · ☞At least 2 virtual hard disks(VHDs) per VM:
        ·  - disk 1: OS (tends to be quite small)
        ·  - disk 2: temporary storage.
        ·  - Additional disks:
        ·    - max number determined by VM Size selection
        ·    - data for VHD Bºheld as page BLOBsº
        └ BºAzure allocates/bills space ONLY FOR THE STORAGE USEDº
        └·at disk creation Gºtwo optionsº available for VHD:
          -ºunmanaged disksº:
            -Rºowner responsible for STORAGE ACCOUNTSº.
            - pay for the amount of space used.
          -ºmanaged disksº:
            -Gºrecommended, managing shifted to Azureº
            - "you" specify the disk size (up to 4 TB)
              and Azure creates and manages disk and storage.
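    A managed data disk can be created and attached roughly like this
    (resource-group, disk, VM names and the 128 GB size are illustrative
    placeholders):

    $ az disk create \                ← Azure manages the backing storage
        --resource-group rg01 \
        --name datadisk01 \
        --size-gb 128 \
        --sku Premium_LRS
    $ az vm disk attach \
        --resource-group rg01 \
        --vm-name vm01 \
        --name datadisk01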

   -QºMarketplace imagesº ← ex: Wordpress = Linux+LAMP
   -Qºcustom 64-bit OS disk imageº:

VM quotas @[]
VM creation
→ @[]  → create resource
  → marketplace → search "windows server 2016 datacenter"
    → create → basics tab → check subscription
      → "create new resource group"
        → Fill params:
          - VM name
          - region
          - availability options
          - image
          - size
          - administrator user/password.
          - "inbound port rules":
             - rdp (3389)
             - http
             - ...
            → Click º"review + create"º
              → Testing connection to new instance:
              · → select "connect" on VM properties.
              ·   → "download rdp file"
              ·     → launch (local desktop) rdp client
              ·       and select:
              ·       "windows security" → "more choices"
              ·       → "use different account"
              → Optional: save Resource Group as ARM template:
                → ... VM instance → settings → automation script:
                  → "save resource template for later use"

A. VM extensions
- small applications for simple provisioning automation.
- use case: install/configure additional software on a new VM
 ºAFTER INITIAL DEPLOYMENTº
  - use specific configuration.
  - monitor and execute automatically.
- trigger a VM extension against a (new|existing) VM deployment
  using the CLI, PowerShell, ARM templates or the portal.
Automation services
- Requires a new ºAzure Automation Accountº.
- use case: "lot of infrastructure services".
- higher-level automation service.
- automate frequent, error-prone management tasks:
  -ºprocess management automationº:
    provides VMs with Bºwatcher-tasksº reacting to specific
    Bºerror eventsº in the VM/datacenter.
  -ºAzure configuration managementº:
    - track new software updates for the OS, filtering in/out
      (including/excluding) specific updates, and react accordingly.
    - manages all VMs, PCs, devices, ...
  -ºupdate managementº:
    - enabled directly from theºAZURE AUTOMATION ACCOUNTºfor VMs,
      or for a single VM from the VM blade in the portal.
    - manage updates and patches for your VMs.
    - assess status of available updates.
    - schedule installation.
    - review deployment results.
VMs availability Management

 │ FAULT │     │ FAULT │
 │DOMAIN1│     │DOMAIN2│

 ┊       ┊     ┊       ┊
 │ VM1   │     │       │BºVM4º and BºVM5ºin Fault Domain
 │       │     │ VM2   │  1/2 should be "mirrors" of
 │ VM3   │     │       │  each other
 │·BºVM4º│     │BºVM4º │ Availability ·
 │·BºVM5º│     │BºVM5º │ Set          ·
    ↑            ↑
- at least two instances of each VM
  per Avai.Set.

- VMs and their managed-disks in an Avai.Set are
Bºguaranteed to spread across DIFFERENTº:
  -Bºfault domainsº : sharing common power/network.
  -Bºupdate domainsº: sharing same maintenance/reboot schedule.
  - avail. sets Rºdo not protectºagainst software/data errors.
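Creating an availability set and placing VMs in it can be sketched like so
(names are illustrative placeholders; a VM must join the set at creation
time, and the trailing "..." stands for the remaining VM parameters):

$ az vm availability-set create \
    --resource-group rg01 \
    --name avset01 \
    --platform-fault-domain-count 2 \
    --platform-update-domain-count 5
$ az vm create \
    --resource-group rg01 \
    --name vm01 \
    --image win2016datacenter \
    --availability-set avset01 \
    ...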

A. Site Recovery
- replicates physical machines/VMs/resources/Hyper-V/VMware
  from site/location 1 to site/location 2.
- enables Azure as recovery destination.
-Oºvery simple to test failoversº
 OºAFTER ALL, YOU DON'T HAVE A GOOD DISASTER RECOVERY PLANº
 OºIF YOU'VE NEVER TRIED TO FAIL OVERº.
- recovery plans can include custom PowerShell scripts,
  automation runbooks, manual steps, ...
Azure resource Manager (ARM)
 - ARM stands for Azure Resource Manager

   │Resource│ ← instantiates and provides Examples:
 ┌→│Provider│   operations to work with  ºRESOURCE PROVIDER           RESOURCEº
 │              resources                 -----------------           --------
 │              └───┬───┘                 microsoft.compute           VM
 │       ┌──────────┘                      storage account
 │       ↓                                microsoft.web               web apps related res.
 │  │Resource│ : Azure VM,Disk,Queue,...  microsoft.keyvault          vault related res.
 │       ↑                                microsoft.keyvault/vaults ← Full resource name
 │       └──────────────────┐                                         "provider"/"resource"
 │                       ┌──┴────┐
 │   │Resource│ ← set of resources
 │   │Group   │   sharing same life─cycle
 │     ↑
 │     └────────────────┐
 │               ┌──────┴──────┐
 │ BºARMºmanages resource groups through Resource Providers
 │                                       └───────┬────────┘
 │ │JSON template│ → input → Bº|ARM|º→ Evaluates input, dependencies and
 │                                    "forwards" actions in correct order to
 │                                     resource providers
 │                                     └───────┬────────┘

# tip: list existing providers
$ az provider list | jq '.[]["namespace"]'

JSON templates
- Parameterized "macros" allowing replicated re-use of resource deployments.
- Templates areBºmaintained by Dev/DevOps in gitº.
- Each new solution created in the Portal automatically generates an
  associated JSON template. Click "download a template for automation"
  to fetch it.

  Ex. JSON template:
  "resources": [
    {
      "apiVersion": "2016-01-01",
      "type": "Microsoft.Storage/storageAccounts",
      "name": "mystorageaccount",
      "location": "westus",
      "sku": { "name": "standard_lrs" },
      "kind": "storage",
      "properties": { }
    }
  ]

 Bº|ARM|ºparses the input and maps it to a REST request against the
  matching resource provider:

  PUT ${BASE_URL}/providers/Microsoft.Storage/storageAccounts/mystorageaccount?api-version=2016-01-01
  {
    "location": "westus",
    "properties": { },
    "sku": { "name": "standard_lrs" },
    "kind": "storage"
  }
  (${BASE_URL} = .../subscriptions/${subs_id}/resourcegroups/${res_grp_name})

- tip: you can create purpose-specific templates plus a BºMASTER TEMPLATEº
  linking all children. more info at: @[]
-Bºtip: use |resource-groups| to group by common lifecycleº
 Bº     and |tags| for any other grouping/classificationº
$ az vm create \                            ← create new VM
    --resource-group testresourcegroup \    ← link to existing Resource Group
    --name test-wp1-eus-vm \
    --image win2016datacenter \
    --admin-username jonc \
    --admin-password areallygoodpasswordhere
-ºREST APIsº for complex scenarios (GET, PUT, POST, DELETE, PATCH) exist,
  wrapped in client SDKs (C#, Java, Node.JS, Python, ...)
- ex: create new VM with the SDK API (Java fluent style):

  Azure azure = Azure
      .configure()
      .withLogLevel(HttpLoggingDelegatingHandler.Level.BASIC)
      .authenticate(credentials)
      .withDefaultSubscription();
  // ...
  String vmName = "test-wp1-eus-vm";
  azure.virtualMachines().define(vmName)
      .withRegion                         (Region.US_EAST)
      .withExistingResourceGroup          ("testresourcegroup")
      .withExistingPrimaryNetworkInterface(networkInterface)
      .withLatestWindowsImage             ("MicrosoftWindowsServer",
                                           "WindowsServer",
                                           "2012-R2-Datacenter")
      .withAdminUsername                  ("jonc")
      .withAdminPassword                  ("areallygoodpasswordhere")
      .withComputerName                   (vmName)
      .withSize                           (VirtualMachineSizeTypes.STANDARD_DS1)
      .create();

GºApplying an ARM JSON templateº:
-GºSTEP 1)º Edit/customize template (vim, VS Code + ARM extension, ...)
-GºSTEP 2)º Create from template:
$ az group deployment create \            ← deploy to resource group
  --name $deploymentName \
  --resource-group $resourcegroupname \
  --template-file "AzureDeploy.JSON"      ← alt 1: local input template
  (--template-uri https://...             ← alt 2: http template)

$ az storage account show \               ← check correct deploy
  --resource-group $resourcegroupname \
  --name $storageaccountname
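The template-to-REST mapping that Resource Manager performs (sketched above) can be illustrated with a few lines of Python. This is a toy model, not the real ARM engine; the provider type and base URL below are assumptions made for the example:

```python
# Rough sketch (not the real ARM implementation): map one template
# resource entry to the PUT request Resource Manager would issue.
# The base URL, subscription and resource-group values are placeholders.

def resource_to_put(resource, base_url, subscription, resource_group):
    """Build (url, body) for a single ARM template resource entry."""
    url = (f"{base_url}/subscriptions/{subscription}"
           f"/resourcegroups/{resource_group}"
           f"/providers/{resource['type']}/{resource['name']}"
           f"?api-version={resource['apiversion']}")
    # Everything except the addressing fields becomes the request body.
    body = {k: v for k, v in resource.items()
            if k not in ("apiversion", "type", "name")}
    return url, body

resource = {
    "apiversion": "2016-01-01",
    "type": "Microsoft.Storage/storageAccounts",  # assumed provider/type
    "name": "mystorageaccount",
    "location": "westus",
    "sku": {"name": "standard_lrs"},
    "kind": "storage",
    "properties": {},
}

url, body = resource_to_put(resource, "https://management.azure.com",
                            "subid", "testresourcegroup")
print("PUT", url)
print(body)
```

The same splitting idea (addressing fields into the URL, the rest into the body) applies to every resource entry in the template's `"resources"` array.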
A.Disk encryption
- data at rest is encrypted before it is written

- main disk encryption technologies/options for VMs include:
  └ºstorage service encryption (SSE)º
    -ºmanaged by "storage account" adminº
    - Default in Bºmanaged disksº
    - 256-bit symmetric AES

  └ºAzure disk encryption (ADE)º
    - managed by VM owner.
    - bitlocker(windows), dm-crypt (linux)
    - VMs boot underºcustomer-controlled keys and policiesº
    - integrated with Azure key vault
    -Rºnot supported in basic tier VMsº
  -Rºnot compatible with on-premises key management service (KMS)º

-ºpre-setup: create/update vaultº
$ºaz keyvault createº \                 ← create (or update) keyvault
  --name Oº$KV_NAMEº \                    in existing resource group
  --resource-group $resrc_grp \
  --location $region \                  ← ºmust be in same region as the VMsº
                                          (cross-regional limits)
  --enabled-for-disk-encryption true    ← required for disk encryption
  └───────────────────────────┘
   three policies can be enabled:
   - disk encryption:
   - deployment     : allow microsoft.compute resource provider
                      to fetch secrets during resource creation.
   - template deploy: allow Azure resource manager to fetch secrets
                      when referenced in a template deploy.
$ azºVM encryption enableº \          ← encrypt existing VM disk
  --resource-group $resrc_grp \
  --name Oº$VM_NAMEº \
  --disk-encryption-keyvault Oº$KV_NAMEº \
  --volume-type [all | OS | data] \   ← encrypt all disks or just OS disk
  --skipvmbackup

$ az VM encryption show \             ← check result
  --resource-group $resrc_grp \
  --name Oº$VM_NAMEº
-To decrypt:
$ azºVM encryption disableº \
  --resource-group $resrc_grp \
  --volume-type [all | OS | data] \
  --name Oº$VM_NAMEº
batch (jobs) Service
- Use-case: large-scale parallel workloads for specific tasks like rendering, BigData, ...

ºPRE-SETUPº                           ┌─────────────────────────────────┐
   - createºbatch account *1º         │  Azure storage                  │
    (az, portal, ...)                 │                                 │
   - set pool params (number+size of   │ 2)                ^ 4)
     nodes, OS image, apps to install  │ download input    │ upload task
     on nodes,...)º*5º                 v files and apps    │ output
   - setºpool allocation mode: *2º   ┌─┴─────────────────────────────────┐
   - (opt) associate storage acc.    │  Azure batch                      │
     to batch acc.*3                 │                                   │
                                     │Oºjobº sets the pool for its task  │
                    1)               │  computations                     │
                    create pool,     │  ┌─────────────────────────────┐  │
  ┌─────────────┐   jobs and tasks   │  │job             task    task │  │
  │             ├───────────────────→   └────────────────┼───────┼────┘  │
  │application  │                    │job n ←→ m pools*6 │       │       │
  │or service   │  3)monitor tasks   │  ┌────────────────┬───────┬────┐  │
  │             ├←──────────────────→   │                VM      VM*4 │  │
  └─────────────┘                    │  │ºpool of compute nodesº      │  │
                                     │  └─────────────────────────────┘  │
  when adding tasks to a job, the batch service automatically
  schedules each task's associated application for execution.

  º*1º: 1+ batch account  per subscription.                  º*4º: every node is assigned a unique name
         (usually 1 batch.acc per region)                       and ip address.  when removed from pool
         note: batch account n ←→ m batch workloads             the node is reset.
               batch account 1 ←→ n node pools (usually 1)
    - batch account forms the base URL for auth:             º*5º: compute node type:
      https://{account-name}.{region-id}        - dedicated   :
                                                                - low-priority: cheaper
  º*2º: pool allocation mode:                                   target number of nodes:
      - batch service (def): pools allocated transparently      ^^^^^^
      - user subscription: batch VMs,... are created            pool might not reach the desired
        directly in your subscription when pool is              number of nodes.  (max quota
        created.                                                reached,...)
        required to use Azure reserved VM instances.
        - you must also register your subscription           º*6º:
          with Azure batch, and associate the account        - a pool can be created for each job   or
          with an Azure key vault.                           - a pool can be reused  for dif  jobs.
                                                             - jobs can be assigned priorities.
  º*3º: following storage account supported:                   tasks in jobs with higher priority are
    - general-purpose v2 (gpv2) accounts                       inserted into the queue ahead of
    - general-purpose v1 (gpv1) accounts                       others.  lower-priority tasks already
    - BLOB storage accounts (currently                         running are not preempted.
      supported for pools in VM config)

  - max. number of failed-task retries   - client app. can add tasks to a job or
    constraint can be set                  a job manager task can be specified
                                           with the needed info to create the
                                           required tasks for a job
                                           (job mgr taks required for jobs
                                            created by job schedule)

  - by default, jobs remain in             ºapplication packagesº
    active-state when all tasks complete.  - allows uploading/managing multiple
    use 'onalltaskscomplete' property to     app versions run by tasks.
    change the default behaviour.          - application packages can be set for:
                                             - pool: app on every node
                                             - task: app just on the node/s
                                                     scheduled to run task

  resource quotas
  resource                             default limit   maximum limit
  batch accounts
  per region/per subscription          1 - 3           50
  dedicated cores per batch account    10 - 100        n/a
  low-priority cores per batch account 10 - 100        n/a  *1
  active jobs and job schedules        100 - 300       1000
  per batch account
  pools per batch account              20 - 100        500

  *1 if pool allocation mode == "user subscription"
     batch VMs and other resources are created directly
     in the user subscription at pool creation.
     subscription quotas for regional compute cores
     will apply

Note: tightly coupled applications can also be run using one of the two
     ºmessage passing interface (MPI) APIsº (Microsoft MPI or Intel MPI)
      as alternative to Batch jobs
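The queue behaviour described in *6 above (tasks from higher-priority jobs inserted ahead, while already-running lower-priority tasks are never preempted) can be modelled as a toy priority queue. This is an illustrative sketch, not the Batch service API:

```python
import heapq

class PoolQueue:
    """Toy model of a Batch pool's task queue: tasks from
    higher-priority jobs are dequeued first, but a task that has
    already started is never preempted (we only reorder the queue)."""
    def __init__(self):
        self._heap = []
        self._seq = 0          # FIFO tie-breaker within a priority

    def add_task(self, job_priority, task_id):
        # heapq is a min-heap, so negate priority (higher runs first)
        heapq.heappush(self._heap, (-job_priority, self._seq, task_id))
        self._seq += 1

    def next_task(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PoolQueue()
q.add_task(0, "lowprio-task1")     # task of a low-priority job, queued first
q.add_task(100, "highprio-task1")  # higher-priority job jumps ahead in queue
running = q.next_task()
print(running)                     # → highprio-task1
```

Within the same priority, tasks keep their submission order (the `_seq` tie-breaker), matching the "inserted into the queue ahead of others" behaviour.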

run batch job (cli)
$ az group create \                ← create resource group
  --name rg01 \
  --location eastus2

$ az storage account create \      ← create storage account
  --resource-group rg01 \
  --name st01 \
  --location eastus2 \
  --sku standard_lrs

$ az batch account create \        ← create batch account
  --name mybatchaccount \
  --storage-account st01 \         ← link with storage account
  --resource-group rg01 \
  --location eastus2

$ az batch account login \         ← authenticate to batch service
  --name mybatchaccount \
  --resource-group rg01 \
  --shared-key-auth

$ image="canonical:ubuntuserver:16.04-lts"
$ a_sk_id="batch.node.ubuntu 16.04"
$ az batch pool create \           ← create pool
  --id mypool \
  --VM-size standard_a1_v2 \
  --target-dedicated-nodes 2 \     ← pool node size
  --image ${image} \               ← (linux image)
  --node-agent-sku-id "$a_sk_id"

$ az batch pool show \             ← check status
  --pool-id mypool \                 (pool is created in 'resizing' state
  --query "allocationstate"           while nodes are bootstrapped)

$ az batch job create \            ← create job
  --id myjob \
  --pool-id mypool                 ← link to pool

$ script="/bin/bash -c '...'"
$ az batch task create \           ← create 1st task
  --task-id mytask01 \
  --job-id myjob \
  --command-line "${script}"

$ az batch task show \             ← check task status:
  --job-id myjob \                   among the output details, take note
  --task-id mytask01                 of 'exitcode' and 'nodeid'

$ az batch task file list \        ← list output files
  --job-id myjob \
  --task-id mytask01 \
  --output table

$ az batch task file download \    ← download file
  --job-id myjob \
  --task-id mytask01 \
  --file-path stdout.txt \         ← remote file
  --destination ./stdout.txt       ← local destination
create containerized solutions
AKS core concepts
- managed Rºsingle-tenantº kubernetes service
Oºpriceº: pay by number of AKS nodes running apps.
- built on top of the OSS A.container service engine
- creation params:
   - number of nodes
   - size   of nodes.
- aks upgrades orchestrated through cli or portal.
- cluster master logs available in Azure log analytics.

- the following Rºcompute resources are reservedº on each worker
  node for kubelet, kube-proxy, and kube-dns:
  · CPU    : 60m (millicores)
  · RAM    : Rº20% up to 4 gibº
             ex: RAM: 7GIB -Rº1.4GIBº(20%) = 5.6 GIB free for OS+Apps

- Rºresource reservations cannot be changedº
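The memory-reservation rule above can be sketched as a tiny helper (a simplified model, assuming only the 20%-capped-at-4GiB reservation; a real node's allocatable memory also accounts for OS overhead and eviction thresholds):

```python
def aks_allocatable_gib(node_gib):
    """Memory left for OS+apps after the reservation described above:
    20% of node RAM reserved for kubelet/kube-proxy/kube-dns,
    capped at 4 GiB. (Illustrative sketch only.)"""
    reserved = min(0.20 * node_gib, 4.0)
    return node_gib - reserved

print(round(aks_allocatable_gib(7), 1))    # → 5.6  (7 GiB - 1.4 GiB reserved)
print(round(aks_allocatable_gib(32), 1))   # → 28.0 (cap: only 4 GiB reserved)
```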

ºAKS node pools:ºnodes with same configuration

 AKS cluster 1 ←→ n node pools.
                - just 1 default-node-pool defined at AKS resource creation
                - scaling is performed against default-node-pool.

BºHA appº "==" deployment controller + deployment
                                       pod replicas set
                                     + container image/s
                                     + attached storage
                                     + ...
                                       ^deployment can be updated.
                                       dep.control. will take care
                                        of draining/updating

- Bºdeployment pod-disruption-budgetsºcan be set to define
  the ☞ Bºminimum quorum needed for correct app behaviourº.
  This is taken into account when draining existing nodes
  during an update. Ex: a deployment with 5 replicas and a
  budget of at most 1 unavailable pod ensures that no more
  than 1 replica is evicted at a time while nodes drain.
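Such a budget can be declared as a PodDisruptionBudget manifest. A minimal sketch (names and the `minAvailable` value are illustrative; `policy/v1beta1` is the API group of the k8s releases contemporary with these notes):

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb            # illustrative name
spec:
  minAvailable: 4            # with 5 replicas, at most 1 pod may be
                             # evicted at a time during node drains
  selector:
    matchLabels:
      app: myapp             # must match the deployment's pod labels
```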

-ºdefault namespacesº at cluster creation:
  ·ºdefaultº    : default ns for pods and deployments
  ·ºkube-systemº: reserved for core resources (dns,proxy,dashboard,...)
  ·ºkube-publicº: typically unused; visible in the whole cluster by any user.

  -ºk8s service accountsº:
    - one of the primary user types in k8s
    - targeting services (vs human admins/developers).
    - it exists in, and is managed by, the k8s API.
    - credentials stored as k8s secrets.
      - can be used by authorized pods to communicate with API server.
        (authentication token in API request)

  -ºnormal k8s user accountº:
    - "traditional" access to human admins/developers
    - no account/password is stored, but external identity
      solutions can be "plugged in"
      (Azure Active Directory in aks).

  - AD-integrated aks allows to grant users or groups access to
    kubernetes resources per-namespace or per-cluster:
    -ºOpenID connect is used for authenticationº, identity layer
      built on top of the OAuth 2.0 protocol.
    - to verify OpenID auth-tokens the aks cluster uses
      webhook token authentication. (webhook == "http callback")

    $ºaz aks get-credentialsº ← obtain kubectl configuration context.

  users are prompted to sign in with their Azure AD credentials
  before interacting with the aks cluster through kubectl.

Bºk8s RBACº
  - granted per-namespace / per-cluster
  - granted per-user      / per-user-group
  - fine control to:
  -  user/groups n ←-→ m namesp.role 1 ←→ n permission grant
                    │   ( or             - create resource.^
                    │    clust.role)     - edit   resource.│
                    │                    - view app─logs   │
                    │                    - ...             │
                    │               ☞ no concept of deny ──┘
                    │                 exists, only grant

             (ns)   rolebindings
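As a sketch of the grant model above, a namespaced Role plus its RoleBinding might look as follows (names, namespace, and the group subject are illustrative; with AD integration the subject would be an Azure AD user or group):

```yaml
kind: Role                   # namespace-scoped; use ClusterRole for cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: log-viewer
  namespace: dev             # illustrative namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]     # only grants exist; there is no deny rule
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: log-viewer-binding
  namespace: dev
subjects:
- kind: Group                # e.g. an Azure AD group when AD-integrated
  name: "dev-team"           # illustrative group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: log-viewer
  apiGroup: rbac.authorization.k8s.io
```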
  - Azure RBAC:
    - extra mechanism for access control
    - designed to work on resources within subscription.
    - Azure RBAC roles definition outlines permissions to be applied.
    -  a user or group is then assigned this role definition for
       a particular scope (individual resource/resource-group/subscription)

Bºaks security concepts for apps and clustersº
  - master security:
    - by default kubernetes API server uses a public ip address+fqdn.
    - API control access includes RBAC and AD.
  - worker node security:
    - private virtual network subnet,no public ip
    - ssh is enabled by default to internal ip address.
    - Azure network security group rules can be used to further
      restrict ip range access to the aks nodes.
    - managed disks, automatically encrypted at rest, are used for storage.
  - network security
    - Azure virtual network subnets: connecting back to on-premises
      networks requires a site-to-site vpn or an express route connection.
    - k8s ingress controllers can be defined with private ip
      addresses so services are only accessible over this
      internal network connection.

  -ºAzure network security groupsº
   - rules defining allowed/denied traffic based on source⅋destination ip ranges/ports/protocols
   - created to allow k8s-API-tls traffic and nodes-ssh access.
     modified automatically when adding load balancers, port mappings,
     ingress routes.

  -ºk8s secretsº
    - used to inject sensitive data into pods.
    - k8s API used to create new secret. "consumed" by pods/"deployments".
    - stored in tmpfs, never on disk. tmpfs deleted "as soon as possible"
      when no more pods in node require it.
    - only accessible within the secret's namespace.
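A minimal Secret manifest, as a sketch (name and key are illustrative; note that `data` values are base64-encoded, not encrypted):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # illustrative name
type: Opaque
data:
  password: cGFzc3dvcmQ=     # base64("password")
```

A pod in the same namespace can then consume it, e.g. via an environment variable with `secretKeyRef` or via a volume mount.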

- HA apps network options:
  - load balancing
  - TLS termination at ingress, or routing of multiple components.
    - for security reasons, network traffic flow must be restricted
      into|between pods and nodes.

  - k8s provides an abstraction layer to virtual networking:
    WORKER                     K8S
   ºNODESº ── connected to ── Virt.Network
     └→ provides (in/out)bound rules to ºpodsº
        through kube-proxy

    - logically group pods to allow for direct access via an
      ip address or dns name and on a specific port.

  - aks allowsGºtwo network models for k8s clustersº:
    -Gºbasic networking   º: network resources created⅋configured at cluster creation.
                           (default) subnet names, ip address range cannot be customized.
                           following features are provided:
                           - expose a kubernetes service externally|internally through
                             Azure load balancer.
                           - pods can access resources on the public internet.
    -Gºadvanced networkingº: aks cluster is connected to existing virtual
                            network resources and configurations.
                            this vir.NET provides automatic connectivity to other
                            resources and integration with a rich set of capabilities.
                            - nodes will use the Azure container networking
                              interface (cni) kubernetes plugin.
                            - every pod is assigned an ip address in the vir.NET.
                            - pods can directly communicate with other pods and
                              nodes in the virtual network.
                            - a pod can connect to other services in a peered
                              virtual network, including ºon-premises networksº
                              over expressroute and site-to-site (s2s) vpns.
                             ºpods are also reachable from on-premisesº.
                            - pods in a subnet that have service endpoints
                              enabled can connect to services like, SQL DB,....
                            - allows for user-defined routes (udr) to route traffic
                              from pods to a network virtual appliance.

  -ºIngress controllers:º
    - use case: complex application traffic control.
    - an ingress resource can be created with nginx, ..., or set up
      with the aks http application routing feature.
      (an external-dns controller is also created and
       required dns a records updated in a cluster-specific
       dns zone)

  -k8s Bºvolumesº
  -k8s Bºpersistent volumesº
  -k8s Bºstorage classesº
   - alternative to statically created persistent volumes
   - to define different tiers of storage (premium vs standard,
     disk vs file, ...)
   - used normally for dynamic provisioning

  - two initial storageclasses are created:
    ☞ºtake care when requesting persistent volumes so that theyº
     ºuse the appropriate storage needed.º
   ºdefaultº: use  Azure standard storage to create a managed disk.
              reclaim policy: Azure disk deleted@pod termination.
   ºmanaged-premiumº: use Azure premium storage to create managed disk.
              reclaim policy: Azure disk deleted@pod termination.
    new ones can be created using 'kubectl'.

kind: storageclass
apiversion: storage.k8s.io/v1
metadata:
  name: managed-premium-retain
provisioner: kubernetes.io/azure-disk
reclaimpolicy: retain    ← delete*|retain: what to do
                           with underlying storage
parameters:                (a.disk) when the pod is deleted
  storageaccounttype: premium_lrs
  kind: managed

apiversion: v1                       kind:ºpodº
kind:ºpersistentvolumeclaimº ←·····┐ apiversion: v1
metadata:                          · metadata:
  name: Azure-managed-disk         ·   name: nginx
spec:                              · spec:
  accessmodes:                     ·   containers:
  - readwriteonce                  ·     - name: myfrontend
  storageclassname: managed-premium·       image: nginx
  resources:                       ·       volumemounts:
    requests:                      ·       - mountpath: "/mnt/Azure"
      storage: 5gi                 ·         name: volume
                                   ·   volumes:
                                   ·     - name: volume
                                   ·       persistentvolumeclaim:
                                   └·····    claimname: Azure-managed-disk

  - manually scale pods⅋nodes
    - you define the replica or node count, and k8s schedules
      creation/draining of pods
  - horizontal pod autoscaler (ºhpaº)
    - hpa monitors demand and automatically scale the number of replicas.
      by querying the ºmetrics APIº (k8s 1.8+) each 30secs
    - when using the hpa in a deployment, min⅋max number of replicas are set.
      optionally metric-to-monitor is also defined (cpu usage,...).
    - to minimize race events (new metrics arriving before a previous
      creation/draining has taken place) a cooldown/delay can be set.
     (how long hpa will wait after first event  before another one is triggered)
      default: 3 min to scale up, 5min to scale down
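The scale decision hpa makes each period boils down to a one-line formula (this is the rule documented for upstream k8s; the numbers below are illustrative):

```python
import math

def hpa_desired_replicas(current, current_metric, target_metric,
                         min_replicas, max_replicas):
    """The documented HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured min/max replica bounds."""
    desired = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 3 replicas averaging 90% CPU against a 60% target → scale up
print(hpa_desired_replicas(3, 90, 60, min_replicas=2, max_replicas=10))  # → 5
```

The cooldown/delay mentioned above simply suppresses re-applying this formula for a while after a scaling event.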
  -ºcluster autoscalerº
    - it adjusts number ofºnodesº (vs pods) based on the requested compute resources
      in the ºnode poolº. it checks API server every 10 secs.
    - typically used alongsideºhpaº.
  -ºAzure container instance (aci) integration with aksº

  - burst to Azure container instances
    - rapidly scale aks cluster.
    - secured extension to aks cluster. virtual kubelet component is
      installed in aks cluster that presents aci as a virtual
      kubernetes node.

Bºdeveloper best practicesº
  - define pod resource requests and limits on all pods.
    (note: deployments will otherwise be rejected if the cluster uses resource quotas)
    primary way to manage the compute resources. it provides good hints to
    the k8s scheduler

    kind: pod
    apiversion: v1
    metadata:
      name: mypod
    spec:
      containers:
      - name: mypod
        image: nginx:1.15.5
        resources:
          requests:         ←         amount of cpu/memory needed  by pod.
            cpu: 100m
            memory: 128mi
          limits:           ← maximum amount of cpu/memory allowed to pod.
            cpu: 250m
            memory: 256mi

  - develop and debug applications against an aks cluster using dev spaces.
    - this ensures that RBAC, network or storage needs are ok before deployment
    - with Azure dev spaces, you develop, debug, and test applications directly
      against an aks cluster.
    - visual studio code extension is installed for dev spaces that gives an
      option to run and debug the application in an aks cluster:

  - visual studio code extension provides intellisense for k8s resources,
    helm charts and templates. you can also browse, deploy, and edit
    kubernetes resources from within vs code. the extension also provides an
    intellisense check for resource requests or limits being set in the pod

  - regularly check for application issues with 'kube-advisor' tool
    to detect issues in your cluster.
    - kube-advisor scans a cluster and reports on the issues that it finds
deploy cluster(cli)
(related application code, dockerfile, and k8s manifest available at:

 $ az group create     \         ← create a resource group
   --name myakscluster \
   --location eastus

 $ az aks create  \               ← create aks cluster
   --resource-group myakscluster \
   --name myakscluster \
   --node-count 1 \
   --enable-addons monitoring \   ← available in a.portal
   --generate-ssh-keys              after a few minutes:
  (wait a few minutes to complete)  portal → resource group
                                     → cluster → monitoring → insights(preview)
                                       → choose "+add filter" → select namespace as property
                                         → choose "all but kube-system"
                                           → choose "view the containers"

 $ az aks install-cli             ← install kubectl locally

 $ az aks get-credentials \       ← download credentials and
   --resource-group myakscluster \   configure kubectl to use them.
   --name myakscluster

 $ kubectl get nodes              ← verify setup
 (output will be similar to)
 → name                          status    roles     age       version
 → k8s-myakscluster-36346190-0   ready     agent     2m        v1.7.7

ºrun the applicationº

$ kubectl apply -f Azure-vote.yaml   ←──┐
→ deployment "Azure-vote-back" created  │
→ service "Azure-vote-back" created     │
→ deployment "Azure-vote-front" created │
→ service "Azure-vote-front" created    │
$ cat Azure-vote.yaml ←─────────────────┘
  apiversion: apps/v1
  kind:ºdeploymentº
  metadata:
    name: Azure-vote-back
 ºspec:º
    replicas: 1
    selector:
      matchlabels:
        app: Azure-vote-back
   ºtemplate:º
      metadata:
        labels:
          app: Azure-vote-back
     ºspec:º
        containers:
        - name: Azure-vote-back
         ºimage:ºredis
          resources:
            requests:
              cpu: 100m
              memory: 128mi
            limits:
              cpu: 250m
              memory: 256mi
          ports:
          - containerport: 6379
            name: redis
  --- #separator
  apiversion: v1
  kind:ºserviceº
  metadata:
    name: Azure-vote-back
  spec:
    ports:
    - port: 6379
    selector:
      app: Azure-vote-back
  --- #separator
  apiversion: apps/v1
  kind:ºdeploymentº
  metadata:
    name: Azure-vote-front
 ºspec:º
    replicas: 1
    selector:
      matchlabels:
        app: Azure-vote-front
   ºtemplate:º
      metadata:
        labels:
          app: Azure-vote-front
     ºspec:º
        containers:
        - name: Azure-vote-front
         ºimage:ºmicrosoft/Azure-vote-front:v1
          resources:
            requests:
              cpu: 100m
              memory: 128mi
            limits:
              cpu: 250m
              memory: 256mi
          ports:
          - containerport: 80
          env:
          - name: redis
            value: "Azure-vote-back"
  --- #separator
  apiversion: v1
  kind:ºserviceº
  metadata:
    name: Azure-vote-front
 ºspec:º
    type:ºloadbalancerº
    ports:
    - port: 80
    selector:
      app: Azure-vote-front

Bºtest the applicationº

  $ kubectl get service Azure-vote-frontº--watchº ← monitor deployment progress
  → name               type           cluster-ip   external-ip   port(s)        age
  → Azure-vote-front   loadbalancer   80:30572/tcp 2m
  →                                                ^
  →                                                initially "pending"

Bºdelete clusterº
  $ az aks delete \                    ← once finished delete the cluster
    --resource-group myresourcegroup \
    --name myakscluster --no-wait

  note: Azure active directory service principal used by the aks cluster is not removed.
publish image
BºAzure container registry overviewº
  - managed docker registry service ºbased on OSS docker registry 2.0º.

  - Azure container registry build (acr build):
    - used to build container images in Azure.
      on-demand or fully automated builds from source code commit

    registry 1←→n repositories 1 ←→ n images
  - registries are available in three skus:
    - basic   : cost-optimized for learning purposes
    - standard: increased storage limits and image throughput.  (production)
    - premium : higher limits in storage (high-volume)/concurrent ops,
                geo-replication (single registry across multiple regions)

    - webhook integration is supported
    - registry authentication with AD, and delete functionality.
    - a fully qualified registry name has the form {registry-name}.azurecr.io

    - access control is done through AD service principal or provided
      admin account.
      $ docker login  ← standard command to auth with a registry.

  -ºrepositoryº: a registry contains one or more repositories, which store
    groups of container images. a.cont.registry supports multilevel repo
    namespaces allowing to group collections of images related to a specific app,
    or a collection of apps to specific development or operational teams.
    ex: (root image)           ← corporate-wide image
        (dept/build image)     ← .NET apps build-image/s
                                 across 'warranty' department
        (dept/app/web image)   ← web image, grouped
                                 in the customer submissions app,
                                 owned by 'warranty' department

  -ºimageº: read-only snapshot of a docker-compatible container.

  ºAzure container registry tasksº
  - suite of features within Azure container registry that provides streamlined
    and efficient docker container image builds in Azure.
    - automate container OS and framework patching pipeline,
      building images automatically when your team commits code
      to source control.
    - ºmulti-step tasksº (preview feature) provides step-based task
      definition and execution for building, testing, and patching container images
      - task steps define individual container image build and push
      - they can also define the execution of one or more containers, with
        each step using the container as its execution environment.
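A multi-step task file (commonly named `acb.yaml`) might be sketched as below; the schema shown is the preview-era one and the image name is illustrative, so treat this as an approximation rather than a definitive reference:

```yaml
version: v1.0-preview-1
steps:
  # build the image from the Dockerfile in the build context
  - build: -t {{.Run.Registry}}/hello-world:{{.Run.ID}} .
  # run the freshly built image as a test step
  - cmd: "{{.Run.Registry}}/hello-world:{{.Run.ID}}"
  # push it to the registry on success
  - push:
    - "{{.Run.Registry}}/hello-world:{{.Run.ID}}"
```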

Bºdeploy image with cliº
  $ az group create \
    --name myresourcegroup \
    --location eastus

  $ az acr create \               ← create acr instance
    --resource-group myresourcegroup \
    --name mycontainerregistry007 \
    --sku basic                   ← sku required (basic|standard|premium)

  $ az acr login --name $acrname  ← log in to acr
                                    needed before push/pull

  $ query="[].{acrloginserver:loginserver}"
  $ az acr list \                      ← obtain full login server name
    --resource-group myresourcegroup \   of the acr instance.
    --query "${query}" \
    --output table

  $ docker tag \                      ← tag image
  microsoft/aci-helloworld \        ← local existing image
  $acrloginserver/aci-helloworld:v1 ← new tag. fully qualified name of
                                      acr login server ($acrloginserver)

  $ docker push \                     ← push image to acr instance.
    $acrloginserver/aci-helloworld:v1

  $ az acr repository list \          ← check 1: image is uploaded
    --name $acrname --output table
  → result
  → ----------------
  → ...
  → aci-helloworld

  $ az acr repository show-tags \     ← check 2: show tag
    --name $acrname \
    --repository aci-helloworld \
    --output table
  → result
  → --------
  → ...
  → v1

BºDEPLOY IMAGE TO ACIº (A.Container instance: "Docker?")
  $ az acr update \     ← enable the admin user on your registry
    --name $acrname \       ↑
    --admin-enabled true  ☞ in production scenarios
                            you should use a service
                            principal for container
                            registry access

  $ az acr credential show \    ← retrieve password
    --name $acrname \           ← once admin is enabled username == "registry name"
    --query "passwords[0].value"

  $ az container create \       ← deploy container image
    --resource-group myresourcegroup \
    --name acr-quickstart \
    --image ${acrloginserver}/aci-helloworld:v1 \
    --cpu 1 --memory 1 \        ← 1 cpu core, 1 gb of memory
    --registry-username $acrname \
    --registry-password $acrpassword \
    --dns-name-label aci-demo --ports 80

  (an initial feedback from a.resource manager is provided
   with container details).

  $ az container show \         ← repeat to "monitor" the
    --resource-group myresourcegroup \   container's status
    --name acr-quickstart \
    --query instanceview.state

  $ az container show \                ← retrieve container's fqdn
    --resource-group myresourcegroup \
    --name acr-quickstart \
    --query ipaddress.fqdn

  $ az container logs \                ← show container logs
    --resource-group myresourcegroup \
    --name acr-quickstart
Hybrid Cloud
@[">Microsoft Hybrid Cloud for Enterprise Architects]
App Development: App Service Platform
Web apps
- (app service) ºweb appsº is the recommended service for webapp development.
  - for microservice architectures, (App Service)ºservice fabricº could be better.
  - for full control, A.VMs could be better.

-Bºapp service (webapps, service fabric, ...)plansº
  - An app runs in an app service plan.
  - a newºapp service planºin a region automatically creates new
    associated compute resources, which all apps in the plan share.
    app service plan defines:
    - region (west us, east us, etc.)
    - number of VM-instances ← ºeach app will run on all the VMsº
                               ºmultiple app deployment slots also run on those VMsº
                               ºenabled logs-diagnosis/backups/webjobs also share the VMsº
    - size of VM instances (small, medium, large)
    - pricing : Determine price and available features.
               -ºshared compute:ºfree⅋shared base tiers, runs an app in
                               Rºmulti-app/multi-customer shared VMsº
                 -  cpu quotas for each app is allocated.
                 -Rºit doesn't scale outº
                 (target: development/testing/...)
               -ºdedicated compute:ºbasic/standard/premium/premiumv2
                 -ºapps in same app-service planºshare dedicated AzureºVMsº
                 -Bºthe higher the tier, the more VM instances to scale-outº
                 - dedicated VMs on dedicated virt.nets for apps
                 - use-case: app is resource-intensive, independent scale-out
                   from other apps in plan.
                 - the app needs resource in a different geographical region.
                 -Bºmaximum scale-out capabilitiesº
               -ºconsumption:ºonly available to function apps, that
                             Bºscale dynamically on demandº
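The tiers above map to `--sku` codes when creating a plan with the CLI (F1, D1, B1, S1, P1V2 are real SKU codes accepted by `az appservice plan create`). A local sketch pairing them up; it only prints the mapping and makes no Azure call:

```shell
# Map of `az appservice plan create --sku` codes to the tiers above.
# This loop only prints the mapping; no Azure call is made.
for sku in F1 D1 B1 S1 P1V2; do
  case "$sku" in
    F1)   tier="shared compute: free"   ;;
    D1)   tier="shared compute: shared" ;;
    B1)   tier="dedicated: basic"       ;;
    S1)   tier="dedicated: standard"    ;;
    P1V2) tier="dedicated: premiumv2"   ;;
  esac
  echo "$sku -> $tier"
done
```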

create new
ºBASHº
$ gitrepo=.../Azure-samples/php-docs-hello-world
$ webappname=mywebapp$RANDOM      ← bash's builtin is uppercase $RANDOM
$ location=westeurope

$ az group create \               ← create resource group
    --location $location \
    --name group01

$ az appservice plan create \     ← create app service plan
    --name $webappname \
    --resource-group group01 \
    --sku free                    ← use free tier

$ az webapp create \              ← create web app
    --name $webappname \
    --resource-group group01 \
    --plan $webappname

$ az webapp deployment \          ← deploy code from git repo
    source config \
    --name $webappname \
    --resource-group group01 \
    --repo-url $gitrepo \         ← git source
    --branch master \
    --manual-integration

$ az group delete \               ← clean up after finishing.
    --name group01

ºPOWERSHELLº
$gitrepo=".../Azure-samples/app-service-web-dotnet-get-started.git"
$webappname="mywebapp$(get-random)"
$location="west europe"

new-Azurermresourcegroup `        ← create resource group
    -name group01 `
    -location $location

new-Azurermappserviceplan `       ← create app service plan
    -name $webappname `
    -location $location `
    -resourcegroupname group01 `
    -tier free                    ← use free tier

new-Azurermwebapp `               ← create web app
    -name $webappname `
    -location $location `
    -appserviceplan $webappname `
    -resourcegroupname group01

$propertiesobject = @{            ← deploy code from git repo
    repourl = "$gitrepo";
    branch = "master";
    ismanualintegration = "true";
}
set-Azurermresource `
    -propertyobject $propertiesobject `
    -resourcegroupname group01 `
    -resourcetype microsoft.web/sites/sourcecontrols `
    -resourcename $webappname/web `
    -apiversion 2015-08-01 -force

remove-Azurermresourcegroup `     ← clean up after finishing.
    -name group01 -force

- Finally test the deployed app with a browser:
  http://$webappname.Azurewebsites.NET
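One pitfall in the bash column above: bash variable names are case-sensitive, so the builtin that yields a random number is `$RANDOM` (uppercase); lowercase `$random` expands to an empty string. A runnable sketch of the unique-name trick:

```shell
# Unique-name trick used above; $RANDOM (uppercase) is the bash builtin
# returning 0..32767 -- lowercase $random would expand to nothing.
webappname="mywebapp$RANDOM"
echo "$webappname"
```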
Runtime patching
- OS patching includes:
  - physical servers.
  - guest VMs running App service.
- both OS and runtimes are automatically updated aligned
  with theºmonthly patch tuesday scheduleº.
- new stable versions of supported language runtimes
  (major, minor, or patch) are periodically added to app
  service instances.
  - some updatesºoverwriteºthe existing installation; apps
    automatically run on the updated runtime at restart.
    (.NET, php, java SDK, tomcat/jetty)
  - some others do a ºside-by-sideºinstallation.
    Devs/DevOps must manually migrate the app
    to the new runtime version. (node.JS, python)

    if an app-service setting was used to configure the runtime,
    change it manually like:
  $º$ common="--resource-group $groupname --name $appname"       º
  $º$ az webapp config set --net-framework-version v4.7 $common  º
  $º$ az webapp config set --php-version            7.0 $common  º
  $º$ az webapp config set --python-version         3.4 $common  º
  $º$ az webapp config set --java-version 1.8           $common  º
  $º$ az webapp config set --java-container tomcat      $common  º
  $º                       --java-container-version 9.0          º
  $º$ az webapp config appsettings set $common \                 º
  $º  --settings website_node_default_version=8.9.3              º

ºquery OS/Runtime update status for instancesº
  go to kudu console →
  windows version     https://${appname}.scm.Azurewebsites.NET/env.cshtml
                      (under system info)
  .NET version        https://${appname}.scm.Azurewebsites.NET/debugconsole
                      $ powershell -command \
                      "gci 'registry::hkey_local_machine\software\microsoft\net framework setup\ndp\cdf'"
  .NET core version   https://${appname}.scm.Azurewebsites.NET/debugconsole
                      $ dotnet --version
  php version         https://${appname}.scm.Azurewebsites.NET/debugconsole,
                      $ php --version
  default node.JS ver cloud shell →
                      $ az webapp config appsettings list \
                        --resource-group ${groupname} \
                        --name ${appname} \
                        --query "[?name=='website_node_default_version']"
  python version      https://${appname}.scm.Azurewebsites.NET/debugconsole
                      $ python --version

    note: access to the registry location
    hkey_local_machine\software\microsoft\windows\currentversion\
      component based servicing\packages,
    where information on "kb" patches is stored, is locked.

in/out IPs
----------
APP-SERVICE ENVIRONMENT:
  isolated    : static, dedicated in/out-bound IPs provisioned.
  non-isolated: non-app-service-environment apps share the network
                with other apps. in/out-bound ip addresses can be
                different and can even change in certain situations.

-ºinboundºIP address may change when:
  - app deleted/recreated in a different resource group.
  - last app deleted/recreated in a resource group+region.
  - existing ssl binding deleted (ex: certificate renewal)
  TIP:
  Q: How to obtain a static inbound IP?
  A: configure an ip-based ssl binding (self-signed is OK)
     with aBºcertificate bound to an IP addressº, forcing a
     static IP in App service provisioning.

-ºoutbound ips rangeº:
  - each app has a set number of -unknown beforehand- outbound
    ip addresses at any given time. To show the full set:
    $ az webapp show \
      --resource-group $group_name \
      --name $app_name \
      --queryº"outboundipaddresses"º \  ← or "possibleoutboundipaddresses"
      --output tsv                        to show all possible ips
                                          regardless of pricing tier
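With `--output tsv`, the `outboundipaddresses` property comes back as a single comma-separated string. A local sketch splitting it into one IP per line (the sample value is made up for illustration; no Azure call is made):

```shell
# "outboundipaddresses" is returned as one comma-separated string;
# the sample value below is made up for illustration.
ips=","
echo "$ips" | tr ',' '\n'
```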
Hybrid connections
REF: relay-hybrid-connections-protocol
- hybrid Con. provides accessºfrom your app to an application tcp:port endpointº:

                          ┌ service ──┐
  ┌web app ┐              │ bus relay │             ┌─────────┐
  │        │              │  --\ /--  │             │ remote  │
  │        │←·connect.·→  │  ---x---  │←···connect···→ service│
  │        │              │  --/ \--  │             │- HCM    │
  └────────┘              └───────────┘             └──^──────┘
  app serv. host                                       │
  creates a tls 1.2 tunnel           H)ybrid C)onnection M)anager
  from/to the remote tcp:port        installed in the remote
  to/from the remote srv             (on-premise) service
A.traffic manager
ºtraffic-manager.profile:º ←ºtraffic-managerºkeeps track of the status of
  1+ app-service endpoints   app service apps (running, stopped, or deleted)
  ^                          to route client requests.
  │                          (status regularly communicated to profile)
  - apps in standard|premium mode
  - only 1 app-service endpoint allowed per region per profile.
    (remaining apps become unavailable for selection in that profile).

routing methods:
-ºpriority   :º- use primary app for traffic,
                - provide backups in case of primary failure.
-ºweighted   :º- distribute (evenly|weighted) traffic across
                  a set of apps.
-ºperformance:º- for apps in different geographic locations,
                  use the "closest" one (lowest net latency).
-ºgeographic :º- direct users to apps based on the geographic
                  location their DNS query originates from.

step 1) create profile in Azure traffic-manager
step 2) add (app service|...) endpoints

☞To keep in mind:
- failover and round-robin functionality only within the same region.
  (for premium mode is multi-region?)
- Within a region, (App-service) app deployments using an app service
  can combine with any other service endpoints in hybrid scenarios.
- (App service) endpoints set in a traffic-manager-profile are visible
  (not editable) in app-configure page → profile → domain-names
- after adding an app to a profile, the site URL on the dashboard of
  the app's portal page displays the custom domain URL of the app if
  set up; otherwise, the traffic-manager.profile URL is shown
  (ex: contoso.trafficmanager.NET).
- both the app direct-domain-name and the traffic-manager URL are
  visible on the app's configure page under the domain names section.
- DNS map must also be configured to point to the traffic-manager URL.
  (custom domain names continue to work)
Local cache
 - web-role view of content.
 - write-but-discard Oºcacheº of storage content
 - created asynchronously on-site startup.
 - when Oºcacheº is ready, site is switched to
   run against it.

 - benefits:
   - immune to storage-latencies.
   - immune to storage (un|)planned downtimes.
   - fewer app restarts due to storage share changes.

- d:\home  → local VM Oºcacheº,      ←···· limits:
             one-time copy of:             - def 300 MB
             d:\home\site\                 - max   2 GB

  d:\local → temporary VM-specific storage.

┌→ d:\home\logfiles  local VM app log files
├→ d:\home\data      local VM app data.
│  ^^^^^^^^^^^^^^^^
│  - copied in Bºbest-effort to shared content storeº periodically.
│    Rº(sudden crash can lose some data)º
│  - up to a one-minute delay in the streamed logs.
│  - app restart needed to update Oºcache-contentº if Bºshared storageº
│    is changed somewhere else.
└→ shared content store layout changes:
   - logfiles renamed → logfiles+"uid"+time_stamp.
   - data     renamed → data    +"uid"+time_stamp.
                                ("uid"+time_stamp match the VM where the app is running.)
App Service Environment (ASE)
- Overview:
  - ASE: App Service Environment
  - provides aBºfully isolated/dedicated environment at high scaleº.
  - windows web apps
    linux   web apps  (preview)
    mobile      apps
    docker containers (preview)
  - ASEs can be created in multiple regions.
    (ideal for scaling stateless apps)
  - secure connections over VPNs to on-premises can be setup.
  RºWARN: own pricing tierº
      flat monthly rate for an ASE that pays for infrastructure
      (it doesn't change with the size of the ASE)
    + cost per App-service-plan vcpu.
      (all apps hosted are in isolated-pricing SKU).

  ºASES v2º:
  - host your apps in a subnet of your own network,
    providing a private, dedicated deployment of app service.

- subscription 1 ←→ 1 ASE ←→ 0...100 App-service-plan instances:
                     from 100 instances in   1 app service plan
                     to     1 instance  in 100 app service plans (single-instance)


-BºASE = front-ends         +  workersº  ←······· no management needed
        ^^^^^^^^^^             ^^^^^^^
   http/https termination      host customer apps in sizes:
   and load balancing          - 1 vcpu/ 3.5 GB RAM
   automatically added as      - 2 vcpu/ 7.0 GB RAM
   app service plans           - 4 vcpu/14.0 GB RAM
   scale out.
WebJobs
- Overview:
  - webjob = "background task"
  - program or script Bºin the same context as a web/API/mobile app.º
    .cmd/bat/exe (using windows cmd)
    .ps1 (using powershell)    .py  (using python)
    .sh  (using bash)          .JS  (using node.JS)
    .php (using php)           .jar (using java)

 - ☞ alternative: Azure functions. ("Evolution" of webjobs)

 -ºwebjob typesº
        CONTINUOUS                                TRIGGERED
│starts immediately when the webjob is  │ starts only when triggered  │
│created. to keep the job from ending,  │ manually or on a schedule.  │
│the program or script typically does   │                             │
│its work inside an endless loop. if the│                             │
│job does end, you can restart it.      │                             │
│runs on all instances that the web app │ runs on a single instance   │
│runs on. you can optionally restrict   │ that Azure selects for      │
│the webjob to a single instance.       │  load balancing.            │
│supports remote debugging.             │ doesn't support remote      │
│                                       │ debugging.                  │
  enable "always on" to avoid timeouts (not available in free tiers)

☞ note:
 webjobs canºtime out after 20 minutes of inactivity.º
 only requests toºgit deploymentºor to the web app's pages in
 theºportal reset the timerº.
 requests to the actual site do not reset the timer.
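A webjob is just a script or executable packaged in the uploaded .zip (the supported file types are listed in the overview above; `run.sh` is a conventional entry-point name, assumed here). A minimal continuous-style sketch; a real continuous job loops forever, but the loop is bounded here so the example terminates:

```shell
#!/usr/bin/env bash
# run.sh -- sketch of a continuous-webjob body. A real continuous job
# does its work inside an endless loop; bounded here so the sketch ends.
for i in 1 2 3; do
  echo "background work, iteration $i"
  sleep 0   # a real job would pause between iterations
done
```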

ºcreating a continuous/manual webjobº
Azure portal
 → go to app service page → existing app service (web/API/mobile app)
  → select "webjobs"
    → select "add",
      Fill in requested settings and confirm:

      │name        │myjob01        │
      │file upload │ ← .zip file with executable/script plus supporting files
      │type        │continuous     ← continuous │ triggered
      │scale       │multi instance ← set multi(==all) or single instance.
      │(continuous │               │ only single in free/share tier
      │ type only) │               │
      │triggers    │manual         │
      │(triggered  │               │
      │ type only) │               │
       new webjob will appear on the webjobs page.

LogicApps vs WebJobs @[]
Deploy|WAR to App Serv.
Mobile Backend Support
backend howto
Backend will provide support for:
 - A.AD, OAuth 2.0 social providers, custom AA providers (with SDK)
 - Connection to ERPs.
 - offline-ready Apps with periodic Data Sync
 - Push notifications,
 - ...
 ºSTEP 1)ºcreate new Azure mobile app backend
  → portal → "create a resource" → search for "mobile apps" → "mobile apps quickstart"
    → Click on "create" and fill requested data:
      - unique app name: to beºpart of the domain nameºfor new app service
      - resource group : existing|new
      → press "create" and wait a few minutes for the service
        to be deployed
        → watch notifications (bell) icon for status updates.

 ºSTEP 2.1)ºconfigure the server project and connect to a database
→ portal → A.Services → "app services" → select a mobile apps back end → "quickstart"
  → select client platform (ios, android, xamarin, cordova, windows (c#)).
    → if database connection is not configured, create new one like:
      a. create a new SQL database and server.
      b. wait until the data connection is successfully created.
      → under 2. "create a table API", select node.JS for backend language.
        → accept the acknowledgment
          → select "create todoitem table"

 ºSTEP 2.2)ºdownload and run the client project once the backend is configured:
 - opt1: modify existing app to connect to Azure.
 - opt2: create a new client app. ex:
   - go back to "quick start" blade for your mobile app backend.
     → click create a new app → download uwp app template project,
       already customized to connect with mobile app backend.
     → (optional) add uwp app project to the same solution as the
       server project.
       this makes it easier to debug and test both the app and
       the backend in the same visual studio solution.
       (visual studio 2015+ required)
       → visual studio: press "f5" to deploy and run the app.

 ºSTEP 3)º deploy and run the app

push notifications
- Overview:
  - Service delivered through non-standardºplatform notification systemsº(BºPNSº).
    offeringºbarebone message push functionalitiesº to devices:
    - apple push notification service(apns)
    - firebase cloud messaging        (fcm)
    - windows notification service    (wns)

ºsequence diagram (summary):º
                                      ┌───┐4)store  ºpre-setup)º
┌──────┐ ────────────────────→ ┌────────┐ │  handle  1 client_app → Bºpnsº: request unique-push-handle
│mobile│  3)store pns handle   │app     │←┘                                        (and temporary)
│app   │                       │back─end│            2 client_app ← Bºpnsº: unique-push-handle
└──────┘ ←─────┐               └────────┘                                   ^^^^^^^^^^^^^^^^^^
  │ ^          │ 6)send to          │                                       uris   in wns
  │ │2)handle  │ device             │                                       tokens in apns
  │ │          │                    │ 5) message                            ...
  │ │          v                    │    handle      3 client_app → backend: unique-push-handle
  │ └─────── ┌──────────────┐       │                4 backend    → backend: store handle (in DDBB,provider,...)
  └────────→ │(p)lataform   │ ←─────┘
  1)request  │(n)otification│                       ºsending push messages)º
  pns handle │(s)ervice     │                        5 backend → Oºpnsº    :  (message, unique-push-handle)
             └──────────────┘                        6 Oºpnsº  → client_app: message (using ip net or
                                                                             phone network)
ºhow-to summaryº
   -ºpush notification extension packageºrequired
     (included in "quick-start server project" template)
     more info at:
     "work with the .NET backend server SDK for a.mob.apps"
    --- common steps -----------------------------------
    developer →      Azure: config. notification hub
                            Azure portal → "app services" → select existing app back end.
                              → under settings, select "push"
                                → select "connect" to add a ºnotification hub resourceº to the app
                                  (add new or reuse existing hub)
                                  later this notification hub is used to connect to a
                                  Oºpnsº to push to devices.

    developer → m.appstore: register app for push notifications
                            visual studio (2015+) → solution explorer
                            → right-click on the uwp app project
                             → click store
                              →ºassociate app with the store...º
                                in the wizard → click next → sign in (with  microsoft account)
                                reserve a new app name : "enter app name"
                                → click "reserve"
                                app registration will follow. after that
                                → select the new app name → click "next" → click "associate".
                               ºthis adds the required microsoft storeº
                               ºregistration information to the application manifest.º
                               navigate and sign in toºwindows dev centerº
                               → go to "my apps" → click "new app registration"
                                 → expand "services ˃ push notifications":
                                   click "live services" site under
                                   microsoft Azure mobile services@"push notifications" page,
                                 → write down the values under application secrets and the package sid ✍
                                   in the registration page, (used next to configure mobile app backend).─┐
                                 ⏿ (keep them safe)                                                       │
                                   application id + secret is used to configure ms account auth           │
    developer →  a.portal: configure the back end to send push notifications                              │
                           Azure portal → browse all → app services.                                      │
                            → select a mobile apps back end                                               │
                             → under settings, select "app service push"                                  │
                              → select your notification hub name.                                        │
                               → go to windows (wns): enter the                                      ←────┘
                                 security key (client secret) and package sid
                                 obtained from the live services site.
                                → click "save"
                                back end is now configured to use wns to send push notifications.

    developer → server app:  update the server to send push notifications
                          ºalt 1) .NET backend projectº
                          visual studio → right-click the server project
                           → click "manage nuget packages"
                            → search for "microsoft.Azure.notificationhubs" (client lib)
                             → click "install"
                              → expand controllers → open todoitemcontroller.cs:
                               → add the following using statements:
                                 | using system.collections.generic;
                                 | using microsoft.Azure.notificationhubs;
                                 | using;
                               → in the posttodoitem method, add the following code after
                                 the call to insertasync:
                                 | // get the settings for the server project.
                                 | httpconfiguration config = this.configuration;
                                 | mobileappsettingsdictionary settings =
                                 |    this.configuration.
                                 |      getmobileappsettingsprovider().
                                 |        getmobileappsettings();
                                 | // get the notification hubs credentials for the mobile app.
                                 | string notificationhubname = settings.notificationhubname;
                                  | string notificationhubconnection = settings.
                                  |    connections[mobileappsettingskeys.
                                  |      notificationhubconnectionstring].
                                  |        connectionstring;
                                 | // create the notification hub client.
                                 | notificationhubclient hub = notificationhubclient
                                 |   .createclientfromconnectionstring(
                                 |      notificationhubconnection,
                                 |      notificationhubname);
                                 | var windowstoastpayload = // define a wns payload
                                 |   @"˂toast˃˂visual˃˂binding template=""toasttext01""˃˂text id=""1""˃"
                                 |   + item.text + @"˂/text˃˂/binding˃˂/visual˃˂/toast˃";
                                 | try {
                                 |   // send the push notification.
                                 |   var result = await
                                 |      hub.sendwindowsnativenotificationasync(windowstoastpayload);
                                 |   // write the success result to the logs.
                                 | } catch (system.exception ex) {
                                 |   // write the failure result to the logs.
                                  |   config.services.gettracewriter()
                                  |     .error(ex.message, null, "push.sendasync error");
                                 | }

                                 this code tells the notification hub to send a push
                                 notification after a new item is inserted.

                               → republish the server project.

                          ºalt 2. node.JS backend projectº
                           ( based on the quickstart project )
                           → replace the existing code in the todoitem.JS file with:
                             (when editing the file on your local computer (vs online editor), republish the server project)

                             | var Azuremobileapps = require('Azure-mobile-apps'),
                             | promises = require('Azure-mobile-apps/src/utilities/promises'),
                             | logger = require('Azure-mobile-apps/src/logger');

                             | var table = Azuremobileapps.table();

                             | table.insert(function (context) {
                             | // for more information about the notification hubs javascript SDK,
                             | // see
                             |'running todoitem.insert');

                             | // define the wns payload that contains the new item text.
                             | var payload = "˂toast˃˂visual˃˂binding template=\"toasttext01\"˃˂text id=\"1\"˃"
                             |               + context.item.text + "˂/text˃˂/binding˃˂/visual˃˂/toast˃";
                             | // execute the insert.  the insert returns the results as a promise,
                             | // do the push as a post-execute action within the promise flow.
                             | return context.execute()
                             |   .then(function (results) {
                             |     // only do the push if configured
                             |     if (context.push) {
                             |       // send a wns native toast notification.
                             |       context.push.wns.sendtoast(null, payload, function (error) {
                             |         if (error) {
                             |           logger.error('error while sending push notification: ', error);
                             |         } else {
                             | 'push notification sent successfully!');
                             |         }
                             |       });
                             |     }
                             |     // don't forget to return the results from the context.execute()
                             |     return results;
                             |   })
                             |   .catch(function (error) {
                             |       logger.error('error while running context.execute: ', error);
                             |   });
                             | });
                             | module.exports = table;

                             this sends a wns toast notification that contains the item.text when a new
                             todo item is inserted.

    developer → client app: add push notifications to your app
offline apps
- Overview:
  - push: send all tables, avoiding out-of-order execution
  - pull: performed on per-table customizable queries
          a pull against a locally modified table triggers
          a push first, to minimize conflicts
┌───────┐                          ┌────┐   ┌───────┐
│mobile │    →ºpushºCUD changes   →│REST│ → │remote │
│local  │      since last push     │API │   │backend│
│DDBB   │                          │    │   │DDBB   │
│storage│    ← ºpullº(query_name,  └────┘ ← └───────┘
└───────┘             query)
  ^       │                       │  non-null query name will force
  |       └───────────────────────┘  incremental sync. *1
  |         Oºsync contextº          'updatedate' from latest pull
  |            operation queue       is stored in the SDK local
  |            ordered cud list      system tables. further pulls
  |                                  retrieve from 'updatedate'
  |                                  client SDK uses sort itself
  |                                  ignoring 'orderby' from server
  |                                  query_name must be unique per app.
  |         - associated with aºmobile client objectº
clear stale data:               ├──────────────────┘
IMobileServiceSyncTable.        ↑
 ºpurgeAsyncº       ┌───────────┘
exception thrown if ├ imobileserviceclient.  (.NET client SDK)
ops awaiting sync   │ // init sync. context:
                    │ imobileservicessynccontext.
                    │   initializeasync(localstore)
                    ├ msclient
                    ├ ...
            - changes made withºsync tablesº
              tracked in sync context:
              - client controls when to sync
                (call to push local cud)

*1 if query has 1 parameter, one way to create a
   unique query name is to incorporate the parameter value.
   ex: filtering on userid:
   | await todotable.pullasync(
   |     "todoitems" + ºuseridº,
   |     synctable.where(u =˃ u.userid ==ºuseridº));

-ºUpdate client app for offline supportº

- init syncContext to local store.
- reference table/s through the IMobileServiceSyncTable interface.

STEP 1) install sqlite runtime for the universal windows platform:
  → visual studio → nuget package manager for uwp app project
   → (search and) install '' nuget package.
    → in solution explorer, right-click references → add reference... →
      universal windows → extensions:
      → enable 'sqlite for universal windows platform' and
               'visual c++ 2015 runtime for universal windows platform apps'
        → open mainpage.xaml.cs and uncomment line:
          #define offline_sync_enabled
         → press f5 to rebuild-and-run client app.

STEP 2) update the app to disconnect from the backend
  offline mode:
  when adding items in offline mode, exception handler
  serves to handle offline pipeline, adding
  new items added in local store.

  edit app.xaml.cs:
  comment out initialization to add invalid
  mobile app URL (simulate offline):
  |   public static mobileserviceclient
  |      mobileservice = new
  |       mobileserviceclient("https://foo");

  a new item's "save" push fails with "cancelledbynetworkerror" status,
  but the new items exist in the local store.
  if these exceptions are suppressed, the client behaves as if still
  connected to the mobile app backend.

STEP 3) update the app to reconnect to the backend
        at startup, 'onnavigatedto' event handler calls
  'initlocalstoreasync'  that in turn calls
  'syncasync'           to sync local store with backend

  restore correct URL of mobileserviceclient and rerun (f5)

  'updatecheckedtodoitem' calls syncasync to sync each
  completed item with the mobile app backend.
  'syncasync' calls both push and pull.

- when offline, normal CRUD operations work as if connected.
  Following methods are used to synch local store with server:
  - imobileservicessynccontext.ºpushasyncº
  - imobileservicessynctable.ºpullasyncº:
    started from a BºIMobileServiceSyncTableº.
  - pushothertables parameter controls whether other tables in the
    context are pushed in an implicit push.
  - query parameter takes an imobileservicetablequery or odata query
    string to filter the returned data.  the queryid parameter is used to
    define incremental sync.

-ºpurgeasync:ºapp should periodically call it to purge stale data.
  use 'force' param to purge any changes not yet synced.
create A.App-service API apps
ºAPI management overviewº

  once ready product
  is published           created by admins
  ↓                      ↓
  product    ← relN2M →  API  1 ←→ n operations
  ^            --------              ^
  open or      title                 |
  protected    description           |
  |            terms-of-use          |
  |                                  |
  |                                  |
  |                                  |
  developers                         API contains a reference to the
  1) subscribe to product            back-end service that
     (for protected products)        implements the API, and its
  2) call the API operation/s        operations map to the
                                     operations implemented by the
                                     back-end service.

ºAPI gatewayº :                          ºA. portalº (API admins)
 - end-point for API calls                - define or import API schema.
   → route to backend.                    - package APIs into products.
 - verifies API keys/JWT/certs/...        - set up policies like quotas
 - enforces usage quotas and rate limits.   or transformations on the APIs.
 - transforms API on the fly (no code,    - get insights from analytics.
   API or operation level)                - manage users.
 - caches backend responses
 - logs call metadata

ºdeveloper portalº (API consumer developers)
 - read API documentation.
 - try out an API via the interactive console.
 - create an account and subscribe to get API keys.
 - access analytics on their own usage.
 - view/call operations
 - subscribe to products.
 - developer portal URL  indicated in:
   Azure portal → API management service instance → dashboard

product   → grants visibility to ─┐
                                  ├→ groups
developer → belongs to n ─────────┘
  ─ immutable system groups:
    ºadministratorsº: subscription admins.
                      - manage API-management service instances:
                        create APIs/operations/products
    ºdevelopersº    : authenticated developer portal users.
                      - access to developer portal
    ºguestsº        : unauthenticated developer portal users
                      - can be granted certain access (read only)

  ─ custom groups: (or existing AD tenants groups)
    use case example: a custom group for developers affiliated with
    a specific 3rd-party organization, granting access only to a
    product containing the relevant APIs.

- developers are created or invited to join by admins,
  or sign up themselves from the developer portal.
- when developers subscribe to a product they are granted the
 ºprimary and secondary keyºfor the product used to make calls
  into its APIs.
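once subscribed, calls into a product's APIs must present one of those keys. APIM's default header name is Ocp-Apim-Subscription-Key; the host, operation path, and key below are placeholder values. A minimal Python sketch of building such a call:

```python
# Minimal sketch: calling an APIM-fronted API with a subscription key.
# "Ocp-Apim-Subscription-Key" is APIM's default header name; host,
# operation, and key values here are placeholders.

def build_request(host: str, operation: str, key: str) -> dict:
    """Return the URL and headers for an APIM call using the primary key."""
    return {
        "url": f"https://{host}/{operation.lstrip('/')}",
        "headers": {"Ocp-Apim-Subscription-Key": key},
    }

req = build_request("myapim.azure-api.net", "/hbin/status/200", "primary-key-value")
```

the same key can alternatively be passed as the `subscription-key` query parameter.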

ºpoliciesº
- allow the A.portal to change the behavior of the API
  through configuration.
- collection of statements executed sequentially on
  the request or response of an API.
- popular statements include:
  - format conversion (xml to JSON)
  - call rate limiting: cap the number of incoming calls
    from a developer.
  - ...
- policy expressions can be used as attribute values or
  text values in any of the policies.
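the XML policy language itself is out of scope here, but the "collection of statements executed sequentially" idea can be modeled in a few lines. The statements below (a toy rate limit and a header rewrite) are illustrative stand-ins, not real APIM policies:

```python
# Toy model of an APIM policy pipeline (the real thing is XML statements
# such as rate-limit or json-to-xml); statements run sequentially over
# the request, and any of them may short-circuit the call.

def rate_limit(max_calls):
    seen = {"count": 0}
    def statement(request):
        seen["count"] += 1
        if seen["count"] > max_calls:
            raise RuntimeError("429 Too Many Requests")
        return request
    return statement

def set_header(name, value):
    def statement(request):
        request["headers"][name] = value
        return request
    return statement

def apply_policies(request, statements):
    for s in statements:          # executed sequentially, as in APIM
        request = s(request)
    return request

pipeline = [rate_limit(max_calls=2), set_header("x-forwarded-by", "apim")]
out = apply_policies({"headers": {}}, pipeline)
```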

ºAPI management terminologyº
-ºbackend APIº: an http service that implements your API and its
  operations.
-ºfrontend API/APIM APIº:
Bºan APIM API does not host APIs; it creates facades for your APIsº
  so the facade can be customized without touching the back-end API.
-ºAPIM API operationº: each APIM API contains a reference to the
  back-end service implementing the API, and its operations map
  to the operations implemented by the back-end service.

ºcreate new A. APIM service instanceº
  portal → create a resource → integration → API management:
     fill form:
     - name   ← used also as default domain name
     - subscription 	
     - resource group
     - location
     - organization name
     - administrator email
     - pricing tier        (developer tier to evaluate the service)
     → click on  create.
       (wait up to 15 minutes)

ºCreate new APIº
Azure portal → select existing APIM instance
 → select APIs (under API management)
  → select "+ add API" (left menu)
    → select "blank API" from list.
     → enter settings for the API.
       name         value
       display name "blank API"

       web service  "" ← leave empty for mock up
       URL scheme   "https"

       URL suffix   "hbin"         it identifies this specific
                                   API in this apim instance.

       products     "unlimited"    to publish the API and make it
                                   available to developers, add it
                                   to a product. you can do this
                                   during API creation or set it
                                   later.
       note: by default, each API management instance comes with
       two sample products: starter and unlimited.

      → select "create"
        at this point,Rºyou have no operations in apim that map toº
      Rºthe operations in your back-end API.ºif you call an
        operation that is exposed through the back end but not
        through the apim, you get a 404.

  ☞ note: by default, when you add an API, even if it is connected to
    some back-end service, apim will not expose any operations
    ºuntil you whitelist them.º
     whitelist an operation of the back-end service by creating an
     apim operation that maps to the back-end operation.

     →ºadd and test a parameterized operationº
      → select the API just created
       → click "+ add operation".
        → in the URL, select get and enter /status/{code} in the resource.
          optionally, provide info associated to {code} param.
          (number for type, def. value ,...)
         → enter fetchdata for display name.
          → select "save"

     →ºtest an operationº
       Azure portal (alternatively use developer portal)
       → select the "test tab"
        → select "fetchdata"
         → press "send"
         response to operation is displayed

ºcreate and publish a productº
→ select "products" (menu left)
 → select "+ add" and fill:
   display name
   state                  ← press "published" to publish it
   requires subscription  ← check it if user is required to
                            subscribe before using it
   requires approval      ← check it to make admin review it
                            and accept/reject subscription
                            (auto-approved otherwise).
   subscript.count limit  ← limit simultaneous subscriptions
   legal terms
   APIs                   ← API list to include in product
  → select "create"

 ☞ tip: you can create or update user's subscription to
        a product with custom subscription keys through
        REST API or powershell command.
Swagger API docs
ASP.NET C# how-to with Swashbuckle
 (asp.NET core) main components:
 - swashbuckle.aspnetcore.swagger:
   swagger object model and middleware to expose swaggerdocument
   objects as JSON endpoints.
 - swashbuckle.aspnetcore.swaggergen:
   builds swaggerdocument objects directly from
   routes, controllers, and models.
   typically combined with the swagger endpoint middleware to
   automatically expose swagger JSON.
 - swashbuckle.aspnetcore.swaggerui:
   swagger ui tool, embedded version.
   - it includes built-in test harnesses for
     the public methods.

-ºpackage installationº
  visual studio → view → other windows
  → alt.1: package manager console
    → nav to the dir containing todoapi.csproj:
      → execute:
        $ install-package Bºswashbuckle.aspnetcoreº

  → alt.2: from "manage nuget packages" dialog
    right-click the project in solution explorer
    → manage nuget packages
     → set the package source to “”
         enter Bºswashbuckle.aspnetcoreº in search box
         select Bºswashbuckle.aspnetcoreº package from
         browse tab and click install

- ºadd and configure swagger middlewareº

  (Startup.ConfigureServices method)
  public void ConfigureServices(IServiceCollection services) {
    services.AddDbContext˂TodoContext˃(opt =˃ ...);

    servicesº.AddSwaggerGenº(c =˃        // ← STEP 1) register swagger generator
    {                                    //           by defining 1+ swagger docs.
      c.SwaggerDoc("v1", new Info {
        Title = "my API",
        Version = "v1"
      });
    });
  }

  using Swashbuckle.AspNetCore.Swagger;   // ← STEP 2) import the Info class

  (Startup.Configure method)
  public void Configure(IApplicationBuilder app) {
      appº.UseSwagger()º;                 // ← STEP 3) enable middleware to serve the
                                          //           generated swagger as JSON endpoint.
      appº.UseSwaggerUIº(c =˃ {           // ← STEP 3) enable swagger-ui
          c.SwaggerEndpoint("/swagger/v1/swagger.json", "my API v1");
       // c.RoutePrefix = string.Empty;   // ← un-comment to serve the ui at /
      });
  }

  STEP 4) test the setup: launch the app and navigate to
     http://localhost:"port"/swagger  ← ui

- ºdocumenting the object model - API info and descriptionº
 the configuration action passed to the AddSwaggerGen method adds
 information such as the author, license, and description:

 // register the swagger generator, defining 1+ swagger docs.
 services.AddSwaggerGen(c =˃
     c.SwaggerDoc("v1", new Info {
         Version = "v1",
         Title = "todo API",
         Description = "a simple example asp.NET core web API",
         TermsOfService = "none",
         Contact = new Contact {
             Name = "shayne boyer",
             Email = string.Empty,
             Url = ""
         },
         License = new License {
             Name = "use under licx",
             Url = ""
         }
     }));

- ºenabling XML commentsº
STEP 1) configure project to enable xml output doc.
   visual studio → solution explorer → right-click project
   → select edit "project_name".csproj and add the next lines:

   ˂propertygroup˃
       ˂generatedocumentationfile˃true˂/generatedocumentationfile˃
       ˂nowarn˃$(nowarn);1591˂/nowarn˃ ← semicolon-delimited list
   ˂/propertygroup˃                      enclose code with #pragma
                                         to suppress specific warnings, like
                                         #pragma warning disable cs1591
                                         public class program { .... }

   enabling xml comments produces warnings for undocumented
   public types and members; each undocumented type or member
   is flagged by warning cs1591.

STEP 2) configure swagger to use the generated xml file.

  public void ConfigureServices(IServiceCollection services) {
    services.AddDbContext˂TodoContext˃(opt =˃ ...);
    services.AddSwaggerGen(c =˃ {             // ←ºregister swagger generator,º
      c.SwaggerDoc("v1", new Info {           //  ºdefining 1+ swagger  docs  º
          Version = "v1",
          Title = "todo API",
          Description = "...",
          TermsOfService = "none",
          Contact = new Contact {
              Name = "shayne boyer",
              Email = string.Empty,
              Url = "https://..."
          },
          License = new License {
            Name = "...", Url = "https:..." }
      });

      // set the comments path for the swagger JSON/ui:
      var xmlFile = $"{Assembly.GetExecutingAssembly().GetName().Name}.xml";
      var xmlPath = Path.Combine(AppContext.BaseDirectory, xmlFile);
      c.IncludeXmlComments(xmlPath);
    });
  }

Bºreflection is used to build an xml file name matching that of theº
Bºweb API project.º the AppContext.BaseDirectory property is used to
  construct a path to the xml file.

Qº/// ˂summary˃                    º      ←······· triple-slash comments enhance swagger ui
Qº/// deletes a specific todoitem. º               by adding a description to the
Qº/// ˂/summary˃                   º               section header.
Qº/// ˂param name="id"˃˂/param˃    º               swagger ui translates Qº˂summary˃º as follows;
  [HttpDelete("{id}")]                             the ui is driven by the generated JSON schema:
  public IActionResult Delete(long id) {           "delete": {
    var todo = _context.TodoItems.Find(id);          "tags": [ "todo" ],
    if (todo == null) { return NotFound(); }       Qº"summary": "deletes a specific todoitem.",º
    _context.TodoItems.Remove(todo);                 "operationId": "apitodobyiddelete",
    _context.SaveChanges();                          ...
    return NoContent();                            }
  }

  /// ˂summary˃
  /// creates a todoitem.
  /// ˂/summary˃
Gº/// ˂remarks˃                   º                         ← Add action method documentation,
Gº/// sample request:             º                           supplementing ˂summary˃ information
Gº///                             º                           text, JSON, or xml allowed
Gº///     post /todo              º
Gº///     {                       º
Gº///        "id": 1,             º
Gº///        "name": "item1",     º
Gº///        "iscomplete": true   º
Gº///     }                       º
  /// ˂/remarks˃
  /// ˂param name="item"˃˂/param˃
  /// ˂returns˃a newly created todoitem˂/returns˃              ← ºdescribing response typesº
Bº/// ˂response code="201"˃returns the newly created item˂/response˃º
Bº/// ˂response code="400"˃if the item is null˂/response˃º
  public ActionResult˂TodoItem˃ Create(TodoItem item) {
      _context.TodoItems.Add(item);
      _context.SaveChanges();
      return CreatedAtRoute("gettodo", new { id = item.Id }, item);
  }

-ºdecorating the model with attributesº

  using System.ComponentModel;
  using System.ComponentModel.DataAnnotations;  // ← defines attributes providing
                                                //   hints to swagger ui components.
  namespace TodoApi.Models {
      public class TodoItem {
          public long Id { get; set; }

          [Required]                            // ← attribute
          public string Name { get; set; }

          [DefaultValue(false)]                 // ← attribute
          public bool IsComplete { get; set; }
      }
  }
Azure Functions
  - use-case: run small pieces of code ("functions").
              c#, f#, node.JS, java, or php.
              "unix text utils on the cloud".

                                storage    ← support BLOB/queue/file/table
                                account      triggers and logging function
                                 ↑ 1         executions depends on storage.
                                 ↓ 1
 │binding│   0...n  ←·····→    │function│   1 ←·····→   1   │trigger│
  └──┬──┘                       └───┬──┘                     └──┬──┘
optional declarative way            N                 starts code-execution.
to connect to input/output          ┇                 "usually", but not always,
data from within code.              1                 it also carries the payload:
                               │function App│
                               input → │ function │ → output data
                                                      output data = return value of
                                                      the function, and/or "out"
                                                      parameters in C#/C# script.

(RºWARN:º☞ hosting plan can NOT be changed after Func.App creation)
  ºCONSUMPTION PLANº                ·  ºAPP SERVICE PLANº
  --------------------------------- ·  ------------------------------------
  - hosts added/removed on demand   ·  -Bºneeded when a function runs forº
    based on ºinput events rateº    ·   Bºmore than 10 minutesº
  - pay for function exec. time.    ·  - runs functions as web apps.
  - execution Rºstops afterº        ·  - create/reuse App Service from apps
  Rºconfigurable timeoutsº          ·    at no additional cost.
    - 5 minutes def,                ·  - can scale between tiers to allocate
    -Rº10min maxº                   ·    different amounts of resources.
    - tuned in functionTimeout@     ·  -BºSupports linux.º
      "host.JSON" project file      ·  -Bº"always-on" must be enabledºsince
  - VMs running functions           ·  Rºfun.runtime goes idle after a fewº
  Rºlimited to 1.5 GB RAMº          ·  Rºminutes of inactivityº.
                                    ·    Only http triggers will "wake up"
                                    ·    the functions.
  - Fun.code is Bºstored on º
  BºA.filesºshares associated
    to the Fun's main storage account.
          Rºwarn:ºdeleting it deletes
                  the code.
  - all functions in a Func. app     ←BºScale controllerº monitor and
    share resources within an          applies Bºtrigger-type heuristicsº
    instance and scale                 to determine when to scale
    simultaneously. Distinct           out/in. Ex: for a queue
    Func.Apps scale independently.     storage, the next tuple is used:
                                       (queue size, age-of-oldest-msg)
    - Func.App scales up to º200 max.instancesº.
    - A single instance may process
     "infinite/no-limit" messages
    -ºnew instancesº will be
      allocatedºat most once every 10 secs.º
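the exact heuristics of the scale controller are internal to Azure; the sketch below only illustrates the idea of deciding from the (queue size, age-of-oldest-msg) tuple, with invented thresholds and the documented 200-instance cap:

```python
# Hypothetical sketch of a queue-based scale decision. The thresholds
# (100 msgs/instance, 60 s age) are invented for illustration; only the
# 200-instance cap comes from the documented limit.

def scale_decision(queue_length: int, oldest_msg_age_s: float,
                   instances: int, max_instances: int = 200) -> int:
    """Return the suggested instance count, capped at 200."""
    if queue_length == 0:
        return max(instances - 1, 0)              # scale in when idle
    if queue_length > 100 * instances or oldest_msg_age_s > 60:
        return min(instances + 1, max_instances)  # scale out under backlog
    return instances                              # steady state
```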

- Existing TriggerºTEMPLATESº:
  -ºhttpTriggerº    :
  -ºtimerTriggerº   :
  -ºgithub webhookº :
  -ºBLOBtriggerº    : on consumption plan, there can be up to a
                      10-minute delay in processing new BLOBs
                      when function app has gone idle. switch
                      consumption to app service plan +
                      "always on" enabled, or use the event
                      grid trigger.

  -ºqueuetriggerº    (messages arriving to aºstorage queueº)
  -ºeventhubtriggerº (events delivered to anºA.event hubº)
  -ºtwilio (sms messages) triggerº

BºTrigger/Bind Exampleº
  - Triggers and Bindings are configured either with:
    -Bº"function.JSON" fileº                    (← Azure Portal)
    -  code decorator attributes in code params (← C#/C#Script)
       and functions.

  BºExample 1. Using Azure Portalº
    -BºInput message → function → write row to A.Tableº
       ^^^^^^^^^^^^^              ^^^^^^^^^^^^^^^^^^^^
       queue-storage              table-storage
       trigger                    output binding
       └────┬──────┘              └─────┬──────┘
            Use theºfile 'function.JSON'ºto define them
                To edit the file go to A.Portal
                  → ...  → function → "advanced editor"
                                      @"integrate tab"

      └ Ex: 'function.JSON':
          "bindings": [
          {                          ←BºInputºbinding (== "trigger")
          ·º"type": "queuetrigger",º
     ┌→   · "name": "order",         ← Fun. param receiving
     ·    ·                            the input
     ·    · "direction": "in",       ← always 'in' for triggers
     ·    · "queuename": "queue01"   ← queue "ID" to monitor
     ·    · "connection": "storage01"← pre-setup in app setting
     ·    · "datatype": "string"     ← optional. One of:
     ·    ·                            string | byte array | stream
     ·    ·                                     ^^^^^^^^^^
     ·    ·                                  binary,custom type,
     ·    ·                                  deserialize POCO
     ·    }
     ·    ,
     ·    {                          ←Bºoutputºbinding
     ·    ·º"type": "table",º
     ·    · "tablename": "outtable",
    ┌·→   · "name": "$return",       ← how fun.provides output
    |·    ·                            "out" param available in C#/C#Script
    |·    · "direction": "out",      ← in|out|inOut
    |·    · "connection": "conn01"   ← pre-setup app setting,
    |·    ·                            avoiding to store secrets in JSON.
    |·    }
    |·    ]
    |·  }
     |·└ Ex. Function (C# script)
     |·  #r "Newtonsoft.Json"
     |·  using Microsoft.Extensions.Logging;
     |·  using Newtonsoft.Json.Linq;
     |·  public class Person {
     |·      public string PartitionKey { get; set; }
     |·      public string RowKey { get; set; }
     |·      public string Name { get; set; }
     |·      public string MobileNumber { get; set; }
     |·  }
     |└ · · · · · · · · · · · · · · · · · · ┐
     |   public static Person Run(JObject order, ILogger log) {
     └→    return new Person() {
             PartitionKey = "orders",
             RowKey = Guid.NewGuid().ToString(),
             Name = order["name"].ToString(),
             MobileNumber = order["mobilenumber"].ToString() };
         }
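the same queue-message → table-row mapping as the C# script above, sketched in Python (field names mirror the example; the row key is a fresh GUID):

```python
# Same mapping as the C# script above: an incoming queue message
# (a dict here) becomes a table row with a generated RowKey.
import uuid

def run(order: dict) -> dict:
    return {
        "partitionkey": "orders",
        "rowkey": str(uuid.uuid4()),
        "name": order["name"],
        "mobilenumber": order["mobilenumber"],
    }

row = run({"name": "ada", "mobilenumber": "555-0100"})
```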

  BºExample 2: Using code decorator attributes inº
  Bº           code params and functions (C#/C#Script)º

    -BºInput message → function → BLOBº
       ^^^^^^^^^^^^^              ^^^^
       queue-storage              output
       trigger                    binding

      [return: Blob("output-container/{id}")]
      public static string Run(
        [QueueTrigger("inputqueue")] WorkItem input,
        ILogger log)
      {
          string json = string.Format("{{ \"id\": \"{0}\" }}", input.Id);
          log.LogInformation($"C# script processed queue message: {json}");
          return json;
      }

- expressions resolving sources to values:
  ex 1:
  BLOB path property = container/º{queuetrigger}º
                           the expression resolves to the
                           message text. Ex.: for message
                           "helloworld", the BLOB
                           "container/helloworld" is created.

  └ bindingºexpressions typesº
    └ app settings: (secrets, ENV.VARs, ...)
    ·ºpercent signsº(vs curly braces) used. Ex:
    · º%env_var01%º/newBLOB.txt
    ·  ^^^^^^^^^^^
    ·  local-run values come from
    · º'local.settings.JSON'º
    └ trigger filename: BLOB path for trigger
    └ trigger  metadata (vs data payload):
    · - can be used in input C# params|properties, or via
    ·   the context.bindings object in javascript.
    ·   ex:
    ·   a.queue storage trigger supports the following
    ·   metadata properties:
    ·   (accessible in "function.JSON")
    ·   - queuetrigger     - insertiontime
    ·   - dequeuecount     - nextvisibletime
    ·   - expirationtime   - popreceipt
    ·   - id
    └ JSON payloads:
      - can be referenced in the configuration of other
        bindings in the same function, and in function code.
        - ex: "function.JSON" file for a webhook function
              that receives
              { Qº"BLOBname"º:"helloworld.txt" }

        "bindings": [
          {                                  ← Input trigger
          · "name": "info",
          ·º"type": "httptrigger",º
          · "direction": "in",
          · "webhooktype": "genericJSON"
          },
          {
          · "name": "BLOBcontents",
          · "type": "BLOB",
          · "direction": "in",               ← Input from BLOB
          · "connection": "webJobsStorage01" ← PRE-SETUP value
          · "path": "strings/{BLOBname}",    ← resolved from payload
          },
          {                                  ← Output result binding
          · "name": "res",
          · "type": "http",
          · "direction": "out"
          }
        ]

        "type": "BLOB",
        "name": "BLOBoutput",
        "direction": "out",
        "path": "my-output-container/{datetime}"    ←  2018-02-16t17-59-55z.txt
        "path": "my-output-container/{rand-guid}"
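a small resolver sketch for the two expression styles above: `{token}` values taken from trigger metadata/payload, `%name%` values from app settings (local.settings.JSON when running locally). The helper and its dicts are illustrative only:

```python
# Illustrative resolver for binding expressions: {token} comes from
# trigger metadata or the JSON payload, %name% from app settings.
import re

def resolve(path: str, metadata: dict, settings: dict) -> str:
    path = re.sub(r"\{(\w+)\}", lambda m: metadata[m.group(1)], path)
    path = re.sub(r"%(\w+)%", lambda m: settings[m.group(1)], path)
    return path

# matches the {queuetrigger} example above:
out = resolve("container/{queuetrigger}", {"queuetrigger": "helloworld"}, {})
```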
Best Practices
  - avoid long running functions
  - cross function communication:
  Bºdurable-functions and Logic-apps are built to manage stateº
  Bºtransitions and communication between multiple functions. º
  BºIn any other case, it is generally a best practice to useº
  Bºstorage queues for cross function communicationº
    ☞ Remind: messages limited to 64 KB.
    - Use service bus queue for larger sizes
      - up to 256 KB in standard tier
      - up to   1 MB in premium tier
    - service bus topics (vs queue) also allows
      for message filtering before processing.

    - event hubs preferred (over storage queue and service-bus)
      if very-high-volume communication is expected.
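the size limits above can be folded into a small chooser; the thresholds come from the text, while the function itself is only illustrative:

```python
# Picks a cross-function transport from the size limits quoted above
# (64 KB storage queue, 256 KB service bus standard, 1 MB premium);
# the chooser itself is an illustrative sketch.

def pick_transport(msg_size_kb: float, high_volume: bool = False) -> str:
    if high_volume:
        return "event hubs"
    if msg_size_kb <= 64:
        return "storage queue"
    if msg_size_kb <= 256:
        return "service bus (standard)"
    if msg_size_kb <= 1024:
        return "service bus (premium)"
    return "store payload elsewhere, pass a reference"
```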

  - write functions to be Bºstateless and idempotentº (if possible),
    associating any needed state information with the input/output data.

  - write defensive functions:
    - assume exception arise at any time (external involved
      services errors, networking outages, quota limits, ...).
    - design for resilience to failures.
      ex: function logic:
        - query 10_000 rows in DB.
        - create a queue message for each row.
        Defensive approach:
        - track each row as "completed". if the function fails at
          row 5000, the re-run can skip the first 5000 rows and
          avoid creating duplicates.

    - allow no-op for duplicated inputs.
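a sketch of the defensive/idempotent approach above: completed row ids are tracked, so a re-run after a crash produces no duplicates (in real code the "completed" set would live in durable storage, e.g. a table):

```python
# Defensive/idempotent sketch: completed row ids are recorded, so if
# the function dies mid-batch, a re-run enqueues no duplicate messages.

def process_rows(rows, completed: set, enqueue) -> int:
    """Enqueue one message per row, skipping rows already marked done."""
    sent = 0
    for row_id in rows:
        if row_id in completed:
            continue                # no-op for duplicated input
        enqueue(row_id)
        completed.add(row_id)       # persist this in real code (e.g. a table)
        sent += 1
    return sent

queue, done = [], set()
process_rows(range(5), done, queue.append)   # first run
process_rows(range(5), done, queue.append)   # re-run after a "crash": no dupes
```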

  └  re-use, share and manage connections.
  └  don't mix test+production code in the same function.
  └Bºif a shared assembly is referenced in multiple .NET functions, º
   Bºput it in a common shared folder. reference the assembly with aº
   Bºstatement similar to (using c# scripts .csx):º
   Bº#r "..\shared\myassembly.dll"º in order
     not to accidentally deploy multiple versions of the same binary.
   ☞ Remember: All functions in a funct. App share the same resources.

  └  Skip Rºverbose logging in production codeº.

  └  use async code, avoid (thread-blocking) calls
     -RºAvoid referencing result property / calling wait methodº
        on a task instance that could lead to Rºthread exhaustionº.

  └  Use message batches when possible (event hub, ...)
     Max batch size can be configured in "host.JSON"
     as detailed in reference documentation.

  └  C#: Use strongly-typed array.

  └  configure host.JSON behavior (host runtime, trigger behaviors)
     to better handle concurrency.
     - concurrency for a number of triggers can be tuned,
       often by adjusting the values in these options.
     ☞ settings apply to all functions within the app,
       per single instance of the function app.
       ex: a function app with:
       - 2 http functions, each scaling to 10 instances, sharing
         the same resources.
       - concurrent requests set to 25.
       Any incoming request to any http trigger counts towards
       the shared per-instance limit of 25 concurrent requests.
       The 2 functions together effectively allow 250 concurrent
       requests (10 instances * 25 concurrent requests per instance).
Durable fun.
  - functions and webjobs Bºextension allowing to writeº
  Bºcomplex stateful functions in a serverless environmentº
    local state is never lost on process recycle|VM reboots.
  - Can be seen as an "orchestrator of functions".
  - Provides for stateful workflows in an Bº"orchestrator function"º.
  - With Durable Functions, it is possible to make functions
    dependent on each other, with separate billing costs.
    Ex: Function A takes 300 milliseconds to execute,
        but it calls Function B, which takes another
        2000 milliseconds to complete. Function A is only
        charged for its 300 milliseconds.
  - orchestrator functions advantages:
    -Bºdefine workflows in codeº(vs JSON schemas or designers)
    - Code can sync/async call other functions.
      Return value can be saved to local variables.
      -BºAutomatic progress checkpoint when function awaitsº.

BºUse Casesº:
  └ pattern #1: function chaining in order.
    often output of one function needs to be applied
    to the input of another function.  (a la Unix Pipeline).
    Ex code:
    public static async Task˂object˃
      Run(BºDurableOrchestrationContext ctxº) {
        try {
            // ☞ "f1", "f2", ... : names of other functions in
            // the funct-app
            var x = await Bºctxº.CallActivityAsync˂object˃("f1");
            var y = await Bºctxº.CallActivityAsync˂object˃("f2", x);
            var z = await Bºctxº.CallActivityAsync˂object˃("f3", y);
            return  await Bºctxº.CallActivityAsync˂object˃("f4", z);
                     ^     └┬┘
                     │ ☞ Bºctx allows invoking other functions   º
                     │   Bºby name, passing params, and returningº
                     │   Bºfunction output.º
                    await triggers a progress-checkpoint.
                  Bºif the process|VM recycles during execution,º
                  Bºit will resume from theºpreviousºawait call.º
        } catch (Exception) {
          // ... handling/compensation goes here
        }
    }

    note: subtle differences exist between C# ←→ C#Script.
    C# requires durable parameters to be decorated with the respective
       attribute (Ex: [OrchestrationTrigger] for the
       DurableOrchestrationContext param); otherwise the runtime
       cannot inject the variables into the function.
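the chaining pattern, transliterated to Python's asyncio for illustration (f1..f4 are stand-ins for the activity functions; Durable Functions additionally checkpoints progress at each await):

```python
# The chaining pattern above, sketched with asyncio; each awaited
# result feeds the next call, like the f1→f2→f3→f4 pipeline.
import asyncio

async def f1(_=None): return 1
async def f2(x): return x + 1
async def f3(x): return x * 10
async def f4(x): return f"result:{x}"

async def orchestrator():
    x = await f1()
    y = await f2(x)        # in Durable Functions, each await is also
    z = await f3(y)        # a progress checkpoint for replay
    return await f4(z)

result = asyncio.run(orchestrator())
```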

  └ ºpattern #2: fan-out/fan-inº
    execute multiple functions in parallel, then wait
    for all to finish. (often some aggregation work is done on
    individual results).
    - with non-durable functions, fanning out can be done by
      sending multiple messages to a queue (or another custom
      mechanism), while fanning back in is much more challenging:
      code must track when the queue-triggered functions end, and
      store the function outputs.
    - with     durable functions, the code is much simpler. Ex:
      public static
      async Task Run(DurableOrchestrationContext ctx) {
          var parallelTasks = new List˂Task˂int˃˃();

          // get a list of n work items to process in parallel
          object[] workBatch = await Bºctxº.
                 CallActivityAsync˂object[]˃("f1");

          for (int i = 0; i ˂ workBatch.Length; i++) {
              Task˂int˃ Bºtaskº = Bºctxº.       //  ← fan-out work distributed
                   CallActivityAsync˂int˃       //     to N f2 instances
                   ("f2", workBatch[i]);
              parallelTasks.Add(task);          // ← dynamic list tracks tasks
          }

          await TaskBº.WhenAllº(parallelTasks); //  ← wait for all called fun. to finish.
                                                //    Similar to barrier shell synchronization
          int sum = parallelTasks.              // ← Aggregate all outputs
                    Sum(t =˃ t.Result);
          await ctx.                            // ← send result to f3
                CallActivityAsync("f3", sum);
      }
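the same fan-out/fan-in shape in Python, using asyncio.gather as the barrier (f2 is a stand-in for the per-item activity function):

```python
# Fan-out/fan-in sketched with asyncio.gather: the work batch is fanned
# out to parallel f2 calls, then results are summed, as with "f3" above.
import asyncio

async def f2(item: int) -> int:
    return item * item                          # stand-in for per-item work

async def orchestrator(workbatch):
    tasks = [f2(i) for i in workbatch]          # fan-out
    results = await asyncio.gather(*tasks)      # fan-in: wait for all
    return sum(results)                         # aggregate for "f3"

total = asyncio.run(orchestrator([1, 2, 3, 4]))   # 1+4+9+16 = 30
```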

  └ ºpattern #3: async http APIsº
     coordinating the state of long-running operations with
     external clients.
     Solution without durable functions:
     - trigger the long-running action in a (quick-to-return) http call,
       leaving the operation running in the background.
     - the client periodically polls some end-point to learn when
       the operation completes.

     durable functions simplify this pattern:
     - once an instance is started, Durable functions
       automatically exposes a  webhook http APIs that
       clients can use to query the progress.
        - STEP 1) start an orchestrator and capture its id.
          $ BASE_URI="https://myfunc.Azurewebsites.NET/"
        $º$ curl -X POST ${BASE_URI}/orchestrators/dowork   º
        $º  -H "content-length: 0" -i                       º
            → http/1.1 202 accepted
            → content-type: application/JSON
            → location:
            → https://...
            → {"id":º"b79b...."º, ...}

          $ STATUS=${BASE_URI}/admin
          $ STATUS=${STATUS}/extensions/durabletaskextension/
                        ^ automatically exposed by the Dura.Fun. engine
          $ STATUS=${STATUS}/b79b...
        - STEP 2) Query status
        $º$ curl -i ${STATUS}                               º
            → http/1.1 202 accepted
            → ...
            → {"runtimestatus":Gº"running"º,"lastupdatedtime":"...", ...}

        - STEP 3) (let time pass, then query again)
        $º$ curl -i ${STATUS}                               º
            http/1.1 200 ok
            {"runtimestatus":Bº"completed"º,"lastupdatedtime":"...", ...}
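the client side of this 202-polling handshake can be sketched without any network: the fake endpoint below stands in for the Durable Functions status URL, returning 202/"running" until the orchestration "completes":

```python
# Client-side sketch of the 202-polling handshake above; the fake
# endpoint stands in for the auto-exposed Durable Functions status URL.

def make_fake_status_endpoint(completes_after: int):
    state = {"polls": 0}
    def poll():
        state["polls"] += 1
        if state["polls"] >= completes_after:
            return 200, {"runtimestatus": "completed"}
        return 202, {"runtimestatus": "running"}
    return poll

def wait_for_completion(poll, max_polls: int = 10) -> dict:
    for _ in range(max_polls):
        code, body = poll()
        if code == 200 and body["runtimestatus"] == "completed":
            return body
    raise TimeoutError("orchestration did not finish in time")

status = wait_for_completion(make_fake_status_endpoint(completes_after=3))
```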

     - http-triggered function to start a new orchestrator
       function instance wrapping function º"wrappedFunction"º:
       public static
       async Task˂HttpResponseMessage˃ Run(
         HttpRequestMessage req,
         DurableOrchestrationClient Bºstarterº,
         string ºwrappedFunctionº,   // ← value taken from incoming URL
         ILogger log) {
         // function input comes from the request content.
         dynamic eventData = await req.Content.ReadAsAsync˂object˃();
         string instanceId = await Bºstarterº.   // ← Start Orchestra.
                StartNewAsync(wrappedFunction, eventData);
         return starter
                .CreateCheckStatusResponse(req, instanceId);
       }

  └ºpattern #4: monitoringº
    - flexible recurring process in a workflow, ex:
      polling until condition is/are met
    - A regular timer-trigger works for simple scenarios,
      with static intervals. Managing instance lifetimes can
      become complex.
    - durable functions allows for flexible recurrence intervals,
      task lifetime management, and the ability to create multiple
      monitor processes from a single orchestration:
      Ex: reversing pattern #3 async http API scenario.
           instead of exposing an endpoint with status
           wrapping a "wrappedFunction" for external
           clients, make the long-running monitor consume
           an external endpoint, waiting for some state change.
      - multiple monitors can be set observing arbitrary endpoints
        in few lines-of-code.
      - monitors can end execution when some condition is met or
        terminated by the DurableOrchestrationClient.
      - wait interval can be changed based on some condition
        (exponential backoff)
        Ex: C# script
        public static
        async Task Run(BºDurableOrchestrationContext ctxº) {
          int jobId           = Bºctxº.GetInput˂int˃();
          int pollingInterval = GetPollingInterval();
          DateTime expiryTime = GetExpiryTime();
          while (Bºctxº.CurrentUtcDateTime ˂ expiryTime) {
            var jobStatus = await Bºctxº.CallActivityAsync˂string˃
                           ("getjobstatus", jobId);
            if (jobStatus == "completed") {
              await Bºctxº.CallActivityAsync(
                 "sendalert", jobId);
              break;
            }
            // sleep until this time
            var nextCheck = Bºctxº.CurrentUtcDateTime
                            .AddSeconds(pollingInterval);
            await Bºctxº.
               CreateTimer(nextCheck, CancellationToken.None);
          }
          // ... further work here or orchestration end
        }
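the exponential-backoff variant mentioned above (the C# sample uses a fixed pollingInterval) can be sketched as a delay schedule; the doubling rule, cap, and values are illustrative:

```python
# Exponential-backoff sketch for the monitor's wait interval: each
# check doubles the delay up to a cap (values are illustrative).

def next_interval(current_s: float, cap_s: float = 300.0) -> float:
    return min(current_s * 2, cap_s)

def schedule(initial_s: float, checks: int) -> list:
    """Delays used by the first `checks` polls."""
    delays, d = [], initial_s
    for _ in range(checks):
        delays.append(d)
        d = next_interval(d)
    return delays

plan = schedule(initial_s=10, checks=6)   # 10, 20, 40, 80, 160, 300
```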

  └ ºpattern #5: (slow) human interactionº
    People are not as highly available and responsive as computers.
    The orchestrator uses a durable timer to request approval
    and escalates in case of timeout.
    It waits for an external, human-generated event.

    Ex C# script:
    public static
    async Task Run(BºDurableOrchestrationContext ctxº) {
      using (var timeoutCts = new CancellationTokenSource()) {
        DateTime dueTime = Bºctxº.CurrentUtcDateTime.AddHours(72);
        Task durableTimeout = Bºctxº.CreateTimer(dueTime, timeoutCts.Token);

        Task˂bool˃ approvalEvent =
            Bºctxº.WaitForExternalEvent˂bool˃("approvalevent");

        if ( approvalEvent ==
                await TaskBº.WhenAnyº(approvalEvent, durableTimeout)) {
            timeoutCts.Cancel();
            await Bºctxº.CallActivityAsync("processapproval",
                                           approvalEvent.Result);
        } else {
            await Bºctxº.CallActivityAsync("escalate");
        }
      }
    }
    an external client can deliver the human-generated event
    notification to a waiting orchestrator function using either
    the built-in http APIs or the
    DurableOrchestrationClient.RaiseEventAsync API from another
    function:
    public static async Task Run(string instanceId,
      DurableOrchestrationClient client) {
        bool isApproved = true;
        await client.RaiseEventAsync(instanceId, "ApprovalEvent",
                                     isApproved);
    }

- behind the scenes, durable functions extension is built on top of
  the durable task framework, an open-source library on github for
  building durable task orchestrations.
-Bºmuch like how Azure functions is the serverless evolution of   º
 BºAzure webjobs, durable functions is the serverless evolution ofº
 Bºthe durable task framework.                                    º
-OºIt is heavily used, within Microsoft and outside, toº
 Oºautomate mission-critical processes.º

- It reliably maintains execution state using a design pattern known
  as Bºevent sourcingº:  append-only-store recording full series
  of actions taken by the function orchestration.
  Benefits include:
  - performance, scalability, responsiveness
    compared to "dumping" full runtime state.
  - eventual consistency for transactional data
  - full history and audit trails, enabling reliable compensating
    actions
  - Event sourcing by this extension is transparent. under the
    covers, the await operator in an orchestrator function yields control
    of the orchestrator thread back to the durable task framework
    dispatcher. the dispatcher then commits any new actions that the
    orchestrator function scheduled (such as calling one or more child
    functions or scheduling a durable timer) to storage. this transparent
    commit action appends to the execution history of the orchestration
    instance. the history is stored in a storage table. the commit action
    then adds messages to a queue to schedule the actual work. at this
    point, the orchestrator function can be unloaded from memory. billing
    for it stops if you're using the Azure functions consumption plan.
    when there is more work to do, the function is restarted and its
    state is reconstructed.
  - once an orchestration function is given more work to do
   (for example, response msg received or a durable timer expires),
   the orchestrator wakes up again and re-executes the entire function
   from the start in order to rebuild the local state.
   if during this replay the code tries to call a function
   (or do any other async work), the durable task framework consults
   with the execution history of the current orchestration.
   if it finds that the activity function has already executed
   and yielded some result, it replays that function's
    result, and the orchestrator code continues running. this
    continues happening until the function code gets to a point
    where either it is finished or it has scheduled new async work.

  - the replay behavior creates constraints on the type of code that
    can be written in the function. For example, orchestrator
  Bºcode must be deterministicº, as it will be replayed multiple
    times and must produce the same result each time.

- º(2020-03) LANGUAGE SUPPORTº
  - C# (functions v1 and v2)
  - F# and javascript (functions v2 only)
  - support for all other languages is planned.

  - Durable functions extension transparently uses queues, tables,
    and BLOBs to persist execution history state and to trigger
    function execution.
  -RºA separate storage account may be needed due to storageº
   Rºthroughput limits.º

  - queue messages: used to schedule activity functions and
    receive their responses.
    In the consumption plan, these queues are monitored by
    Azure functions to scale compute instances out/in.
  - table storage : stores the execution history of orchestrator
    instances. A tool like "microsoft Azure storage explorer" can
    be used to inspect the history.
  -  storage BLOBs: used primarily as leasing mechanism to coordinate
                    the scale-out of orchestration instances across
                    multiple VMs.
                    Also used to hold data for large messages which
                    cannot be stored directly in tables or queues.
Quarkus Azure Functions
ºquarkus-azure-functions-httpº extension allows you to write
microservices with RESTEasy (JAX-RS), Undertow (servlet),
Vert.x Web, or Funqy HTTP and make these microservices
deployable to the Azure Functions runtime.

One azure function deployment can represent any number of JAX-RS,
servlet, Vert.x Web, or Funqy HTTP endpoints.
App Automation
- Automation Runbooks (Python/Powershell/GUI) automate resource management.
- Capabilities:
  - Process Automation:  Orchestration
  - Configuration Management: Collect inventory, track changes, desired state
  - Update Management: Assess compliance, scheduled updates
  - Windows/Linux, Azure/On-Premises.

- In addition, PowerShell Desired State Configuration (DSC) is available

Runbook Gallery -@[] contains runbooks for common tasks (shutting down, deploying VMs...)
A.table (NoSQL)
Azure table storage overview
ºintroduction to table storage in Azureº
a.table storage (datastore) service:
- stores large Bºstructuredº RºNoSQLº (key/attribute) data,
  (Rºschemaless designº)
- access is fast and cost-effective (vs SQL)

- ºlimitsº:
  - up to the capacity limit of the storage account.
  - any number of entities in a table
  - any number of tables,
  - tables scale on demand.
  - entity size: up to 1mb in table storage
                 up to 2mb in Cosmos DB
                 up to 252 properties.

- Azure Cosmos DB table API: premium offering :
  - throughput-optimized tables
  - global distribution
  - automatic secondary indexes.
  - (Cosmos DB also supports SQL and other DDBB models)

- it accepts authenticated calls from inside and outside the Azure cloud
ºcommon use cases:º
 - storing tbs of structured data
 - datasets that ºdon't require complex joins,º foreign keys,
   or stored procedures and Rºcan be denormalizedº for fast access
 - quickly querying data using a clustered index
 - accessing data using the odata protocol and linq
   queries with wcf

storage account 1←→n table 1←→n entity 1←→1 property set
                                              (up to 252 keys)

http://${storage_acc}  ${table} ← a.table storage
http://${storage_acc}${table} ← a.Cosmosdb table API
                      direct access with ºodata protocolº.
                      more info at:

- all access to Azure storage   is done through a Bºstorage   accountº
- all access to Azure Cosmos DB is done through a Bºtable API accountº

- each entity also has three system properties that specify:
 -ºpartition keyº:Bºentities with the same partition key º
                  Bºcan be queried more quickly, and     º
                  Bºinserted/updated in atomic operationsº
 -ºrow key      º: unique id within partition.
 -ºtimestamp    º: last-modified timestamp (lmt) used to
                   manage optimistic concurrency.
                   table REST API "etag"  = lmt
                   (used interchangeably)
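As a rough illustration (plain Python dicts, not an Azure SDK), entities can be modeled as property sets addressed by the (partitionkey, rowkey) pair; both keys known means a direct point lookup:

```python
# Illustrative model of table entities (plain dicts, no Azure SDK):
# each entity is a property set addressed by (PartitionKey, RowKey).
table = {}

def upsert(entity):
    key = (entity["PartitionKey"], entity["RowKey"])  # primary key
    table[key] = entity

upsert({"PartitionKey": "marketing", "RowKey": "00001",
        "firstname": "don", "lastname": "hall", "age": 34})
upsert({"PartitionKey": "marketing", "RowKey": "00002",
        "firstname": "jun", "lastname": "cao", "age": 47})

# point lookup: both keys known -> direct, indexed access
entity = table[("marketing", "00001")]
```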

ºchoosing table storage or Cosmos DB table APIº
"Cosmos-DB-table-API" and "table-storage" SDKs:
- share the same table data model

           |table storage              | Cosmos DB
           |                           | table API
latency    |fast/but no upper bounds   | ˂10-ms reads
           |                           | ˂15-ms writes
           |                           | (99th percentile)
           |                           | any scale/worldwide
throughput |variable model             | highly scalable with
           |tables limit: 20,000 op/s  | dedicated reserved throughput
           |                           | per table that's backed by
           |                           | slas. accounts have no upper
           |                           | limit on throughput and
           |                           | support ˃10 million
           |                           | operations/s per table.
global     |single region              | turnkey global distribution
distribu.  |opt: 1 readable secondary  | from 1 to 30+ regions.
           |    read region for HA     | automatic/manual failovers
           |you can't init failover    | any time, anywhere
indexing   |only primary index         | - automatic, complete indexing
           |onºpartitionkeyºandºrowkeyº|   on all properties.
           |no secondary idxs.         | - no index management.
query      |pk index can be used.      | queries can take advantage of
           |scans otherwise.           | automatic indexing on
           |                           | properties
consistency|strong   within primary reg| five well-defined consistency
           |eventual within secondary "| levels to trade off
           |                           | availability, latency,
           |                           | throughput, and consistency
           |                           | based on your application
           |                           | needs.
pricing    |storage-optimized.         | throughput-optimized.
slas       |99.99% availability.       | 99.99% availability sla for
           |                           | all single region accounts
           |                           | and all multi-region accounts
           |                           | with relaxed consistency, and
           |                           | 99.999% read availability on
           |                           | all multi-region database
           |                           | accounts. industry-leading,
           |                           | comprehensive slas at
           |                           | general availability.

- SDKs for .NET (one for Table storage and
  Microsoft.Azure.CosmosDB.Table for Cosmos DB Table API, sharing
  the same APIs and signatures; the Cosmos DB Table SDK is not yet
  available for .NET Core),
- Python (Table + Cosmos DB), Java,
  Node.JS (Bºclient Browser compatible!!º),
  PowerShell (AzureRmStorageTable module),
  C++, Ruby and PHP.

table design
ºdesign to be read-efficient:º (querying/read-heavy applications)
- think about the queries (especially the latency-sensitive ones).
- specify pk = (partitionkey, rowkey) in queries.
- consider duplicate copies of entities, ºstoring the same entityº
 ºmultiple timesº (with different keys) for efficient queries.
- considerºdenormalizing dataº: store summary entities so that
  queries for aggregate data only need to access a single entity.
- use compound key values: the only keys you have are
  (partitionkey/rowkey). for example, use compound key values to
  enable alternate keyed access paths to entities.
- use query projection, reducing the amount of data transferred
  over the net with queries that select only the needed fields.
ºdesign to be write-efficient:º
-ºdo not create hot partitionsº: choose keys that spread
  requests across multiple partitions.
-ºavoid spikes in trafficº: smooth the load over time.
-ºdon't necessarily create a separate table for each type of entityº:
  when you require atomic transactions across entity types, you can
  store these multiple entity types in the same partition of the
  same table.
ºdesign scalable and performant tablesº
- consider factors such as performance, scalability, and cost.
- Rºdesigns can look counter-intuitive/wrong to people familiarº
  Rºwith relational DDBBsº: the design differences reflect the
  table service target of supporting billions of entities ("rows")
  with high transaction volumes.
  ex: table storing employee+department entities:

  partitionkey rowkey timestamp
  marketing    00001  2014-...:32z  firstname lastname age email
                                    don       hall     34
  marketing    00002  2014...:34z   firstname lastname age email
                                    jun       cao      47
  marketing    dpmt   2014...:30z   departmentname employeecount
                                    marketing      153
  sales        00010  2014...:44z   firstname lastname age email
                                    ken       kwok     23

- the choice of (partitionkey, rowkey) is fundamental to good
  table design.
- to store complex data types in a single property, you must use
  JSON/xml/....
- partitionkey/rowkey values are indexed to create a clustered
  index enabling fast look-ups. however, the table service does
  not create any secondary indexes, soºpartitionkey and rowkeyº
 ºare the only indexed properties.º
Bºa solution may consist of a single table that contains all º
Bºentities organized into partitions, but typically a solutionº
Bºhas multiple tables. º
  tables help you to logically organize entities, manage access
  with acls, or drop an entire table using a single storage
  operation.

BºTABLE PARTITIONSº
- (account name, table name, partitionkey) identifies the
  partition within the storage service where the table service
  stores the entity.
Bºas well as being part of the addressing scheme for º
Bºentities, partitions define a scope for transactions,º
Bºand form the basis of how the table service scales. º
  node 1 ←→ 1+ partitions
  ^^
  the table service scales dynamically, load-balancing partitions
  across nodes: it can split the range of partitions serviced onto
  new/different nodes.

entity group transactions (batch transactions)
table service entity group transactions (egts):
- the only built-in mechanism for performing atomic updates across
  multiple entities, sometimes referred to as batch transactions.
  Rºwarn: can only operate on entities stored in the same partitionº
  Rº(same partition key, different row key)º
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 ºcore reason for keeping multiple entity types in the same table/partitionº
- egt limit: at most 100 entities.
- if simultaneous egts operate on common entities, processing can
  be delayed.
- trade-off: more partitions increase scalability/load-balancing;
  more partitions limit atomic transactions and strong consistency.

--------------------+----------------------------------------
capacity            |
--------------------+----------------------------------------
total capacity      | 500 tb
of account          |
--------------------+----------------------------------------
num. tables in an   | limited by capacity of storage account
Azure storage acc.  |
--------------------+----------------------------------------
# partitions in     | limited by capacity of storage account
table               |
--------------------+----------------------------------------
# of entities       | limited by capacity of storage account
in partition        |
--------------------+----------------------------------------
size of an          | up to 1 mb with 255 properties max
individual entity   | (including partitionkey/rowkey/timestamp)
--------------------+----------------------------------------
size of partitionkey| string up to 1 kb
--------------------+----------------------------------------
size of the rowkey  | string up to 1 kb
--------------------+----------------------------------------
size of an entity   | tx can include at most 100 entities and
group transaction   | payload must be less than 4 mb.
                    | an egt can only update an entity once.
--------------------+----------------------------------------

BºDESIGN FOR QUERYINGº
- typically, a read-scalable design is also efficient for write
  operations.
- design queries → design indexes.
  note: with the table service, it's difficult and expensive to
  change the index design (partitionkey, rowkey) later: you cannot
  add indexes a posteriori like in relational DDBBs.
  ex: table storing employee entities (timestamp not shown):
  ┌─────────────────────────────────────────┐
  │column name                     data type│
  ├─────────────────────────────────────────│
  │partitionkey (department name)  string   │
  ├─────────────────────────────────────────│
  │rowkey (employee id)            string   │
  ├─────────────────────────────────────────│
  │firstname                       string   │
  ├─────────────────────────────────────────│
  │lastname                        string   │
  ├─────────────────────────────────────────│
  │age                             integer  │
  ├─────────────────────────────────────────│
  │emailaddress                    string   │
  └─────────────────────────────────────────┘
fastest lookups:
-ºpoint queryº (partitionkey, rowkey): used to locate a single
  entity.
  fastest for high-volume (and/or lowest-latency) lookups.
  ex: $filter=(ºpartitionkeyº eq 'sales') and (ºrowkeyº eq '2')
-ºrange queryº (partitionkey, rowkey-range): returns 1+ entities.
  ex: $filter=partitionkey eq 'sales'
      and rowkey ge 's' and rowkey lt 't'
  Rºwarn: using "or" in the rowkey filter results in a partitionº
  Rºscan, not a range query:º
      $filter=... (rowkey eq '1' Rºorº rowkey eq '3')
-ºpartition scanº (partitionkey, non-key filter)
  ex: $filter=partitionkey eq 'sales' and lastname eq 'smith'
-ºtable scanº: partitionkey Rºnot used: very inefficientº.
  ex: $filter=lastname eq 'jones'
note: results with multiple entities are returned ºsorted inº
ºpartitionkey and rowkey orderº. to avoid re-sorting the entities
in the client, choose a rowkey that defines the most common sort
order.
Rºwarn:º these keys are 'string' values. to ensure that numeric
values sort correctly, they must be string-represented with fixed
length, padded with zeroes:
ex: 123 → '00000123'
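The zero-padding rule can be sketched in one line of Python; `width=8` is an arbitrary illustrative choice, large enough for the expected id range:

```python
# rowkeys sort lexicographically as strings: pad numeric ids so
# that string order matches numeric order.
def numeric_rowkey(n, width=8):
    return f"{n:0{width}d}"     # 123 -> '00000123'

# lexicographic sort of padded keys == numeric sort of the ids
keys = sorted(numeric_rowkey(n) for n in [123, 9, 1000])
```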
authorization in Azure storage
ºauthorize with shared keyº
ºevery requestº against the storage service must be authorized.
exception: public BLOB or container, or signed access.
authorizing a request:
- option 1:ºshared key authorization schemeº with the REST API.
  table service ver 2009-09-19+ uses the same signature string
  as previous versions of the table service.
- option 2: (not detailed here)ºusing a connection stringº
  including the authentication information required for the app
  to access data in an account at run time.
  connection strings can be configured to:
  - connect to the Azure storage emulator.
  - access an account.
  - access specified resources in Azure via a shared access
    signature (sas).
- authorized request required headers:
┌────────────────┬───────────────────────────────────────────────────
| http header    |
| -----------    |
| ºx-ms-dateº    | (alternatively the 'date' standard http header,
|                |  with x-ms-date taking preference)
|                | coordinated universal time (utc) request timestamp
|                | the storage serviceºensures that a request is noº
|                | ºolder than 15 minutesºby the time it reaches the
|                | service.
|                | - protects from replay attacks and others
|                |   (403 forbidden returned otherwise)
|                | - note: if 'x-ms-date' != "", construct the
|                |  ºsignatureºwith 'date' == ""
├────────────────┼───────────────────────────────────────────────────
| authorization  | if empty, the request is considered anonymous:
|                | only public-access BLOBs, or a
|                | container/BLOB/queue/table for which a shared
|                | access signature has been provided for delegated
|                | access.
|                |
|                | authorization="sharedkey(lite) $accountname:$signature"
|                |                                             ^^^^^^^^^
|                |          base64encode(hmacSHA256(request, account_key))
└────────────────┴───────────────────────────────────────────────────
ºsignature how-to:º
- how the string is built depends on the service version being
  authorized against, as well as the 'authorization scheme'.
  keep in mind:
  - the verb portion of the string is the uppercase http verb
    (GET, PUT, ...)
  - for shared key authorization for BLOB/queue/file, each header
    included in the signature string may appear only once.
    if any header is duplicated, the service returns status code
    400 (bad request).
  - all standard http values must be included in the string in the
    order shown in the signature format; headers may be empty if
    they are not specified as part of the request; in that case,
    only the new-line character is required.
  - all new-line characters (\n) shown are required within the
    signature string.
  - the signature string includes canonicalized headers and
    canonicalized resource strings.
- the signature string format for shared key against the table
  service doesn't change in any version and is slightly different
  from requests against the BLOB/queue service:
  - it does not include canonicalizedheaders.
  - the date headerºis never emptyºeven if 'x-ms-date' is set.
    'x-ms-date' and 'date' must be the same if both are set.
  - stringtosign = verb + "\n" +
                   content-md5 + "\n" +
                   content-type + "\n" +
                   date + "\n" +
                   canonicalizedresource;
- ver.2009-09-19+: all REST calls must include the
  'dataserviceversion' and 'maxdataserviceversion' headers.

ºestablishing a stored access policyº
a stored access policy:
- provides an additional level of control over service-level
  shared access signatures (sas) on the server side.
- allows you toºgroup shared access signaturesºand to provide
  additional restrictions for signatures bound by the policy.
- use-case: change start-time/expiry-time/permissions-for-signature,
  revoke already-issued permissions.
- supported by:
  - queues
  - tables
  - file shares     ← policy can be associated with a shared access
                      signature granting perm to:
                      - the share itself Bºorº
                      - files contained in the share
  - BLOB containers ← policy can be associated with a shared access
                      signature granting perm to:
                      - the container itself Bºorº
                      - BLOBs contained in the container
ºcreating/modifying a stored access policyº
-ºup to 5 access policiesºper container/table/queue.
- call the Bº"set acl"ºoperation for the resource.
  the request body specifies the terms of the access policy:
  ˂?xml version="1.0" encoding="utf-8"?˃
  ˂signedidentifiers˃
    ˂signedidentifier˃               ← corresponds 1←→1 signed policy
      ˂id˃unique-64-char-value˂/id˃  ← "custom id", 64 chars max
      ˂accesspolicy˃                 ← (optional) params
        ˂start˃start-time˂/start˃
        ˂expiry˃expiry-time˂/expiry˃
        ˂permission˃abbreviated-permission-list˂/permission˃
      ˂/accesspolicy˃
    ˂/signedidentifier˃
  ˂/signedidentifiers˃
  note: tableºentity-range-restrictionsº
  º(startpk, startrk, endpk, and endrk)º
  cannot be specified in the policy.
ºrevokeºa stored access policy by deleting it or renaming the
  signed identifier (breaking the association with existing
  signature/s):
- call the Bº"set acl"ºoperation again, passing in only the set
  of signed identifiers to maintain on the container
  (empty body to remove all).

º(cors) support for the Azure storage servicesº
- Azure storage services, ver.2013-08-15 onward
  (BLOB/table/queue/file services)
                          ^
                          ver 2015-02-21 onward
- cors allows a webapp under one domain to securely access
  resources in another domain, skipping the "same-origin policy"
  restriction.
- cors rules can be set individually for each storage service
  calling:
  - set BLOB service properties.
  - set file service properties.
  - set queue service properties.
  - set table service properties.
Bºnote:º cors is not an authentication mechanism.
Rºwarn:º cors is Rºnotº supported for premium storage accounts.
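The shared key signing step for the table service (the stringtosign format quoted earlier in this section) can be sketched with the Python standard library. The account key below is a made-up placeholder, not a real key:

```python
import base64
import hashlib
import hmac

# Sketch of shared key signing for the table service, following
# the stringtosign format above. The account key is a fake example.
def table_shared_key_signature(verb, content_md5, content_type,
                               date, canonicalized_resource,
                               account_key_b64):
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, date,
         canonicalized_resource])
    key = base64.b64decode(account_key_b64)   # key is base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    # base64 of the HMAC goes after "$accountname:" in the header
    return base64.b64encode(digest).decode()

sig = table_shared_key_signature(
    "GET", "", "application/json",
    "Mon, 27 Jun 2016 20:11:08 GMT",
    "/myaccount/mytable()",
    base64.b64encode(b"not-a-real-key").decode())
```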
table service REST API
ºtable service resourcesº
resources available through the REST API:
- ºstorage accountº:Oºparent namespaceºfor the table service.
    storage account 1 ←→ n tables
- ºtablesº
- ºentityº

ºquery timeout and paginationº
two types of query operations:
-ºquery tablesº  operation: returns the list of tables within the
  specified storage account. it may be filtered.
-ºquery entitiesºoperation: returns a set of entities from the
  specified table, filtered according to the request criteria.
ºlimits:º
- 1_000 max items at one time
- 5 seconds max execution time.
- a query must not cross a partition boundary.
- the total time allocated to the request for scheduling and
  processing the query is 30 seconds, including the 5 seconds
  for query execution.
Bºnoteº: if a limit is hit, the response headers provide
Oºcontinuation tokensºto resume the query at the next item in
the result set:

continuation token header           description
x-ms-continuation-nexttablename     returned in query tables ops.
                                    hash of next table name
x-ms-continuation-nextpartitionkey  returned in query entities ops.
                                    next partition key
x-ms-continuation-nextrowkey        returned in query entities ops.
                                    next row key (may be null)

ms .NET client library manual handling:
  1st) cast the result to a queryoperationresponse object.
  2nd) access the continuation token headers in the headers
       property of the instantiated object.
- subsequent "cloned" queries can "continue" the query by using
  the request headers:
  - nexttablename
  - nextpartitionkey
  - nextrowkey
Rºwarnº: for tx updates, the operation may have succeeded on the
server despite the (30 secs timeout) error being returned.

ºsample response headers and subsequent requestº
date: mon, 27 jun 2016 20:11:08 gmt
content-type: application/JSON;charset=utf-8
server: windows-Azure-table/1.0 microsoft-httpapi/2.0
cache-control: no-cache
x-ms-request-id: f9b2cd09-4dec-4570-b06d-4fa30179a58e
x-ms-version: 2015-12-11
content-length: ....
Bºx-ms-continuation-nextpartitionkey:º1!8!u21pdgg-
Bºx-ms-continuation-nextrowkey:º1!8!qmvuotk5
  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
next request would be like:
...?ºnextpartitionkeyº=1!8!u21pdgg-&ºnextrowkeyº=1!12!qmvumtg5oa--

ºquerying tables and entitiesº
Bºaddressing data-resources in queries follows the odata proto.spec.º
Oºbase_url="https://${storage_account}º
${base_url}/tables       ← list tables. to create/delete a table,
                           refer to the set of tables in the
                           specified storage account.
${base_url}/tables(${t}) ← return a single table
${base_url}/mytable()    ← query entities in a table.
                           to insert/update/delete an entity,
                           refer to that table directly within the
                           storage account. this basic syntax
                           refers to theºset of all entitiesºin
                           the named table.

ºSUPPORTED QUERY OPTIONSº
system       description
query option
-----------------------------------------------
$filter      15 max discrete comparisons
             eq gt ge lt le ne and not or
-----------------------------------------------
$top         return top n results
-----------------------------------------------
$select      initial subset. ver 2011-08-18 onward
-----------------------------------------------
☞ Bºquery parameters must be URL encoded:º
  - chars / ? : @ ⅋ = + , $ must be escaped.
  - ' must be represented as ''  Ex: o'clock → o''clock
Ex queries:
- ${base_url}/customers()?$top=10
- ${base_url}/customers(partitionkey='partition01',rowkey='r_key01')
              └───────────────primary key───────────────┘
  alt: specify the pk as part of $filter

ºCONSTRUCTING FILTER STRINGSº
☞ keep in mind:
- property name, operator, and constant value must be separated
  by URL-encoded spaces (%20)
- filter strings are case-sensitive.
- the constant value must be of the same data type as the
  property.
  note: be sure to check whether a property has been explicitly
  typed before assuming it is of a type other than string. if a
  property has been explicitly typed, its type is indicated within
  the response or returned entity; otherwise the string type
  applies.
- enclose string constants in single quotes.
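The encoding rules above (spaces as %20, single quotes inside constants doubled) can be sketched with `urllib.parse.quote`. The `safe` character set here is an assumption chosen to match the example queries in this section:

```python
from urllib.parse import quote

def odata_string(value):
    # single quotes inside constants are doubled: o'clock -> o''clock
    return "'" + value.replace("'", "''") + "'"

def encode_filter(expr):
    # percent-encode spaces and other reserved chars; keep single
    # quotes and parentheses literal, as in the examples below
    return quote(expr, safe="'()$=")

expr = f"partitionkey eq {odata_string('sales')} and age gt 30"
encoded = encode_filter(expr)
```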
- Example queries:
  ...tab01()?$filter=partitionkey%20eq%20'partition01'%20and%20rowkey%20eq%20'r_key01'
             └───────────────────── filter by pk ──────────────────────────────────┘
  ...tab01()?$filter=lastname%20eq%20'smith'%20and%20firstname%20eq%20'john'
             (wildcards are not supported; prefix matching is
              possible through comparisons)
  ...tab01()?$filter=age%20gt%2030
                              ^^
                              do not quote numeric properties
  ...tab01()?$filter=isactive%20eq%20true
                                     ^^^^
                                     boolean
  filtering on datetime properties:
  ...tab01()?$filter=activation%20eq%20datetime'2008-07-10t00:00:00z'
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                       datetime value format
  ...tab01()?$filter=guidvalue%20eq%20guid'a455c695-df98-5678-aaaa-81d3367e5a34'
                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                                      guid value format

BºINSERTING AND UPDATING ENTITIESº
- include with the request anºodata atom or odata JSON entityº
  specifying the properties and data for the entity.
  note:ºupdate-entity-operationºreplaces the current entity.
       ºmerge-entity-operation ºupdates properties without
        replacing the entity.
- Ex atom feed. Rºwarnº: supported up to ver.2015-12-11;
  JSON is required from that version onward.
  (m:type indicates the value type for a given property)
  ˂?xml version="1.0" encoding="utf-8" standalone="yes"?˃
  ˂entry xmlns:d="" xmlns:m=" a" xmlns=""˃
    ˂title /˃
    ˂author˃
      ˂name /˃
    ˂/author˃
    ˂id /˃
    ˂content type="application/xml"˃
      ˂m:properties˃  ← entity's property defs
        ˂d:address                      ˃mountain view˂/d:address˃
        ˂d:age     m:type="edm.int32"   ˃23˂/d:age˃
        ˂d:amntdue m:type="edm.double"  ˃200.23˂/d:amntdue˃
        ˂d:code    m:type="edm.guid"    ˃c9da6455-...˂/d:code˃
        ˂d:since   m:type="edm.datetime"˃2008-07-10t00:00:00˂/d:since˃
        ˂d:isactiv m:type="edm.boolean" ˃true˂/d:isactiv˃
        ˂d:ordernu m:type="edm.int64"   ˃255˂/d:ordernu˃
        ˂d:binary  m:type="edm.binary" m:null="true" /˃
       º˂d:partitionkey˃mypartitionkey˂/d:partitionkey˃º
       º˂d:rowkey˃myrowkey1˂/d:rowkey˃º
      ˂/m:properties˃
    ˂/content˃
  ˂/entry˃

ºexample JSON feedº
{
  "address":"mountain view",
  "age":23,
  "amntdue":200.23,
  "code@odata.type":"edm.guid",
  "code":"c9da6455-..",
  "since@odata.type":"edm.datetime",
  "since":"2008-07-10t00:00:00",
  "isactive":true,
  "ordernu@odata.type":"edm.int64",
  "ordernu":"255",
 º"partitionkey":"mypartitionkey",º
 º"rowkey":"myrowkey"º
}
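Closing out the query section: the continuation-token pagination described earlier (1,000-item pages, next-partition/row-key headers) can be sketched as a client loop. `query_entities` here is a made-up stand-in for the REST call, with a small fake table and page size:

```python
# Sketch of driving a paginated query with continuation tokens.
# query_entities() stands in for the REST call: it returns up to
# `page_size` items plus continuation keys (None when exhausted).
DATA = [("sales", f"{i:04d}") for i in range(25)]  # fake table

def query_entities(next_pk=None, next_rk=None, page_size=10):
    start = 0
    if next_rk is not None:   # resume at the continuation point
        start = next(i for i, (_, rk) in enumerate(DATA)
                     if rk == next_rk)
    page = DATA[start:start + page_size]
    more = start + page_size < len(DATA)
    token = DATA[start + page_size] if more else (None, None)
    return page, token[0], token[1]

results, pages = [], 0
pk = rk = None
while True:
    page, pk, rk = query_entities(pk, rk)
    results.extend(page)
    pages += 1
    if rk is None:            # no continuation token: query done
        break
```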
Cosmos DB
Cosmos DB overview
ºAzure Cosmos DBº
Globally distributed DDBB service native to Azure, providing a
high-performance database Bºregardless of the selected API orº
Bºdata modelº, offering multiple APIs and models (key-value,
column-family, document, graph)

ºcore functionalityº

ºglobal replicationº
- turnkey global distribution automatically replicates data
  to other Azure datacenters across the globe.

- consistency levels:
 strong    | reads are guaranteed to see a write only after
           | the write is fully committed across all
           | replicas.
           | a write operation is performed on the primary
           | database and replicated to the replica
           | instances. the write is committed (and visible)
           | on the primary only after it has been committed
           | and confirmed by all replicas.
 bounded   | similar to the strong level
 staleness | but you can configure how stale
           | documents can be within replicas.
           | staleness refers to the quantity
           | of time (or the version count) a
           | replica document can be behind the
           | primary document.
 session   | it guarantees that all read/write
           | operations are consistent within a user
           | session. within user session, all
           | reads and writes are monotonic and
           | guaranteed to be consistent across
           | primary and replica instances.
 consistent| this level has loose consistency but
 prefix    | guarantees that when updates show up in
           | replicas, they will show up in the
           | correct order (that is, as prefixes of
           | other updates) without any gaps.
 eventual: | writes are readable immediately, and replicas
           | are eventually consistent with the primary.
           | commits any write operation against the
           | primary immediately. replica
           | transactions are asynchronously handled
           | and will eventually (over time) be
           | consistent with the primary. this tier
           | has the best performance, because the
           | primary database does not need to wait
           | for replicas to commit to finalize its
           | transactions.

ºchoose the right consistency level for your applicationº
ºSQL API and table APIº
- for manyºreal-world scenarios, session consistency is optimalº.
- ifºstronger consistency than sessionºis required, while keeping
 ºsingle-digit-millisecond latency for writesº, bounded staleness
  is recommended.
- for less strict consistency guarantees than session,
  consistent-prefix is recommended.
- if the highest availability and lowest latency are required, use
  the eventual consistency level.

ºconsistency guarantees in practiceº
- stronger consistency guarantees may be obtained in practice.
-Bºconsistency guarantees for a read operation correspond to        º
 Bºthe freshness and ordering of the database state that you requestº
 Bºread-consistency is tied to the ordering and propagation ofº
 Bºthe write/update operations.º

 - inºbounded-stalenessº, Cosmos DB guarantees that
  ºclients always read the value of a previous writeº,
   with aºlag bounded by the staleness windowº.
 -ºstrong ==  staleness window of zeroº:
   clients are guaranteed to read latest committed value
   of the write operation.
 - for remaining three consistency levels, staleness window
   is largely dependent on app workload. for example,
Bº if there are no write operations on the database, a readº
Bº operation with eventual, session, or consistent prefix  º
Bº consistency levels is likely to yield the same results  º
Bº as a read operation with strong consistency level.      º

   you can find out the probability that clients get strong
   and consistent reads for workloads by looking at the
 Bºprobabilistic bounded staleness (pbs)º metric
   exposed in the BºAzure portalº:
    - it shows how "eventual" the configured eventual consistency
      actually is, providing insights into how often clients get a
      stronger consistency than the currently configured level.
    ºprobability (measured in milliseconds) of getting   º
    ºstrongly consistent reads for a combination of writeº
    ºand read regionsº

ºfive consistency modelsº natively supported by the a.Cosmos DBºSQL APIº
(SQL API is default API):
- native support for wire protocol-compatible APIs for
  popular databases is also provided including
 ºmongodb, cassandra, gremlin, and Azure table storageº.
  Rºwarn:º these databases don't offer precisely defined consistency
    models or sla-backed guarantees for consistency levels.
    they typically provide only a subset of the five consistency
    models offered by a.Cosmos DB.
- for SQL API|gremlin API|table API default consistency level
  configured on theºa.Cosmos DB accountºis used.

comparative cassandra vs Cosmos DB:
cassandra 4.x       Cosmos DB           Cosmos DB
                    (multi-region)      (single region)
one, two, three     consistent prefix   consistent prefix
local_one           consistent prefix   consistent prefix
quorum, all, serial bounded stale.(def) strong
                    strong in priv.prev
local_quorum        bounded staleness   strong
local_serial        bounded staleness   strong

comparative mongodb 3.4 vs Cosmos DB

mongodb 3.4         Cosmos DB           Cosmos DB
                    (multi-region)      (single region)
linearizable        strong              strong
majority            bounded staleness   strong
local               consistent prefix   consistent prefix
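the two mapping tables above can be expressed as lookup dictionaries.
illustrative python sketch (names are informal, values are taken
straight from the tables):

```python
# cassandra/mongodb level -> (cosmos multi-region, cosmos single-region)
CASSANDRA_TO_COSMOS = {
    "one":          ("consistent prefix", "consistent prefix"),
    "two":          ("consistent prefix", "consistent prefix"),
    "three":        ("consistent prefix", "consistent prefix"),
    "local_one":    ("consistent prefix", "consistent prefix"),
    "quorum":       ("bounded staleness", "strong"),
    "all":          ("bounded staleness", "strong"),
    "serial":       ("bounded staleness", "strong"),
    "local_quorum": ("bounded staleness", "strong"),
    "local_serial": ("bounded staleness", "strong"),
}

MONGODB_TO_COSMOS = {
    "linearizable": ("strong",            "strong"),
    "majority":     ("bounded staleness", "strong"),
    "local":        ("consistent prefix", "consistent prefix"),
}

def cosmos_level(source_level, table, multi_region=True):
    """Translate a source-database consistency level to a Cosmos level."""
    multi, single = table[source_level.lower()]
    return multi if multi_region else single
```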

ºAzure Cosmos DB supported APIsº
Bºthe underlying data structure in Azure Cosmos DB is a data modelº
Bºbased on atom record sequences that enables Azure Cosmos DB to º
Bºsupport multiple data models.                                   º
because of the flexible nature of atom record sequences,
Azure Cosmos DB will be able to support many more models and
APIs over time.

-ºmongodb APIº acts as a massively scalable mongodb service.
  it is compatible with existing mongodb libraries, drivers, tools, and apps.

-ºtable APIº acts as a key-value database service with premium
  capabilities (automatic indexing, guaranteed low latency,
  global distribution).

-ºgremlin APIº acts as a fully managed, horizontally scalable graph
  database service to build and run applications that work with
  highly connected datasets supporting open graph APIs
 (based on apache tinkerpop spec, apache gremlin).

-ºapache cassandra APIº acts as a globally distributed apache
  cassandra service compatible with existing apache cassandra
  libraries, drivers, tools, and applications.

-ºSQL APIº is a JS+JSON native API providing query capabilities rooted
  in the familiar SQL query language.
  it supports the execution of javascript logic within the database
  in the form of stored procedures, triggers, and user-defined functions.

ºmigrating from nosqlº
cassandra API supports cqlv4
mongodb   API supports mongodb v5.

for successful migration keep in mind:
- do not write custom code. use native tools (cassandra shell,
  mongodump, and mongoexport).

- Cosmos DB containers should be allocated prior to the
  migration with the appropriate throughput levels set.
  many of the tools will create containers for you with
  default settings that Rºare not idealº.

- prior to migrating, you should increase the container's
  throughput to at least 1,000 request units (rus) per second
  so that import tools are not throttled.
  throughput can be reverted back after import is complete.
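the bump-then-revert pattern above can be sketched as follows.
illustrative python only: `container` is an in-memory stand-in,
not an azure-cosmos SDK type, and `with_import_throughput` is a
made-up helper name:

```python
MIN_IMPORT_RUS = 1000  # >= 1,000 RU/s so import tools are not throttled

def with_import_throughput(container, import_fn):
    """Raise throughput for a bulk import, then revert it afterwards."""
    original = container["throughput"]
    container["throughput"] = max(original, MIN_IMPORT_RUS)
    try:
        import_fn(container)            # run the migration/import step
    finally:
        container["throughput"] = original  # revert after import completes
    return container
```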


  account                                          ºRESOURCE HIERARCHYº
  └→n database
      └→n collection container == collection|graph|table *1
          └ n ──┬─────────┬────────┬──────────┬─────────────────┐
             ┌──┼───┐ ┌───↓────────↓──────────↓────────┐    ┌───↓─────┐
             │items │ │sprocs triggers   user─defined  │    │conflicts│
             └──────┘ │                  functions     │    └─────────┘
  *1:you can also scale workloads across collections,
    if you have a workload that needs to be partitioned,
    you can scale that workload by distributing its
    associated documents across multiple collections.
    ºcollection scalability typesº
    (can be defined at creation in Azure portal)
   - fixed    : max.limit of 10 gb and 10_000 ru/s throughput.
   - unlimited: to create it, you must specify a ºpartition keyº
                and a minimum throughput of 1_000 ru/s.

                (logical)         (physical)          logical
                collection 1 ←→ n partitions  1 ←→ n  partition
                                  └───┬────┘          ^^^^^^^^^
                                      │               data store
                                      │               associated with
                                      │               partition key val
             - fixed amount of reserved solid-state drive (ssd)
               combined with a variable amount of compute resources
               (cpu and memory), replicated for high availability.
               it is an internal concept of Azure Cosmos DB.
               they are transient.
             - number of physical partitions is determined by Cosmos DB
               based on the storage size and throughput provisioned
               for a container or set of containers.
               (similar to sharding pattern)
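the sharding idea behind logical/physical partitions can be sketched
like this. illustrative python only: the hash function and the
partition count are assumptions here, Cosmos DB manages the real
mapping internally:

```python
import hashlib

def physical_partition(partition_key_value, n_partitions):
    """All documents with the same partition key value land in the
    same partition (deterministic hash of the key value)."""
    digest = hashlib.md5(str(partition_key_value).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_partitions
```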

  account    it is associated with a set of databases and a fixed amount
             of large object (BLOB) storage for attachments.
            ºone or more database accounts can be created per Azure subscription.º

  database:  logicalºcontainer of document storage partitioned across collectionsº.
             it is also aºusers containerº

  collection container of JSON documents and associated javascript
  container  application logic.  collections can span one or more
             partitions or servers and can scale to handle practically
             unlimited volumes of storage or throughput.

  document   user-defined JSON content. by default, no schema needs to
  (item)     be defined nor do secondary indexes need to be provided for
             all the documents added to a collection.

  stored     application logic written in JS and registered with a
  procedure  collection and executed within the database engine as a
  (sproc)    transaction.

  trigger    application logic written in JS executed
             before or after either an insert/replace/delete op

  user       application logic written in JS.  enabling developers to model a
  defined    custom query operator and thereby extend the core SQL API query
  function   language.

BºCollections in Cosmos DB SQL APIº

  └☞ºDDBBs are essentially containers for collections.º
    collections: place for individual documents.
                 it automatically grows and shrinks.
  └ each collection is assigned a maximum ºthroughput valueº
  └  alternatively, you can assign the throughput
    at the database level and share the throughput values
    among all collections.
  └ if a set of documents needs throughput beyond the
    limits of an individual collection, they can be
    distributed among multiple collections.
    each collection has its own distinct throughput level.

  └ Cosmos DB SQL API includes a client-side partition
    resolver that allows you to manage transactions and
    point them in code to the correct partition based on
    a partition key field.

  └ºcollection typesº
    (can be defined at creation in Azure portal)
    - fixed    : max.limit of 10 gb and 10_000 ru/s throughput.
    - unlimited: to create it, you must specify a Bºpartition keyº
                 and a minimum throughput of 1_000 ru/s.
                 (otherwise it will not automatically scale)
    - to migrate the data fixed → unlimited, you need to use the
      data migration tool or the change feed library.

    -  Cosmos DB containers can also be configured to share throughput
       among the containers in a database.
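a minimal sketch of the client-side partition-resolver idea mentioned
above. illustrative python, not the SDK's resolver: the toy hash and
the collection names are made up:

```python
def resolve_collection(doc, key_field, collections):
    """Route a document to one of several collections based on a
    partition key field (stable: same key -> same collection)."""
    value = doc[key_field]
    index = sum(value.encode("utf-8")) % len(collections)  # toy hash
    return collections[index]
```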

C# how-to:ºmanage collections and documents by using the microsoft .NET SDKº
  Cosmos DB SQL API pre-setup:
  -ºmicrosoft.Azure.documentdb.coreº nuget package

  using microsoft.Azure.documents;               ← imports
  using microsoft.Azure.documents.client;
  ...
  documentclient docclient01 = new documentclient(
      new URI("[endpoint]"), "[key]");
               ^ Cosmos DB account endpoint
  - any resource reference in the SDK needs a URI.
    urifactory static helper methods will be used
    for common Cosmos DB resources. ex. collection URI:
    URI collectionuri = urifactory.
        createDocumentCollectionURI(
            databasename, collectionname);

  var document = new {                           // ← any c# type allowed in SDK
      firstname = "alex",
      lastname = "leh"
  };
  await docclient01.
      createdocumentasync(collectionuri, document);  // ← insert

  to query the DB:
  // alt 1: sqlqueryspec:
  var query = client.createdocumentquery˂family˃(
      collectionuri,
      new sqlqueryspec() {                       // ← perform SQL query
          querytext = "select * from f where (f.surname = @lastname)",
          parameters = new sqlparametercollection() {
              new sqlparameter("@lastname", "andt")
          }
      },
      defaultoptions);
  var families = query.tolist();                 // ← result

  // alt 2: c# language-integrated query (linq)
  // linq expressions will be automatically translated
  // into the appropriate SQL query:
  var query = client.createdocumentquery˂family˃(collectionuri)
      .where (d =˃ d.surname == "andt")          // ← equality, not assignment
      .select(d =˃ new { name = d.surname, city = d.address.city });
  var families = query.tolist();
SQL ddbb
Azure SQL overview
- supports relational data, JSON, spatial, and xml.
- columnstore indexes for extreme analytic analysis and reporting
- in-memory oltp for extreme transactional processing.
- dynamically scalable performance within
  two differentºpurchasing models:º
  - ºvcore-basedº: choose compute (virtual cores, memory) and
                   storage independently.
  - ºdtu-basedº  : bundled compute+storage measure
                   (database transaction units).
-Bºcode base shared with microsoft SQL server database engineº
 ( newest capabilities of SQL server released first to
   SQL database, then to SQL server itself)

Gº└ SQL server on VMs (IaaS):º
    - Cons: patching, backups, ... are manual.
      Pros: full control over the database engine
            (switch recovery model to simple|bulk-logged,
             pause/start engine at will, ...)
    - Price options include:
      -ºpay-as-you-goº:
        - SQL server license included with SQL server image
      -ºreuse existing licenseº (bring-your-own-license).

Gº└ hosted service (PaaS) Azure SQL database:º
    - fully-managed based on latest stable
      enterprise edition of SQL server.
    - built on standardized hardware and software
      owned, hosted, and maintained by microsoft.
    - ºpay-as-you-goº
    - options to scale up or out
    - additional features not available in SQL server.
      (built-in intelligence and management)

    Bº└  logical serverº:
        - databases are managed by aºlogical serverº.
        - most database-scoped features of SQL server are available.
        - deployed as a single database, or in an elastic
          database pool sharing resources.

    Bº└  managed instancesº:
        (part of a collection of ...)
        - shared resources for databases  and additional
          instance-scoped features.
        - support for☞ºdatabase migration from on-premisesº.
        - all of the paas benefits of Azure SQL database but
          adds capabilities of SQL on VMs:
          - native virtual network (VNET)
          - nearº100% compatibility with on-premises SQL serverº.

   SQL server on VM        Azure SQL database     Azure SQL database
                           (managed instance)     (logical server)
   full control of         high on-premises       most common SQL server
   SQL server engine       compatibility          features available.

   up to 99.95%            99.99% availa.         99.99% availa.
   availability.           guaranteed             guaranteed

   full parity with        built-in backups,      built-in backups,
   on-premises             patching, recovery.    patching, recovery.
   SQL server.

   fixed, well-known       latest stable          latest stable
   engine version.         engine                 engine

   easy migration from     easy migration from    resources (cpu/storage)
   on-premises.            SQL server             assigned individually to
                                                  each DDBB.
   private ip address      private ip address
   within Azure vnet.      within Azure vnet.

                           built-in advanced      built-in advanced
                           intelligence⅋security  intelligence⅋security

                           online (cpu/storage)   online (cpu/storage)
                           change                 change

   Can share VM resources
   with application code

   manually manage backups minimal number of       migration from
   and patches             SQL-server features     SQL server might be hard
                           not available.
   manually implement                              some SQL server features
   HA solution.                                    are not available.

                           compatibility with      compatibility with
   downtime while changing SQL server version can  SQL server version can
   resources(cpu/storage)  be achieved only using  be achieved only using
                           database compatibility  database compatibility
                           levels.                 levels.

                           no guaranteed exact     no guaranteed exact
                           maintenance time        maintenance time
                           (nearly transparent)    (nearly transparent)

                                                   private ip address
                                                   cannot be assigned
                                                   (firewall rules
                                                    still available).

BºDDBB: Transactionally Consistent copy HOW-TOº
Bº└ copy target params include:º
    - same/different destination server
    - same/different service tier
    - same/different compute size

  ☞ NOTE: automated database backups are used when creating a DDBB copy.

  Gº└ Copying to same logical serverº:
      - same logins persists.
      - ºSecurity principal used to run the copy becomes theº
        ºowner of the new DDBBº
      - all DDBB users, user-permissions, and user-security-identifiers
        (SIDs) are copied to DDBB copy.

  Gº└ Copying to different logical serverº:
      - ºSecurity principal used to run the copy becomes theº
        ºowner of the new DDBBº and is assigned a
        new security identifier (SID).
      -  if using contained-in-DDBB-users for data access,
         ensure that both primary and secondary DDBBs always
         have the same user credentials, so that access with
         the same credentials keeps working.
      -  if using A.AD, managing credentials in the copy is
         not needed, Rºhowever, login-based access might not workº
       Rºin the new copy because logins do not exist on the destination serverº.
         Only the login initiating the copy (new owner of the copy) can work
         before remapping users. To resolve logins:
         - after copy is online, use "alter user" statement to remap
           the users from the new database to logins on the destination
         - all users in the new database retain permissions.

Bº└ DDBB copy from A.Portalº
     portal → ... open database page → click "copy"

Bº└ DDBB copy with PowerShellº
     $ new-Azurermsqldatabasecopy `
         -resourcegroupname "myresourcegroup" `
         -servername $sourceserver `
         -databasename "mysampledatabase" `
         -copyresourcegroupname "myresourcegroup" `
         -copyservername $targetserver `
         -copydatabasename "copyofmysampledatabase"
Entity Framework "CRUD"
- object-relational mapperºlibrary for .NETº
- Reduces the relational←→OO impedance mismatch
- goal: - enable developers interact with relational databases
          using strongly-typed .NET objects representing
          the application's domain.
        - eliminate repeated "plumbing" code.

Bºentity framework core vs entity frameworkº
- Entity Framework core (EF core) is a rewrite of
  Entity Framework ("full") targeting .NET standard.
  ☞ recommended over the "full" non-core old one.

Bºentity framework providersº (2020-03)
  - SQL server    - mySQL/mariaDB        - DB2
  - SQLite        - myCat server         - Informix
  - PostgreSQL    - SQL server compact   - Oracle
  - firebird      - MS access
  also:Gºin-memory provider: useful to test componentsº

  - SQL server provider:
    OOSS project, part of Entity Framework Core

  - MySQL:
    - MySQL official plus third-party groups:
      - pomelo.entityframeworkcore.mysql

  - PostgreSQL:
    - multiple third-party libraries:
      - npgsql.entityframeworkcore.postgresql


BºModeling a DDBB with EF-coreº
  └ conventions:
    - based on the shapes of entity model(classes).
    - providers might also enable a specific configuration

  └ 1) create model mapping:
       (entities,relationships) ←→ DDBB tables
       → Initial DDBB table:
       · BLOGS TABLE
       · -------------------------------------
       ·ºblogidºOºURLº         Qºdescriptionº
       · ------   ------------   -----------
       · 1        /first-post    first post ...
       · 2        /second-post   null
       └ → Initial OºPOCOº: (No Framework Related)
         · public class OºBlogº {
         ·   public int    ºblogIDº       { get; set; }
         ·   public string OºURLº         { get; set; }
         ·   public string Qºdescriptionº { get; set; }
         · }
         └ → Initial In-memory "Database":
           · public class blogDataBase {
           ·      public IEnumerable˂Blog˃  ← GºDBSet˂˃º methods allows to query the DDBB using
           ·           blogs { get; set; }   ºlanguage-integrated queryº(LINQ), by implementing
           · }                                IEnumerable˂˃ interface, giving access to many of
           ·                                  the existing LINQ queries.
           └ → Mark classes as models of database,
               by including a type metadata that
               entity framework can interpret:

               └ →Bºalt 1: fluent APIº
               ·    - ☞ no need to modify entity classes.
               ·    - ºhighest precedenceº overriding conventions and
               ·       data annotations.
               ·    - override "onModelCreating" in derived context class
               ·    - use Bº"modelBuilder API"º to configure the model.
               ·      protected override void
               ·      onModelCreating(BºmodelBuilder MBº)    ← ☞ GºConventionº: types mentioned in
               ·      {                                            "onModelCreating" are included
               ·        BºMBº.entity˂OºBlogº˃()                    in the model
               ·             .haskey    (c =˃ c.ºblogIDº)
               ·             .property  (b =˃ b.OºURLº)
               ·             .isrequired()
               ·             .property  (b =˃ b.Qºdescriptionº);
               ·      }
               └ →Bºalt 2: data annotationsº
                    - higher precedence over conventions. ex:
                      public class blog {
                          [key]
                          public int ºblogIDº          { get; set; }
                          [required]
                          public string OºURLº         { get; set; }
                          public string Qºdescriptionº { get; set; }
                      }

Bº implementationº
Bº└ primary interaction point with frameworkº
     (also known as "context class"). used to:
     - write, then execute queries.
     - convert query-results to entity instances.
     - track and persist-back changes made objects.
     - bind entities in memory to UI controls.

    - create a model (Previous section)

Bº└ recommended way to work:º
    - define a Gºderived class from DBContextº
     ºexposing DBSet-properties representing collectionsº
      of entities in the context.
      public class blogContext : GºDBContextº {
        public GºDBSetºOº˂blog˃º                       ← ☞ GºConventionº, types exposed in
               blogs { get; set; }                         DBSet props are included in
      }                                                    the model

   ☞ GºConventionº: any types found recursively exploring
                 the navigation properties of discovered
                 types are also included in the model.

BºQuerying DDBBsº
  - Ex: load all data from table:
    list˂Blog˃ allBlogs =
        context.Gºblogsº.tolist();             ← get all table

    IEnumerable˂blog˃ someblogs =
        context.Gºblogsº.where                 ← filter
           (b =˃ b.OºURLº.contains("dotnet"))

    Blog blog01 = context.Gºblogsº.single      ← get single instance
           (b =˃ b.ºblogidº == 1);               matching a filter

  -Bºwhen calling LINQ operators, Entity Framework builds up an in-memoryº
   Bºrepresentation of the queryº.
     - query is sent to the database on-demand, when results are
       being consumed. (iterating result, calling tolist,toarray,
       single, or count).
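the deferred-execution behaviour described above can be sketched in a
language-neutral way. illustrative python only: `Query` here is a
made-up class, not an Entity Framework type:

```python
class Query:
    """Builds an in-memory representation of a query; nothing
    'hits the database' until results are consumed."""
    def __init__(self, source):
        self._source = source
        self._ops = []
        self.executed = False

    def where(self, predicate):
        self._ops.append(("where", predicate))
        return self  # still only an in-memory representation

    def __iter__(self):
        self.executed = True  # query is "sent" on first consumption
        rows = self._source
        for op, f in self._ops:
            if op == "where":
                rows = [r for r in rows if f(r)]
        return iter(rows)
```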

Bºdata binding the results of a query to a uiº
  - EF protects from  SQL injection.
  - EFRºdoes not validate inputº.
BLOB storage
BLOB overview
- optimized for storingºmassive amounts of UNstructured dataº
  (files, logs, backups, ...).
- object access via (http/https) with A.Storage REST API,
  cli/PowerShell, client library (.NET, Java, node.JS,
  python, go, php, and ruby).

-ºA.DATA LAKE Storage gen2º
  - Microsoft's enterprise big data analytics solution for cloud.
  - It offers a hierarchical file system as well as the
    advantages of BLOB storage (low-cost, tiered storage,
    HA, strong consistency, disaster recovery).

BºBLOB storage resources typesº

  │storage│ 1 ←-----→ N   │container│   1 ←-------------→ N  │BLOB│
  │account│                ^                                  ^
   ^                       │                                  │
 - unique namespace        - names are lowercase              │
   in Azure for data       - organize set of BLOBs            │
   endpoint for stor.        in hierachies                    │
   account will be:          "a la file─system"               │
   https://${sto_acct}                  │
  GºTypesº  ──────────────────────────────────────────────────┘
-GºBLOCK BLOBSº:
  - text+binary data, up to ~4.7 TB,          ← In emulator BLOBs
-GºAPPEND BLOBSº:                             are limited to 2 gb max.
  - made up of blocks (like block BLOBs)      │
  - optimized for append operations.          │
    (logging,..)                              │
-GºPAGE BLOBSº:                               │
  - optimized random access up to 8 TB.   ←───┘
  - Used to store Virtual HD(VHD) files
    in VMs.

- all BLOB types reflect committed changes immediately.
- each version of the BLOB has aºunique "etag"º,
  that can be used to ensure changes are applied only to
  a specific version of the BLOB.
-ºall BLOBs can be leased for exclusive write accessº.
  only calls that include the current lease ID will
  be allowed to modify the (block in block BLOBs) BLOB.
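the etag check can be sketched with an in-memory stand-in (illustrative
python, not the storage REST API): a write succeeds only when the
caller presents the etag of the version it read:

```python
import uuid

class Blob:
    def __init__(self, data=b""):
        self.data = data
        self.etag = uuid.uuid4().hex   # every version gets a unique etag

    def write(self, data, if_match):
        """Conditional write: fail if the blob changed since it was read."""
        if if_match != self.etag:
            raise RuntimeError("412 precondition failed: etag mismatch")
        self.data = data
        self.etag = uuid.uuid4().hex   # new version -> new etag
        return self.etag
```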

BºMoving data to A.Storage.º
Bº└ storage data movement .NET libraryº:
    - move data between A. storage services.
Bº└ azcopyº:
    - .NET cli tool
    ─ copy  to/from:  │storage│ ←─→ │container│ ←─→ │BLOB│ ←→ │File-share│


    $ azcopy /source:$source /dest:$destination [options]  ← Copy BLOB

    $ azcopy                                               ← Download BLOB
          /dest:c:\myfolder /sourcekey:key /pattern:"abc.txt"
                                         - match prefix
                                         - if absent, download all BLOBs
                                           (add /s flag)
                                         ☞ remember, no folder hierarchy exists.
                                           entire BLOB-path constitutes the name.

    $ azcopy \                                           ← copy BLOBs
      /source:https://myaccount.BLOB.${CORE}/container01   from container01
        /dest:https://myaccount.BLOB.${CORE}/container02     to container02
      /sourcekey:key /destkey:key \

    $ azcopy                                             ← Copy BLOB/s
      /source:https://account01.BLOB.${CORE}/containerA    from account01
        /dest:https://account02.BLOB.${CORE}/containerB      to account02
      /sourcekey:key1 /destkey:key2
    $ azcopy                                             ← Copy BLOB/s
      /source:https://account01.file.${CORE}/myfileshare/  from file-share
        /dest:https://account02.BLOB.${CORE}/mycontainer/    to BLOB-container
      /sourcekey:key1 /destkey:key2

    - NOTE: azcopy is done asynchronously by default.
      "/synccopy" option ensures that the copy
      operation gets a consistent speed by
      downloading the BLOBs to copy from the
      specified source to local memory and
      then uploading them to the BLOB storage
Bº└ A.Data factoryº:
    ─ copy data to/from │BLOB│ by using account-key,
      shared-access-signature, service-principal or managed identities
      for A.resource authentication.
Bº└ BLOBfuseº: linux vfs driver for BLOB storage.

    │ On premise │ → │SSDs│ → Microsoft → │ Storage │
    │ Data       │    ^        Upload     │ Account │
                      by Microsoft
    │ Storage │ → Microsoft → │ HDs │ → │ On premise │
    │ Account │   Download      ^       │ Data       │
                              by Client

Gº└ hot  storage:º
    - optimized for   frequently accessed data.
Gº└ cool storage:º
    - optimized for infrequently accessed data.
    - accessed and stored for at least 30 days.
      (short-term backup/recovery datasets)
    - must tolerate slightly lower availability
      but still requires high durability and similar
      time-to-access and throughput as hot.
    - slightly lower availability SLA and
      lower storage cost and Rºhigher access costsº
Gº└ archive storage:º
    - optimized for rarely accessed data, stored for
      at least 180 days, with flexible latency
      requirements ("hours").
    - lowest storage cost and Rºhighest access costsº
    Rºonly available at BLOB (vs storage account) levelº
    - data here is offline and cannot be read, copied,
      overwritten, modified or "snapshotted" (metadata is
      online and can be read). change to another tier to
      make use of it. "rehydration" can take up to 15 hours.
      large BLOB sizes are recommended for optimal performance.

Bº└ BLOB storage accountº
Bº└ general purpose v2 (gpv2) accountº
    - new pricing structure for BLOBs, files,
      queues, and other new storage features.
    RºWARNº: some workloads can be more expensive
             on gpv2 than gpv1. Check before switching.
    - both account types above allow specifying the
      default storage tier (hot or cool) at account level.

☞ to manage and optimize costs it's helpful to organize data
  based on attributes like:
  - frequency-of-access
  - planned retention period.
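the tier guidance above can be encoded as a simple picker. illustrative
python only: the 30-day and 180-day thresholds come from the text, the
rule itself is a simplification:

```python
def pick_tier(days_between_accesses, retention_days, latency_hours_ok=False):
    """Pick hot/cool/archive from access frequency and planned retention."""
    if (days_between_accesses >= 180 and retention_days >= 180
            and latency_hours_ok):
        return "archive"   # lowest storage cost, highest access cost, offline
    if days_between_accesses >= 30 and retention_days >= 30:
        return "cool"      # infrequently accessed, >= 30 days
    return "hot"           # frequently accessed data
```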

Bºblock BLOBsº
  - efficient upload of large BLOBs.
  - each block is identified by "block id".
  - Block Committing:
    - Each commit creates a new BLOB version.
    - Blocks must be committed (after upload) to become part
      of the BLOB version. Until committed|discarded,
      block-status is "uncommitted".
    - commit-list order (vs upload-time order) determines
      the real order of blocks in the BLOB.
    - uncommitted blocks can be replaced by new blocks without
      affecting the BLOB version.
    - uncommitted blocks expire after 1 week, then are discarded.
    - Blocks not included in a commit are automatically discarded.
    - commit-lists with repeated block-ids will insert the same block
      at different offsets in the final BLOB.
    - the commit operation fails if any commit-listed block is not found.
    - committing overwrites the BLOB's existing properties and metadata.

  - BLOB Limits:
    - blocks can be of different size.
      up to 100 MB max    per Block  (up to 4 MB using REST API ˂= 2016-05-31)
    - Up to 50_000 blocks per BLOB.
    - Up to ~4.75 TB total BLOB size.

  - Upload Limits:
    - BLOBs less than 256 MB (64 MB ˂2016-05-31),
      can be uploaded  with a single write operation.
      - Storage clients default to 128 MB max single upload,
        tunable via BLOBRequestOptions.singleBLOBUploadThresholdInBytes
        (larger BLOBs require to split file into blocks)
      - BLOBRequestOptions.ParallelOperationThreadCount
        tune the number of parallel threads/uploading-blocks
      - Up to 100_000 uncommitted blocks per BLOB,
        Total size of uncommitted blocks ˂= 200_000? MB.

  - optional md5-hash for block-verification available in API.

  -ºBlock IDsº: equal-length strings defined in the client.
    - usually base-64 encoding is used to normalize strings into
      equal lengths (pre-encoded string must be ˂= 64 bytes).

  - Writing a block to a non-existing BLOB creates a new zero-length
    BLOB with uncommitted blocks.
    - Discarded automatically after 1 week if no commit is done.
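the commit semantics above can be sketched with an in-memory model.
illustrative python: `put_block`/`put_block_list` mirror the REST
operation names, but this is not the real API:

```python
class BlockBlob:
    """Toy model: blocks are staged uncommitted; the commit list
    (not upload order) fixes the final content."""
    def __init__(self):
        self.uncommitted = {}
        self.committed = b""

    def put_block(self, block_id, data):
        self.uncommitted[block_id] = data   # staged, not yet visible

    def put_block_list(self, block_ids):
        missing = [b for b in block_ids if b not in self.uncommitted]
        if missing:                          # fail if any listed block absent
            raise KeyError(f"commit failed, unknown block(s): {missing}")
        self.committed = b"".join(self.uncommitted[b] for b in block_ids)
        self.uncommitted.clear()             # unlisted blocks are discarded
```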

Bºpage BLOBsº
  - collection of 512-byte pages optimized for random read/write.
  - Commit is automatic on page-write.
  - maximum BLOB size is set at BLOB creation.
  - writes/updates are done by uploading 1+ pages at
    512-byte-aligned offsets.
  - a single write is limited to 4 MB maximum. (4 x 1024 x 2 pages)
  - max size: 8 TB.
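a sketch of the page-write constraints (illustrative python; assumes
the 512-byte alignment and 4 MB per-write limit stated above):

```python
PAGE = 512                   # page blobs are collections of 512-byte pages
MAX_WRITE = 4 * 1024 * 1024  # a single write covers at most 4 MB

def valid_page_write(offset, length):
    """Offset and length must be 512-byte aligned; length <= 4 MB."""
    return (offset % PAGE == 0 and length % PAGE == 0
            and 0 < length <= MAX_WRITE)
```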

Bºappend BLOBsº
  - Comprised of blocks optimized for append-operations
    adding to the end of the BLOB only.
  -Rºupdating/deleting existing blocks is not supportedº.
  - unlike a block BLOB, block-ids are not exposed.
  - Limits:
    - each block can be of different size
      up to 4 MB          per Block
      up to 50_000 blocks per BLOB
      Up to  =~ 195 GB

Bºshared access Sign.(SAS)º
  - BºSAS: URI granting restricted access rights to containers,º
      BºBLOBs, queues, and tables for a specific time interval,º
      Bºincorporatingº Gºall grant-information necessaryº
        -Gºvalidity intervalº.
        -Gºpermissions grantedº.
        -Gºtargeted (shared) resourceº.
        -Gºsignature(to be validatedº
         Gºby storage services)º.

      EX: SAS URI providing read/write permissions to a BLOB.
      $base_url/sasBLOB.txt?           ← BLOB URI (https highly recommended)
          sv=2012-02-12⅋               ← storage services version
          st=2013-04-29t22%3a18%3a26z⅋ ← start      time (iso-8061 format)
                                                         (empty == "now")
          se=2013-04-30t02%3a23%3a26z⅋ ← expiration time
          sr=b⅋                        ← resource type = BLOB.
          sp=rw⅋                       ← permissions granted
          sig=base64(sha256-hmac(message_string, shared_key)) ← signature

     ºby providing a client with a SAS, it canº
     ºaccess the targeted resourceº
    BºWITHOUT sharing the storage-account key with itº
      └ Valet key pattern:
        lightweight service authenticates the client as needed,
        then generates an SAS.
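the signature step above can be sketched as: HMAC-SHA256 over a
canonical "string to sign" (built from the fields in the URI), keyed
with the base64-decoded account key, then base64-encoded. illustrative
python: the string-to-sign layout in the test is simplified, check the
storage REST docs for the exact canonical format:

```python
import base64
import hashlib
import hmac

def sign_sas(string_to_sign, account_key_b64):
    """HMAC-SHA256(string_to_sign) keyed with the decoded account key."""
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")  # the sig= value
```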

  - BºSAS stored access policiesº
    - SAS can Gºtake one of two formsº:
      -ºad hoc SASº: start/expiration time and permissions
                     are all specified on the SAS URI.
                     - This SAS can be created on a container,
                       BLOB, table, or queue.
      -ºSAS with a stored access policyº:
        - a stored-access-policy is defined on aºresource-containerº
          for BLOBs/Tables/Queues, and re-used to manage constraints
          for 1+ SASs/resources.
        - When associating a SAS to a stored-access-policy,
          the former inherits the constraints (start/expiration time
          and permissions) of the latter.
          - This allows revoking permissions for non-expired SASs.
          RºThis is not possible for ad hoc SASsº, which can only
            be invalidated by regenerating the storage-account keys.
BLOB events
Bº eventsº
Bº└ºOºeventsº triggered on BLOB creation/deletion are
    (reliably) pushed to OºA.Event Gridº, which delivers them
    to Oºsubscribersº(Functions, Logic apps, custom http listeners,...)
    with rich retry policies and dead-letter delivery.

    ºBLOB storageº     ┐       ┌─────┐ event      │ A.Functions
     resource groups   ┤topics │event│ subscript. │ A.Logic apps
     Azure subscription┼──────→┤grid ├───────────→┤ A.Automation
     event hubs        ┤       └─────┘            │ Webhooks
     custom topics     ┘
     ^^^^^^^^^^^^^^^^^                              ^^^^^^^^^^^^
     event publishers                               event handlers

Bº└ Available in account-types:º
    -ºgeneral-purpose v2º storage
    -ºBLOBº               storage:
      - Specialized storage account for BLOBs.
      - similar to general-purpose storage accounts, sharing all
        durability, availability, scalability,
        and performance features, including 100% API
        consistency for block-BLOBs and append-BLOBs.

Bº└ BLOB storage event types:º
    - Microsoft.Storage.BlobCreated: fired on BLOB creation/replacement
                                     ('putBLOB', 'putBlockList', 'copyBLOB' OPs)
    - Microsoft.Storage.BlobDeleted: fired on BLOB deletion
      └───────┬────────┘             ('deleteBLOB' OP.)
      the 'Microsoft.Storage' prefix
      univocally identifies the
      event as BLOB-storage

Bº└ filtering events:º
  - event subscriptions can be filtered based on:
    - ºevent typeº (BlobCreated|BlobDeleted)
    - ºcontainer nameº
       ☞Remember: │storage│ 1 ←→ N │container│ 1 ←→ N │BLOB│
                  │account│         ^
                                    organize BLOBs in hierarchies
                                    a la F.S.

    - ºBLOB nameº

  - filters can be applied at event-subscription or later on.
  - Event grid Gºsubject filtersº work based on
  Gº"begins with"º and Gº"ends with"º matches.
  - BLOB storage events Gºsubject formatº:

    /blobServices/default/containers/{container}/blobs/{BLOBname}
    └-------match all*1     storage events -┘
    └-------match container storage events ---------┘
    └-------match blob      storage events ----------------┘
          and applying theºsubjectBeginsWithºfilter
          *1: all = all in storage-account

        ☞ UseºsubjectEndsWithº(".log") to
          refine filtered events
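The prefix/suffix matching above can be sketched in a few lines. The container name `photos` and blob name `app.log` below are made up; the subject format is the documented blob-event one:

```python
def event_matches(subject: str,
                  begins_with: str = "",
                  ends_with: str = "") -> bool:
    """Mimic Event Grid's subjectBeginsWith / subjectEndsWith
    filtering: plain prefix and suffix string matches."""
    return subject.startswith(begins_with) and subject.endswith(ends_with)

# hypothetical subject following the blob-event format:
subj = "/blobServices/default/containers/photos/blobs/app.log"
```

e.g. `begins_with="/blobServices/default/containers/photos/"` matches all events in that container, and `ends_with=".log"` further narrows to `.log` blobs.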

BºPractices for consuming eventsº
 - since multiple subscriptions can be configured to route events to
   the same event handler, it is important ºnot to assumeº
  ºthat events are from a particular sourceº
   but toºcheck the topicºof the message to ensure
   that it comes from the storage-account you are expecting.
 - check eventType is the expected one.
   (do not assume defaults)
 - messages can arrive out-of-order:
   - use "etag" field to understand if information about
     objects is still up-to-date.
   - use sequencer fields to understand the order of
     events on any particular object.
    - use the 'BLOBtype' field (BlockBlob|PageBlob) to identify
      the operations allowed.
    - use the 'url' field with the Cloud(Block|Append)Blob
      constructors to access the BLOB.
   - ignore non-known (reserved) fields.
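The defensive checks above can be condensed into a small handler-side sketch. `accept_event` and `is_newer` are hypothetical helpers; the field names (`topic`, `eventType`, `sequencer`) are the documented event-schema ones:

```python
def accept_event(event: dict,
                 expected_topic_suffix: str,
                 expected_types: set) -> bool:
    """Do not trust subscription routing: verify topic and
    eventType explicitly before processing an event."""
    return (event.get("topic", "").endswith(expected_topic_suffix)
            and event.get("eventType", "") in expected_types)

def is_newer(sequencer_a: str, sequencer_b: str) -> bool:
    """'sequencer' is an opaque string whose string ordering reflects
    the order of events on a given blob; compare per object to
    discard stale, out-of-order events."""
    return sequencer_a > sequencer_b
```

A handler would drop any event where `accept_event(...)` is False, and ignore updates whose sequencer is not newer than the last one seen for that blob.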
daily use
Bºset/get properties and metadata using RESTº
Bº└º(custom) metadata is represented as HTTP headers, that
    can be set along a container|BLOB create-request,
    or explicitly on an independent request for an
    existing resource.
Bº└ºmetadata header format: x-ms-meta-name: value
    - ver.2009-09-19+: names must adhere to C# identifier naming rules
    - repeated metadata headers return HTTP error code 400
    - metadata is limited to 8 KB max size.

Bº└ºOperations supported:
  - Metadata can overwrite existing keys.
    URI syntax:
    - GET/HEAD $base_url/mycontainer?restype=container⅋comp=metadata  ← container
               $base_url/mycontainer/myBLOB?comp=metadata             ← BLOB

    - PUT  ←  RºWARNº: without any headers clears all existing metadata

    - Standard HTTP headers supported:
    ───────────────    ────────────────────────────────────
      CONTAINERS                    BLOBs
    ───────────────    ────────────────────────────────────
    · etag             · etag            · content-encoding
    · last-modified    · last-modified   · content-language
                       · content-length  · cache-control
                       · content-type    · origin
                       · content-md5     · range
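The `x-ms-meta-*` mapping and the 8 KB cap above can be sketched as follows; `metadata_headers` is a hypothetical helper, not an SDK function:

```python
def metadata_headers(meta: dict) -> dict:
    """Render user metadata as 'x-ms-meta-<name>' HTTP headers,
    enforcing the 8 KB total-size limit mentioned above."""
    headers = {f"x-ms-meta-{name}": value for name, value in meta.items()}
    total = sum(len(k) + len(v) for k, v in headers.items())
    if total > 8 * 1024:
        raise ValueError("metadata exceeds the 8 KB limit")
    return headers
```

The returned dict would be merged into the request headers of a container/blob PUT.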

Bºmanipulating properties/metadata in .NETº
    CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient(); ← client gives access to BLOB containers

  CloudBlobContainer container = client.GetContainerReference("mycontainer"); ← reference a specific container
  container.CreateIfNotExists();                           ← ensure that it exists (hydrated reference)

  await container.FetchAttributesAsync();                  ← retrieve properties and metadata
  BlobContainerProperties props = container.Properties;    ← props can be read now:
                                                             etag            Used for optimistic concurrency.
                                                                             (Continue if etag is not old)
                                                             publicaccess    level of public access allowed to container
                                                                             (:= BLOB | container | off | unknown)
                                                             hasLegalHold    container has an active legal hold?
                                                                             it help ensure that BLOBs remain
                                                                             unchanged until the hold is removed.
                                                             hasImmutabilityPolicy  helps to ensure BLOBs are stored
                                                                             for a minimum amount of retention time.

  IDictionary˂string, string˃ metadata = container.Metadata; ← metadata can now be set/changed
                         metadata.Add("doctype", "textdocuments");
                         metadata["category"] = "guidance";
  await container.SetMetadataAsync(); // ← persist new metadata

BºLease BLOB operationsº
  - sets/manages lock on a BLOB for write/delete operations.
  - lock duration:º15 to 60 secs or infiniteº
                  (60 secs ver ˂2012-02-12)

  - Lease Operations:
    -ºAcquireº: request new lease
    -ºRenew  º: renew existing lease
    -ºChange º: change ID of existing lease.
    -ºReleaseº: free lease /lock
    -ºBreak  º: end the lease but ensure that another client cannot
                acquire a new lease until the current lease period
                has expired.

    PUT $base_url/myBLOB?comp=lease      ← request URI
            ⅋timeout=...                 ← optional
    (a different URI format applies for the emulated storage service)

    request header     description
    authorization      required. authentication scheme, account name, and signature.
    date or            required. specifies the coordinated universal
    x-ms-date          time (utc) for the request.
    x-ms-version       optional. specifies the version of the operation to
                       use for this request.
    x-ms-lease-id:     id required to renew, change, or release the lease.
                       the value of x-ms-lease-id can be specified in any valid
                       guid string format.
    x-ms-lease-action  acquire¦renew¦change¦release¦break
    x-ms-lease-duration -1 (never expires) ¦ 15-60 (seconds)
    x-ms-proposed-lease-id "id" optional for acquire, required for change.
                        guid string format.
    origin              optional. (for cross-origin resource sharing headers)
    x-ms-client-request-id  optional. provides a client-generated, opaque
                        value with a 1 kb character limit recorded in analytics
    → PUT $base_url/myBLOB?comp=lease http/1.1
    → request headers:
    → x-ms-version: 2015-02-21
    → x-ms-lease-action: acquire
    → x-ms-lease-duration: -1
    → x-ms-proposed-lease-id: 1f812371-a41d-49e6-b123-f4b542e851c5
    → x-ms-date: $date
    → authorization: sharedkey testaccount1:esskmoydk4o+ngtutyeolbi+xqnqi6abmiw4xi699+o=

    ←     acquire: "OK" status: 201 (created).
    ←     renew:   "OK" status: 200 (ok).
    ←     change:  "OK" status: 200 (ok).
    ←     release: "OK" status: 200 (ok).
    ←     break:   "OK" status: 202 (accepted).

    ←    etag
    ←    last-modified
    ←    x-ms-lease-id: id
    ←    x-ms-lease-time: seconds
    ←    x-ms-request-id
    ←    x-ms-version
    ←    date
    ←    access-control-allow-origin
    ←    access-control-expose-headers
    ←    access-control-allow-credentials
    ←    authorization
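The `x-ms-lease-*` header set above can be assembled with a small sketch. `lease_headers` is a hypothetical helper (auth and date headers are omitted); the header names and action values are the documented ones:

```python
def lease_headers(action: str, lease_id: str = None,
                  duration: int = -1, proposed_id: str = None) -> dict:
    """Build the lease headers for a 'comp=lease' PUT request.
    duration: -1 (infinite) or 15..60 seconds, acquire only."""
    headers = {"x-ms-version": "2015-02-21",
               "x-ms-lease-action": action}
    if action == "acquire":
        headers["x-ms-lease-duration"] = str(duration)
        if proposed_id:
            headers["x-ms-proposed-lease-id"] = proposed_id
    elif action in ("renew", "change", "release", "break"):
        if lease_id:
            headers["x-ms-lease-id"] = lease_id
        if action == "change" and proposed_id:
            headers["x-ms-proposed-lease-id"] = proposed_id
    return headers
```

e.g. acquire an infinite lease with `lease_headers("acquire", duration=-1, proposed_id=some_guid)`, then release it with `lease_headers("release", lease_id=granted_id)`.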

Browser JS API
Browser Web Cam → upload to Azure Blob storage how-to:

  - azure-storage NPM: maintained by Microsoft; it allows
    uploading the blob directly from web JS client code:

    // azure = require('azure-storage'); container name 'images'
    // is illustrative
    const img      = this.canvas.current.
                     toDataURL('image/jpeg', 1.0).split(',')[1];
    const buffer   = Buffer.from(img, 'base64');
    const fileName = `${new Date().getTime()}.jpg`;

    const BASE_URL = 'https://...';  // ← blob endpoint + container (elided)
    azure.createBlobService('UseDevelopmentStorage=true')
      .createBlockBlobFromText('images', fileName, buffer,
        (error, result, response) =˃ {
          if (error) {
            // ... handle error
            return;
          }
          const url = `${BASE_URL}/${fileName}`;
          this.canvas.current.toBlob((blob: Blob) =˃ {
            this.props.closeModal(url, blob);
          });
        });
implement authentication
Active Directory
Azure Active Directory
PRICE: Free for less than 500_000 objects.

REF: Microsoft cloud identity for enterprise architects

  SaaS                  Azure PaaS           Azure IaaS (VMs,...)
 ┌───────────────────┐ ┌──────────────────┐ ┌──────────────────────┐
 │        ┌─────────┐│ │┌───────┐┌───────┐│ │┌───────┐ ┌──────────┐│
 │ Office │MS Intune││ ││LOB app││LOB app││ ││LOB app│ │LOB app@VM││
 │ 365    └─────────┘│ │└───────┘└───────┘│ │└───────┘ └──────────┘│
 │                   │ └──────────────────┘ └─────^──────────^─────┘
 │        ┌─────────┐│  LOB: Line of business     │          │
 │        │Dynam.CRM││                            │          │
 │        └─────────┘│                            │          │
 └───────────────────┘                            │          │
│ºAzure AD Integrationº                     ┌─────┴──┐  ┌────┴──────┐│
│                                           │ Domain │  │Extend     ││
│                                           │Services│  │on─premises││
│                                           └────────┘  │directory  ││
│                                                       │services to││
│                                                       │your Azure ││
│                                                       │VM         ││
│                                                       └───────────┘│
├─ºon─premises infra integrationº           │ (Free, Basic, Premium)
│  ├─ Synch. or federation of identities    │ - Synchronization or federation with
│  ├─ Self_service password reset with      │   on-premises directories through Azure
│  │  write back to on_premises direc.      │   AD Connect (sync engine)
│  ├─ Web App Proxy for auth. against       │ - Directory objects
│  │  on_premises web based apps.           │ - User/group management
├─ºUser Accountsº                           │   (add/update/delete), user-based
│  ├─ MyApps Panel                          │   provisioning, device registration
│  ├─ Multi_factor Authentication           │ - Single sign-on (SSO)
│  ├─ Conditional access to resources       │ - Self-service password change for
│  │  and apps                              │   cloud users
│  ├─ Behaviour and risk_based access       │ - Security and usage reports
│  │  control with ID protection            │
├─ºDevicesº                                 │ (Basic, Premium only)
│  ├─ Mobile device management with Intune  │ - Group-based access management and
│  ├─ Windows 10 Azure AD Join and SSO      │   provisioning
│  └─ Device Registration and management    │ - Self-service password reset for cloud
│     for non_Win.devices (Android, ...)    │   users
├─ºPartner Collaborationº                   │ - Company branding (logon pages, Access
│  └─ Secure collaboration with business    │   Panel customization)
│     partners using Azure AD B2B           │ - Application Proxy
│     collaboration                         │ - Enterprise SLA of 99.9%
├─ºCustomer Account Managementº             │
│  └─ Self_registration for customers using │ (Premium only)
│     a unique ID or existing social        │ - Self-service group and app
│     identity with Azure AD B2C            │   management, self-service application
├─ Application Integration                  │   additions, dynamic groups
│  ├─ Pre_integrated with 1000s of SaaS     │ - Self-service password reset, change,
│  ├─ Deep Integra. with Office365          │   unlock with on-premises writeback
│  ├─ Cloud App Discovery                   │ - Multi-factor authentication (cloud
│  ├─ PaaS app. integration                 │   and on-premises, MFA Server)
│  ├─ Domain Services                       │ - MIM CAL + MIM Server
│  └─ Integration with AWS, ....            │ - Cloud App Discovery
└─ Administration                           │ - Connect Health
   ├─ Reporting                             │ - Automatic password rollover for group
   ├─ Global Telemetry and                  │   accounts
   │  machine learning
   ├─ Enterprise scale
   ├─ Worldwide availability
   └─ Connect Health

A.Accounts vs Tenant Subs. @[]
.Net/C# usage
OºSystem.Security.Claims.ClaimsPrincipalº mscorlib (4.5+)
  Set of attributes that defines the (authenticated) user
       in the context of your application:
       name, assigned roles, ... .

  - An app can implement authentication with different:
    - protocols
    - token formats
    - providers
    - consumption
    - development stacks

OºClaimsPrincipal represents the outcome of auth independently of the methodº

  - Usually ClaimsPrincipal is "fetched" from:
    - Thread.CurrentPrincipal
    - HttpContext.Current.User.
    - Custom source (ClaimsPrincipalSelector)
  - example:
    │ ClaimsPrincipal cp = ClaimsPrincipal.Current;
    │ string givenName = cp.FindFirst(ClaimTypes.GivenName).Value;
    │ string   surName = cp.FindFirst(ClaimTypes.Surname  ).Value;
AD Federation Services
- A Guide to Claims-Based Identity and Access Control (2nd Edition)

- REF: @[]
- AD domain controller
- join ADFS server to Domain
- Certificates for both ADFS and claims aware App Server

- Add trust between ADFS and App Server certificates

- TODO: Using ADFS as ID Provider for Azure AD B2C
AD Domain Services
ADDS allows VMs to join a domain avoiding new domain controllers,
by reusing Azure AD:
User → VM: login
VM   → Corporate_AD: login
ADDS features:
- domain join
- Kerberos authentication
Microsoft identity platform
BºAzure AD main use-casesº:
  - single-page application (SPA)
  - web browser to web application
  - native application to web API
  - web application to web API
  - daemon or server application to web API

BºApp  parameters for registration in ADº:
  -ºapplication ID URIº: (authentication phase)
    identifies the targeted (third-party) app we want
    access (=="token") to.
    This URI will also be included in tokens during the AA phase.
  -ºreply URL, redirect URI:º
    -ºreply URLº   : Webhook receiving auth token
    -ºredirect URIº: unique ID to which Azure AD
                     will redirect the user-agent in an
                     OAuth 2.0 request.
  -ºapplication IDº: (returned during registration).
                     used in requests for  new
                     Oºauthorization-codeº or Oºtokenº
  -ºkeyº           : sent back in AA along ºapplication IDº
                     on new Authorization requests

BºApps classification regarding Identity:º
  -ºsingle tenantº: App looks in its own directory for a user.
                    tenant endpoint might be like:
                    https://login.microsoftonline.com/contoso.onmicrosoft.com
  - ºmulti-tenantº: App needs to identify a specific user from
                    all the directories in Azure AD.
                    A common authentication endpoint exists:
                    https://login.microsoftonline.com/common

  - Azure AD uses same signing key for all tokens in all
    directories, single or multi-tenant application.

BºApps roles regarding Identity:º
-ºclientº         role : consume (third-party) resources
-ºresource serverºrole : exposes APIs to be consumed by AA app-clients
-ºclient+reso.srvºrole :

BºExample Multitenant Layoutº:
    adatum Tenant  : used by "HR APP" Resource Provider
    contoso        : used by contoso org ("HR APP" consumer)

adatum ─ 1 ─→  "HR app" srv.principal   "HR APP"-keys will be used
    │←-------  @HOME-TENANT             by "HR APP" app
    │                                   ^
    └───────→  "HR app"                 │
               registration             │
               @CONTOSO-TENANT          │
                  │                     │
                  │2 admin completes    │
                  │ consent             │
                  v                     │
 contoso ─3─→  add "HR APP" srv.ppal ───┘
               it represents the use of an instance
               of the application at runtime, governed
               by the permissions consented by the
               admin of the CONTOSO-TENANT.
               (What privileges the third-party
               "HR app" is granted in local tenant).
               CONTOSO-TENANT will send Oºaccess tokenº
               to other apps with claims describing
               permissions(scopes) granted to
               (third service) "HR app".

BºScopes (=="permissions") defined in Azure ADº:
  -Bºdelegated permissionsº:
     the signed-in user delegates permissions to the registered
     third-party app
     (directly, or through intermediate admin consents)
  -Bºapplication permissionsº:
     used by background/daemon services (no signed-in user);
     can only be consented to by an admin.

Bºpermission attributes in A.AD adminº
  -ºID       º: permission GUID.
                - ex: 570282fd-fa5c-430d-a7fd-fc8dc98a9dca
  -ºisEnabledº: is available for use?
  -ºtype     º: user|admin consent
  -ºvalue    º: string used to identify permission during
                OAuth 2.0 authorize flows.
               ºIt may also be combined with app ID URIº
               ºto form fully qualified permission.    º
                - ex: Mail.Read
  -ºadminconsentdescriptionº: shown to admins
  -ºadminconsentdisplaynameº: friendly name
  -ºuserconsentdescription º: shown to users
  -ºuserconsentdisplayname º: friendly name_of_app

Bºtypes of consentº
  -BºSTATIC USER CONSENTº: occurs automatically during the
    OAuth 2.0 authorize flow specifying the targeted resource
    that registered app wants to interact with.
    - registered app must  have previously specified all
      needed permissions (In the A.Portal,...).
  -BºDYNAMIC USER CONSENTº: Azure AD app model v2.
    app requests permissions that it needs in the OAuth 2.0
    authorize flow for v2 apps.
    note: ALWAYS set the static permissions (those specified in
    application AD registration) to be the superset of the
    dynamic (codified as query-param) permissions requested at
    runtime to avoid admin-consent flow errors.
  RºWARNº: dynamic consent presents a big challenge for
           permissions requiring admin consent
           since the admin consent experience
           doesn't know about those permissions at consent
           time. if you require admin privileged permissions
           or if your app uses dynamic consent, you must
           register all of the permissions in the Azure
           portal (not just the subset of permissions that
           require admin consent). this enables tenant
           admins to consent on behalf of all their users.

 -BºADMIN CONSENTº: required for certain high-privilege permissions.
   it ensures that admins keep additional control before apps or
   users can access highly-privileged data.

  - resources should follow the naming pattern
    subject.permission[.modifier], where:
    - subject   : type of data available
    - permission: action that a user may take upon that data
    - modifier  : describes specializations of the permission
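The naming pattern above can be parsed mechanically; `parse_permission` is a hypothetical helper, and `Files.Read.All` / `Mail.Read` follow the subject.permission[.modifier] style used in Microsoft's docs:

```python
def parse_permission(value: str):
    """Split a permission value into (subject, permission, modifier),
    the modifier being optional."""
    parts = value.split(".")
    if len(parts) < 2:
        raise ValueError("expected subject.permission[.modifier]")
    subject, permission = parts[0], parts[1]
    modifier = parts[2] if len(parts) > 2 else None
    return subject, permission, modifier
```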
OpenID connect
BºOAuth "vs" OpenID connectº
  - OAuth 2.0  : protocol to obtain Oºaccess-tokensº to
                 protected resources.
  - OpenID con.: - built on top of OAuth to provide, in the form
                   of an Oºid_tokenº, verified end-user identity
                   plus basic profile information.
                 - Bºrecommended for web applications accessed via browserº

  - register "application ID" within App-Tenant)
    portal → top right account → switch Directory
    → choose Azure AD tenant (ignore for single A.AD tenant users)
      → Click app registrations → new application:
        fill in data:
        - sign-on URL   ← ºweb applicationsº    URL  where users can sign in
        - redirect URI  ← ºnative applicationsº URL to return tokens to
                                                 ex: http://myfirstaadapp.
      → Write down application id.

BºOpenID configurationº:
    - JSON required to perform sign-in is located at:
     º${tenant}º/.well-known/openid-configuration
    - The JSON includes:
      - URLs to use
      - location of the service's public signing keys
        ("jwks_uri", "token_endpoint_auth_methods_supported", ...)
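Consuming the discovery document is a matter of picking a few fields. The config below is a trimmed, hypothetical example (real AAD field names, illustrative `common`-tenant URLs); `signin_endpoints` is a made-up helper:

```python
# trimmed discovery document, as served at
# ${tenant}/.well-known/openid-configuration:
openid_config = {
    "authorization_endpoint": "https://login.microsoftonline.com/common/oauth2/authorize",
    "token_endpoint": "https://login.microsoftonline.com/common/oauth2/token",
    "end_session_endpoint": "https://login.microsoftonline.com/common/oauth2/logout",
    "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
    "token_endpoint_auth_methods_supported": ["client_secret_post", "private_key_jwt"],
}

def signin_endpoints(cfg: dict) -> dict:
    """Pick out the URLs a sign-in implementation actually uses."""
    wanted = ("authorization_endpoint", "token_endpoint",
              "end_session_endpoint", "jwks_uri")
    return {key: cfg[key] for key in wanted if key in cfg}
```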

BºOpenID flowº:
  user     → browser: sign-in page
  browser  → browser: redirect to /authorize endpoint.
  browser  →      AD: GET ${tenant}/OAuth2/authorize?...
                        ⅋response_type=id_token              ← alt 1. get only id_token
                        ⅋response_type=id_tokenOº+codeº      ← alt 2. get also access_token
                        ⅋resource=...                        ← targeted resource to get access to
                       º⅋nonce=123423412º                    ← avoid replay attacks

  browser  ←    AD: (successful response example)
                    POST /myapp/ http/1.1
                    HOST: localhost
                    CONTENT-TYPE: application/x-www-form-URLencoded

                    id_token=eyJ...⅋ ← JWT (async signature). Can be validated in
                                       the browser, but usually validation is left to
                                       backend servers (vs web clients). The browser
                                       will just be in charge of keeping it safe.
                  Oº⅋code=awaba⅋º   ← alt 2: access_token also requested.

 └ once the JWT signature is validated, backend servers will need to
   verify these claims in the JWT/Tenant:
   - ensure user has signed up for the app.
   - ensure user has proper authorization/privileges
   - ensure certain strength of authentication has occurred

 └ once Oºid-tokenº validated, Gºapp user session can startº.
   - GºClaims in JWT Oºid_tokenº will be used to retrieve user-infoº
     Gºfor the appº
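The /authorize request sketched above can be built as follows. This is a minimal sketch: the endpoint shape is the AAD v1 one, while `client_id`/`redirect_uri` values are placeholders:

```python
import uuid
from urllib.parse import urlencode

def authorize_url(tenant: str, client_id: str, redirect_uri: str,
                  want_code: bool = False) -> str:
    """Build an OpenID Connect sign-in URL.
    want_code=True requests an authorization code too (alt 2)."""
    params = {
        "client_id": client_id,
        "response_type": "id_token code" if want_code else "id_token",
        "redirect_uri": redirect_uri,
        "response_mode": "form_post",
        "scope": "openid",
        "nonce": uuid.uuid4().hex,   # ← avoid replay attacks
        "state": uuid.uuid4().hex,
    }
    return (f"https://login.microsoftonline.com/{tenant}"
            f"/oauth2/authorize?{urlencode(params)}")
```

A fresh `nonce` must be generated per request and checked against the claim echoed back in the id_token.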

BºSign-out requestº
 - it is not sufficient to clear the app's cookies.
   redirect the user to theºend_session_endpointº (listed in JSON metadata);
   otherwise Rºthe user can reauthenticate to the appº without
   entering their credentials again, since they still have
   a valid single sign-on session with the Azure AD endpoint.

   GET https://.../OAuth2/logout?
  ºSingle sign-outº:
   AD clears the user's session from the browser. it will also send
   an http get request to the registered logouturl of all the
   applications that the user is currently signed in to.
   (and registered by this tenant).
   applications must respond to this request by clearing any
   session that identifies the user and returning a 200 response.

   to set up the logout URL in app code:
   - set logouturl in
     → a.portal → select AD for account
       → choose app registrations
         → choose app → settings → properties → logout URL text box.
OAuth2 implicit grant
- Provides the Access in "AAA"
RºWARNº: Longest list of security concerns in the OAuth2 specification
- approach implemented by ADAL JS
- recommended for SPA applications.

- compare: the authorization code grant uses Bºtwo separate endpointsº:
  -Bºauthorization URLº: used for the user-interaction phase.
                         generates the Gºauthorization codeº.
  -Bºtoken         URLº: used by the client to exchange the
                       Gºauthorization codeº for an Oºaccess tokenº
                                                    Oºid     tokenº (in OpenID)
                       - web apps are required to present their
                       Gºown app. credentialsº here so the
                         authorization server can authenticate them
                         (no way to do that for a JS SPA client).

- in the implicit grant the token-URL step is skipped, so
 Bºcross-origin calls are eliminatedº

- in the original OAuth2 specification, tokens are returned in a URI
  fragment, making them available to JS code while warranting that
  they will not be included in redirects toward the server.

- Bºthe OAuth2 implicit grant never returnsº Gºrefresh tokensº
  to the client, mostly for security reasons:
  a refresh token is less narrowly scoped, granting more privileges,
  hence inflicting far more damage if it leaks.

- the app will use a hidden iframe to perform new token requests
  against the A.AD authorization endpoint.
  - Theºartifact that makes the silent renewal possible,º
    theºAzure AD session cookie,º is managed outside of
    the application.
  - Signing out from Azure AD automatically disables the renew process.
  - the absence of the A.AD session cookie in native clients
    discourages this method there.

Authorization Code Grant
- OAuth spec section 4.1
- perform authentication and authorization in most
  application types. (web/native apps,...)

BºPRE-SETUP:º
- register your app with Azure AD.
- get the OAuth 2.0 authorization endpoint for your Azure AD tenant
  by selecting app registrations → endpoints

Bºauthorization flowº
  user   ← client: redirect to /authorize
                   - permissions needed from user
  client → AD: {tenant}/OAuth2/authorize?
               client_id=6731de76-14a6-49ae-97bc-6eba6914391e
              º⅋response_type=codeº
               ⅋redirect_uri=http%3a%2f%2flocalhost%3a12345
                  ← (opt) authen resp must match a registered
                    URLenc(redirect_uris).
                    urn:ietf:wg:OAuth:2.0:oob for native/mob apps
              º⅋response_mode=queryº ← (opt) query*|fragment|form_post
               ⅋state=12345
             Gº⅋resource=...º ← target web API
                └─────┬─────┘
                      └ portal → Azure AD → application registrations
                        → application's settings page → properties.
               ⅋prompt=login  (opt) user interaction type :=
                 ├─ login: user should be prompted to reauthenticate.
                 ├─ select_account: user prompted to select an account,
                 │    interrupting single sign-on.
                 │    - user may select an existing signed-in account,
                 │      enter credentials for a remembered account, or
                 │      use a different account altogether.
                 ├─ consent: user consent has been granted, but needs to
                 │    be updated. user should be prompted to consent.
                 └─ admin_consent: admin should be prompted to consent on
                      behalf of all users in their org
               ⅋login_hint=...  (opt) pre-fill the email ...
               ⅋domain_hint=... (opt) tenant hint. if federated to
                    on-premises AD, AAD redirects to the specified
                    tenant federation server.
               ⅋code_challenge_method=... (recommended) method used to
                    encode code_verifier for the code_challenge
                    parameter. := plain | s256
               ⅋code_challenge=... (recommended)
                    - secures authorization code grants via proof key
                      for code exchange (PKCE) from a native or
                      public client.
                    - required if code_challenge_method is used

  user_cli ← AD: form
  user_cli → AD: form response
  user_cli ← AD: GET http/1.1 302 found
                 location: http://localhost:12345/?
                 Oºcode=kaplreqdfsbzjq...º  ← app requests Oºaccess tokenº with it
                 ⅋session_state=7b29111d-.. ← opaque guid
                 ⅋state=12345               ← avoids cross-site request forgery
                                              (CSRF) attacks against the client.
  user_cli → AD: POST /{tenant}/OAuth2/token http/1.1
                 host:
                 content-type: application/x-www-form-URLencoded

               Oºgrant_type=authorization_codeº
               º⅋client_id=...º
               Oº⅋code=....º
                 ⅋redirect_uri=https%3a%2f%2flocalhost%3a12345
                 ⅋
                 ⅋client_secret=p@ssw0rd
  user_cli ← AD: {
                 Gº"access_token": "eyj0..."º, ← JWT used to authenticate
                   "token_type": "bearer",       to the resource
                   "expires_in": "3600",
                   "expires_on": "1388444763",
                   "resource" : "",
                   "refresh_token": "...",
                   "scope" : "",
                   "id_token": "..."
                 }
  user_cli → resource: call resource
                 GET /data http/1.1
                 host:
               Gºauthorization: bearer ${JWT}º
  user_cli ← resource: http/1.1 200
                 www-authenticate: bearer authorization_uri= "",
                 └──────────────────────┬──────────────────────┘
                   client must validate this in the response
                 - resource_id="unique identifier of the resource."
                   user_cli can use this identifier as the value of
                   the resource parameter when it requests a token
                   for the resource.
                 - recommended strategy to prevent attacks: verify
                   that resource_id starts with the web API URL
                   being accessed.
                 - the client application must reject a resource_id
                   not beginning with the base URL
  user_cli → AD: refresh token
                 POST /{tenant}/OAuth2/token http/1.1
                 host:
                 content-type: application/x-www-form-URLencoded

                 client_id=6731de76-14a6-49ae-97bc-6eba6914391e
                 ⅋refresh_token=...
                 ⅋grant_type=refresh_token
                 ⅋
                 ⅋client_secret=jqqx2pno9bpm0ueihupzyrh
  user_cli ← AD: {
                   "token_type": "bearer",
                   "expires_in": "3600",
                   "expires_on": "1460404526", ← expiration timestamp
                   "resource" : "",
                 Gº"access_token": "...",º
                   "refresh_token": "..."
                 }

  NOTE: refresh tokens are valid for all resources that your client
        has already been given consent to access; thus, a refresh
        token issued on a request for resource= can be used to
        request a new access token for resource=
  - occasionally refresh tokens expire, are revoked, or lack
    privileges; the app must handle such errors by restarting
    the authorization flow.
client credentials grant
- OAuth 2 spec. Section 4.4
- the client credentials grant flow permits a web service
  (confidential client) to use its own credentials, instead of
  impersonating a user, to authenticate when calling another
  web service. (machine-to-machine)
- Azure AD also allows the calling service to use a certificate
  (instead of a shared secret) as a credential.

BºPRE-SETUPº
- both the client app and the targeted resource must be
  registered in Azure AD.

BºFlow diagramº
  service → Azure AD : authenticate + request token
                       ${tenant}/OAuth2/token
  service ← Azure AD : access token
  service → resource : request (access token)

  POST / http/1.1
  host:
  content-type: application/x-www-form-URLencoded

  - alt 1 : shared secret
    ?grant_type=client_credentials
    ⅋client_id=...
    ⅋client_secret=..       (shared secret)
    ⅋resource=app_id_uri    ← targeted service

  - alt 2 : certificate
    ?grant_type=client_credentials
    ⅋client_id=...
    ⅋resource=app_id_uri    ← targeted service
    ⅋client_assertion_type=
        urn:ietf:params:OAuth:client-assertion-type:JWT-bearer
    ⅋client_assertion=      ← JSON web token the app needs to
                              create and sign with the registered cert

  service ← Azure AD : {
    "access_token":"...",
    "token_type":"bearer",
    "expires_in":"3599",
    "expires_on":"1388452167",
    "resource":""
  }
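Both request alternatives reduce to form-encoding a body. A minimal sketch; `client_credentials_body` is a hypothetical helper and the values passed to it are placeholders:

```python
from urllib.parse import urlencode

ASSERTION_TYPE = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer"

def client_credentials_body(client_id: str, resource: str,
                            client_secret: str = None,
                            client_assertion: str = None) -> str:
    """Form-encode a client-credentials token request, supporting
    the shared-secret (alt 1) or signed-JWT certificate (alt 2)
    variants."""
    body = {"grant_type": "client_credentials",
            "client_id": client_id,
            "resource": resource}
    if client_secret is not None:          # alt 1: shared secret
        body["client_secret"] = client_secret
    elif client_assertion is not None:     # alt 2: certificate (JWT)
        body["client_assertion_type"] = ASSERTION_TYPE
        body["client_assertion"] = client_assertion
    return urlencode(body)
```

The string returned would be POSTed to `${tenant}/OAuth2/token` with content-type `application/x-www-form-urlencoded`.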
VM/... managed identities
- Azure AD feature
- Gºfree with Azure AD for Azure subscriptionsº
  Gºno additional costº
- formerly known as managed service identity (msi)

-ºproblemº: how to manage credentials in your code for authenticating
  to cloud services and keep credentials secure.
  -  Azure key vault provides a way to securely store
     credentials, secrets, and other keys,
   Rºbut your code has to authenticate to key vault to retrieve themº

-ºmanaged identities solutionº:
  app use the (AD) manage identity to authenticate to any service
  that supports Azure AD authentication, including key vault,
  without any credentials in your code.

-ºclient idº   : UID generated by Azure AD, tied to an application
                 and service principal during its initial provisioning.

-ºprincipal idº: Object ID of the service principal object for
                 your managed identity, used to grant RBAC access
                 to Azure target resources.

BºAzure instance metadata service (IMDS)º:
  accessible to all IaaS VMs created via ARM at the local
  (non-routable) URL http://169.254.169.254:

  - Types of Gºmanaged-identitiesº:
    -Gºsystem-assignedº: enabled directly on a VM instance.
      an identity for the instance is created in Azure AD,
      as well as credentials with lifecycle == VM lifecycle.

    -Gºuser-assignedº: created as standalone resource with associated
      AD identity in a (trusted-by-subscription) tenant.
      - The identity can be re-assigned to 1+ VMs/... instances.

    - app code can use the managed identity to request (OAuth)
      access tokens for services supporting Azure AD authentication.

     - RBAC in Azure AD is used to assign the role to the
       VM service principal.
       "Key-vault" grant-access must be set.

  BºAzure VM system-assigned flowº
    (ARM = Azure Resource Manager)
    admin → ARM: enable system-assigned Gºmanaged-identityº on a VM.
    ARM   → AD : create VM service-principal in (trusted-by-subscription) tenant
    ARM   → VM : - updates Azure instance metadata service identity endpoint
                  with the service principal client id and certificate.
                - provisions VM extension with the service principal
                  client ID/cert. (extension planned for deprecation,
                  january 2019)
    --- requesting tokens --
    app@VM → VM: GET /metadata/identity/OAuth2/token    ← IMDS identity endpoint
                 ?API-version=2018-02-01 (or greater)   ← IMDS version
                 ⅋resource=...                          ← target resource (ARM)
                 optional params:
                 ⅋object_id   object_id of managed identity the token is for.
                              required if VM has multiple identities
                 ⅋client_id   client_id of managed identity the token is for.
                              required if VM has multiple identities
                 header "metadata: true"    ← required as mitigation against
                                              server side request forgery
                                              (ssrf) attacks.

    app@VM ← VM: JSON with OºJWT access tokenº
                 ← http/1.1 200 ok
                 ← content-type: application/JSON
                 ← {
                 ← Gº"access_token": "eyj0exai...",º
                 ←   "refresh_token": "",
                 ←   "expires_in": "3599",
                 ←   "expires_on": "1506484173",
                 ←   "not_before": "1506480273",
                 ←   "resource": "",
                 ←   "token_type": "bearer"
                 ← }
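The token request above can be sketched in Python. IMDS is a link-local endpoint only reachable from inside an Azure VM, so the sketch just builds the request; the resource URI shown is an example:

```python
from urllib.parse import urlencode
from urllib.request import Request

# IMDS identity endpoint (link-local, only reachable from an Azure VM)
IMDS = "http://169.254.169.254/metadata/identity/oauth2/token"

def imds_token_request(resource, api_version="2018-02-01", client_id=None):
    """Build the IMDS token request for a system- or user-assigned identity."""
    params = {"api-version": api_version, "resource": resource}
    if client_id:                  # required when the VM has several identities
        params["client_id"] = client_id
    url = IMDS + "?" + urlencode(params)
    # the 'Metadata: true' header is mandatory (SSRF mitigation)
    return Request(url, headers={"Metadata": "true"})

req = imds_token_request("https://management.azure.com/")
# urllib.request.urlopen(req) from inside the VM returns
# the JSON access-token payload shown above
```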

  BºUser-assigned flowº
    admin → ARM : request creation of a user-assigned managed identity.
    ARM   → AD  : create service principal for the user-assigned
                Gºmanaged identityº.
    admin → ARM : request to configure the user-assigned
                  managed identity on a VM.
    ARM   → VM  : updates Azure instance metadata service
                  identity endpoint with the user-assigned
                  managed identityºservice principal client idº
                  and ºcertificateº.
    ARM   → VM  : provisions VM extension (planned for deprecation),
                  adds the user-assigned managed identity service
                  principal client id and certificate.

Bºenable system-assigned managed identityº
    admin account needs theºVM contributorºrole set.
    (no other role is needed)
  $ az login
  $ az group create  \               ← create a resource group
    --name group01 --location westus
  $ az vm create \                   ← alt1: assign at VM creation time
    --resource-group group01 \
    --name myvm \
    --image win2016datacenter \
    --admin-username azureuser \
    --admin-password mypassword12 \
    --assign-identity                ← enables system-assigned identity

  $ az vm identity assign \          ← alt2: assign for existing VM
    -g group01 -n myvm

  $ az vm update \                   ← de-assign option 1:
    -n myvm \                          VM no longer needs system-assigned ID
    -g group01 \                       butºstill needs user-assigned IDsº:
    --set identity.type='UserAssigned'

  $ az vm update \                   ← de-assign option 2:
    -n myvm \                          VM no longer needs system-assigned ID
    -g group01 \                       and has no user-assigned identities:
    --set identity.type="none"

  $ az vm extension delete \         ← remove managed id VM extension
    --resource-group group01 \         (extension planned for deprecation)
    --vm-name myvm \
    -n ManagedIdentityExtensionForWindows

Bºenable user-assigned managed identityº
    admin account needs theºVM contributorºrole set.
    (no other role is needed)

  $ az identity create -g group01 -n myuserassignedidentity
    {
      "clientId": "73444643..",
      "clientSecretUrl": "https://control-westcentralus.identity.../subscriptions/
                          $subscription_id/resourcegroups/$resource_group/providers/...",
    Gº"id": "/subscriptions/$subscription_id/resourcegroups/º       ← resource id
    Gº       $resource_group/providers/microsoft.managedidentity/º
    Gº       userassignedidentities/$user_assigned_identity_name",º
      "location": "westcentralus",
      "name": "$user_assigned_identity_name",
    Gº"principalId": "e5fdfdc1-ed84-4d48-8551-fe9fb9dedfll",º
      "resourceGroup": "$resource_group",
      "tags": {},
      "tenantId": "733a8f0e-...",
      "type": "microsoft.managedidentity/userassignedidentities"
    }

Bºtoken cachingº
  ""we recommend to implement token caching in your code.
    prepare for scenarios where the resource indicates that
    the token is expired.""
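The recommendation above can be sketched as a small in-process cache keyed on the target resource, refreshing only when the token is near expiry. `fetch_token` is a placeholder (assumption) standing in for the real IMDS/Azure AD call:

```python
import time

class TokenCache:
    """Cache access tokens per resource; refresh shortly before expiry."""
    def __init__(self, fetch_token, skew=300):
        self.fetch_token = fetch_token   # callable(resource) -> token dict
        self.skew = skew                 # refresh this many seconds early
        self._cache = {}

    def get(self, resource):
        entry = self._cache.get(resource)
        if entry is None or time.time() >= int(entry["expires_on"]) - self.skew:
            entry = self.fetch_token(resource)   # e.g. call IMDS here
            self._cache[resource] = entry
        return entry["access_token"]

# usage with a fake fetcher (stands in for the real IMDS call):
calls = []
def fake_fetch(resource):
    calls.append(resource)
    return {"access_token": "tok-%d" % len(calls),
            "expires_on": str(int(time.time()) + 3600)}

cache = TokenCache(fake_fetch)
t1 = cache.get("https://management.azure.com/")
t2 = cache.get("https://management.azure.com/")   # served from cache
```

The `skew` margin also covers the "resource says the token is expired" scenario: refreshing a few minutes early avoids presenting a token that expires mid-request.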

BºEx. allow VM access to a storage-accountº
  $ az login

  $ SPID=$(az resource list \
              -n myvmOrScaleSet01 \
              --query "[*].identity.principalId" \
              --out tsv)
    get service principal for the VM (or scale set) named 'myvmOrScaleSet01'

  $ az role assignment create \      ← grant Reader role on the
    --assignee $SPID \                 resource group containing
    --role 'Reader' \                  the storage account
    --scope /subscriptions/$subid/resourcegroups/$group
certificate, form/token authentication
  - in the context of microsoft Azure, certificate-based
    authentication enables you to be authenticated by Azure AD
    with a client certificate on a windows/mobile device
    when connecting to different services, including
    (but not limited to):
    - Custom services authored by your organization
    - Microsoft Sharepoint online
    - Microsoft Office 365 (or Microsoft exchange)
    - Skype for business
    - Azure API management
    - Third-party services deployed in your organization

  Useful where an organization has multiple front-end apps
  communicating with back-end services.
  Traditionally, certificates are installed on each server,
  and the machines trust each other after validating each
  other's certificates. for example, mutual authentication
  can be enabled to restrict client access.

BºForms-based authenticationº
  - It isRºNOTº an internet standard.
  - appropriate only for web APIs called from a web app,
    so that the user can interact with the html form.
  - Disadvantages:
    - it requires a browser client to use the html form.
    - it requires measures to prevent cross-site request forgery (csrf).
    - user credentials are sent in plaintext as part of the http
      request unless the connection uses https/tls.

BºWindows-based authenticationº
  - integrated windows authentication enables users to log in with their
    windows credentials usingºKerberos or NTLMº.
    client sends credentials in the authorization header.
    windows authentication is best suited for an intranet environment.
    - difficult to use in internet applications without exposing
      the entire user directory.
    - it can’t be used in bring-your-own-device (byod) scenarios.
    - it requires kerberos or integrated windows authentication (NTLM)
      support in the client browser or device.
    - the client must be joined to the active directory domain.

Bºclaims-based authentication in .NETº
Bº└ asp.NET identityº
    - unified identity platform for asp.NET
    - target environments: web, phone, store, or hybrid applications.
    - core features to match token-based authentication:
      - implements a provider model for logins.
        (today local AD, tomorrow Azure AD, social networks)
      - support for ºclaims-based authenticationº
     ☞Qºclaims allow developers to be a lot more expressive in º
      Qºdescribing a user's identity than roles allowº.
      Qºwhereas role membership is just a boolean º
      Qºvalue (member or non-member), a claim can include rich informationº
      Qºabout the user's identity and membership. most social providersº
      Qºreturn metadata about the logged-in user as a series of claims.º

Bº└ app service authentication and authorizationº
    - built-in authentication and authorization
    - sign-in users and access data byºwriting minimal/no code in appº
    - authentication+authorization module runs in
     ºsame sandbox as your application codeº.
    -ºwhen enabled, every incoming http request passes through itº
      before being handled by your application code.

    - all authn/authz logic, including cryptography for token validation
      and session management,ºexecutes in the worker sandbox and outside ofº
     ºweb app codeº.
      - module is configured using app settings.
      - no SDKs/code-changes required.

   - identity information flows directly into the application code.
     - for all language frameworks, app service makes the user's
       claims available to your code by injecting them into the
       request headers.
         -ºclaimsPrincipal.currentºis populated with authenticated
           user's claims following the standard .NET code pattern.
           including the [authorize] attribute.
         - $_SERVER['REMOTE_USER'] is populated (e.g., for PHP).

Bº└ built-in token storeº
    - repository of tokens associated with app users and APIs;
      tokens are refreshed automatically.

implement multi-factor authentication
Bºin the world of security, it is often recommended to have      º
Bºtwo of the following factors:                                  º
Bº knowledge  – something that only the user knows (security     º
Bº              questions, password, or pin).                    º
Bº possession – something that only the user has (corporate      º
Bº              badge, mobile device, or security token).        º
Bº inherence  – something that only the user is (fingerprint,    º
Bº              face, voice, or iris).                           º

Azure multi-factor authentication (MFA) is built in to Azure AD.
administrators can configure approved authentication methods.
- two ways to enable MFA:
  -ºenable each user for MFAº: users will perform two-step
    verification each time they sign in (exceptions: trusted ip
    addresses, or when the remembered-devices feature is turned on).
  - set up aºconditional access policyºthat requires two-step
    verification under certain conditions. it uses the Azure AD
    identity protection risk policy to require two-step verification
    based only on the sign-in risk for all cloud applications.

-OºMS authenticator appº(available in Android/iOS public stores)
 └ can be used either as:
   - second verification method
   - replacement for a password when using phone sign-in.
 └ for verification in MFA it supports:
   - verification code
   - notification methods

-ºmulti-factor authentication SDKº:
  - allows building two-step verification directly into the sign-in
    or transaction processes of applications in your Azure AD tenant.
  - available for c#, visual basic (.NET), java, perl, php, and ruby.
Access Control
Claims-based authorization 
- claim: (String)name/(String)value pair that represents what the
         subject is and not what the subject can do.

Bºclaims-based authorizationº
  └ approach:
   ºgrant/deny authorization  decision  based on º
   ºarbitrary logic that uses data available in claims as input.º
    at its simplest, a check reads the value of a claim and
    decides based on that value.
  └ claim-basedºauthorization checks are declarativeº
    (embedded in controller|controller-action code)

  └ claims map to policies in asp.NET. Example:

    public void
    ConfigureServices(IServiceCollection services) {
        services.AddAuthorization(options => {
           options.AddPolicy                 // ← 1) Add policy
             (Gº"withIDOnly"º, policy =>
               policy.RequireClaim("ID"));   // ← 2) map claim to policy
            // policy.RequireClaim("ID", "1", "2")); // accept only values 1/2
        });
    }

    [Authorize(Policy = Gº"withIDOnly"º)]    // ← 3) Apply policy to whole class
    public class
    VacationController : Controller {
      public ActionResult
      VacationBalance()  { ... }

      [AllowAnonymous]                       // ← 4) For this method
      public ActionResult                    //      allow anyone in
      VacationPolicy() { ...  }
    }
RBAC authorization 
  - (RBAC) helps you manage who has access to ºAzure resourcesº
    (versus controller code), what can be done, and what areas
    they have access to.
  - ºbuilt on ARMº (A.Resource Manager)
  - ☞ best practice:
    Gºgrant users the least privilege to get work doneº
  - control access:
    role assignments:
      │security │ N ←·····│   role   │····→ M │scope│
      │principal│         │definition│
             ^                ^                  ^
    user, group,        - collection          Ex.: resourceGroup01
    service principal,    of permissions ...  scope level:
    managed identity                          - management group
    requesting access                         - subscription
    to Azure resources.                       - resource group
                                              - resource

  -Bºbuilt-in rolesº
     -Bºownerº        : full access to all resources (+delegation)
     -Bºcontributorº  : can create/manage all types of Azure resources
     -                  but ºcan't grantº access to others.
     -Bºreaderº       : can view existing Azure resources.
     -Bºuser access adminº: manage user access to A.resources.
    (other built-in roles exists to manage specific A resources,
     ex: VM contributor, ...)

  -BºDeny assignmentsº
     RBAC supports deny assignments in a limited way.
    ºcurrently, deny assignments are read-onlyºand
     can only be set by Azure.

  -BºRBAC steps to deny/grant accessº:
     - User (service principal) requests a token from
       A.AD, requesting access to ARM. Returned token
       includes the user's group memberships
       (and transitive group dependencies).
     - User calls ARM REST API with the token attached.
     - ARM retrieves all role/deny assignments applying to
       the resource upon which the action is being taken.
     - ARM determines what roles the user has for
       this resource.
     - ARM checks if requested REST API action is
       included in the user RBAC list for this resource.
       - If check fails access is not granted. (END)
     - ARM checks if a deny assignment applies.
       - If check applies access is not granted. (END)
     - At this point access is granted.
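The deny/grant steps above can be sketched as a toy evaluator. The role definitions and action names below are illustrative assumptions, not Azure's actual data model:

```python
# toy model of the ARM access check described above (illustrative only)
ROLE_DEFS = {
    "reader":      {"microsoft.storage/storageaccounts/read"},
    "contributor": {"microsoft.storage/storageaccounts/read",
                    "microsoft.storage/storageaccounts/write"},
}

def is_authorized(user_roles, deny_assignments, action):
    """True iff some assigned role allows 'action'
    and no deny assignment blocks it."""
    allowed = any(action in ROLE_DEFS[r] for r in user_roles)
    if not allowed:
        return False                 # no role grants the action -> END
    if action in deny_assignments:
        return False                 # a deny assignment applies -> END
    return True                      # access granted

ok  = is_authorized({"reader"}, set(), "microsoft.storage/storageaccounts/read")
no1 = is_authorized({"reader"}, set(), "microsoft.storage/storageaccounts/write")
no2 = is_authorized({"contributor"},
                    {"microsoft.storage/storageaccounts/write"},
                    "microsoft.storage/storageaccounts/write")
```

Note the ordering matters: deny assignments are only consulted after a role grants the action, mirroring the steps above.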

  -BºAD administrator rolesº:
   -ºclassic subscription admin rolesº
    ºaccount admin, service admin, and co-adminº
     - full access to the Azure subscription managing resources
       using the portal, resource manager APIs, and the classic
       deployment model APIs.
     -ºAzure sign-up accountº: automatically set as
      ºaccount + service adminº
       == user with 'owner' role @ scope:subscription
   -ºAzure RBAC rolesº
   -ºAzure ADºadministrator roles

   - classic subscription admin
     │             │ limit       │ permissions
     │account      │ 1 per Azure │ access the Azure account center
     │admin        │ account     │ manage all subscriptions in an account
     │"billing act"│             │ create new subscriptions
     │             │             │ cancel subscriptions
     │             │             │ change the billing for a subscription
     │             │             │ change the service administrator
     │             │             │ subscription owner
     │             │             Rºno access to the Azure portalº
     │service      │ 1 per Azure │ manage services in the portal
     │admin        │ subscription│ assign users to co─admin role
     │             │             Bºfull access to portalº
     │co─admin     │ 200 per     │ same as service admin, but can't
     │             │ subscription│ºchange the association of         º
     │             │             │ºsubscriptions to Azure directoriesº
     │             │             │ assign users to co─administrator role,
     │             │             │ but Rºcannot change service adminº

   - Azure AD adminºrolesºcontrol permissions to manage
     Azure AD resources.
     │ Azure RBAC roles                   │ Azure AD administrator roles  │
     │ manage access to Azureºresourcesº  │ manage access to Azure        │
     │                                    │ºAD resourcesº                 │
     │ supports custom roles              │ cannot create custom roles    │
     │ºscopeºcan be specified atºmultipleº│ scope is at the Rºtenant levelº│
     │ºlevelsº(management group,          │                               │
     │ subscription, resource group,      │                               │
     │ resource)                          │                               │
     │ role info can be accessed in       │ role info can be accessed in  │
     │ portal, cli, powershell,           │ Azure portal, M365 admin      │
     │ resource manager templates,        │ center, microsoft graph,      │
     │ REST API                           │ Azure AD powershell           │

  -BºREST API: list RBAC role assignmentsº:
     - PRE-SETUP: role with
      ºmicrosoft.authorization/roleassignments/readºpermission
       (at the specified scope) needed toºcall the APIº
     GET https://management.azure.com/{scope}/providers/microsoft.authorization
         /roleassignments?API-version=2015-07-01⅋$filter={filter}
     {scope} can be:
   - subscriptions/{subscriptionid}
   - subscriptions/{subscriptionid}/resourcegroups/myresourcegroup1
   - subscriptions/{subscriptionid}/resourcegroups/myresourcegroup1/...
   - {filter}: (optional) condition to apply to the role assignment list:
      $filter=atscope()                       list assignments at the given
                                              scope only, no subscopes.
      $filter=principalid%20eq%20'{objectid}' list role assignments for a
                                              user/group/service principal.
      $filter=assignedto('{objectid}')        list role assignments for a
                                              specified user, including ones
                                              inherited from groups.
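A small sketch building the list URL above; the subscription id and filter values are placeholders:

```python
from urllib.parse import quote

ARM = "https://management.azure.com"

def list_role_assignments_url(scope, flt=None, api_version="2015-07-01"):
    """URL for GET .../roleAssignments at a given scope, with optional $filter."""
    url = ("%s/%s/providers/Microsoft.Authorization/roleAssignments"
           "?api-version=%s" % (ARM, scope.strip("/"), api_version))
    if flt:
        url += "&$filter=" + quote(flt, safe="()'")
    return url

url = list_role_assignments_url(
    "subscriptions/0000/resourcegroups/myresourcegroup1",
    flt="atScope()")
```

The same builder works for any of the three scope shapes listed above, since the scope is simply spliced into the path.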
  -BºREST API: grant RBAC accessº
     PUT params:
     - security principal
     - role definition
     - scope.
     - pre-setup:
       - role withºmicrosoft.authorization/roleassignments/writeº
         permission enabled (at the specified scope) toºcall the APIº.
       - aºnew uuid for the role assignmentº
         (format: 8-4-4-4-12 hex digits).

      note: use the role-definitions list API (or 'az role definition list')
            to get the identifier of the role definition you want to assign.

      PUT request body:
        {
          "properties": {
            "roledefinitionid": "{roledefinitionid}",
            "principalid": "{principalid}"
          }
        }
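The grant call above can likewise be sketched; the GUID-like ids below are placeholders:

```python
import json
import uuid

def create_role_assignment_request(scope, role_definition_id, principal_id):
    """Return (url, body) for the PUT that grants RBAC access.
    A fresh uuid names the new role assignment."""
    name = str(uuid.uuid4())             # new uuid for the role assignment
    url = ("https://management.azure.com/%s/providers/Microsoft.Authorization"
           "/roleAssignments/%s?api-version=2015-07-01"
           % (scope.strip("/"), name))
    body = json.dumps({"properties": {
        "roleDefinitionId": role_definition_id,
        "principalId": principal_id}})
    return url, body

url, body = create_role_assignment_request(
    "subscriptions/0000/resourcegroups/group01",
    "/subscriptions/0000/providers/Microsoft.Authorization/roleDefinitions/1111",
    "2222")
```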

  -BºREST API: remove RBAC accessº
     - send a DELETE to the same roleassignments URL
       (scope + role assignment name) used to create it.

asp.NET RBAC
- declarative role-based authorization in controllers.
- roles are exposed through theºClaimsPrincipal::IsInRole(..) methodº.

  º[Authorize(Roles = "hrmanager,finance")]º  // ← one of the roles
  public class SalaryController :ºControllerº{ ... }

  º[Authorize(Roles = "poweruser")]º          // ←┬─ all of the roles
  º[Authorize(Roles = "controlpaneluser")]º   // ←┘
  public class ControlPanelControllerº: Controllerº{

    º[Authorize(Roles = "administrator")]º    // ← add required role for
    public ActionResult Shutdown() { ... }    //   method

    º[AllowAnonymous]º                        // ← anonymous access
    public ActionResult Login() { ... }
  }

- usingºrole-policy-syntaxº, the developerºregisters a policyº
 ºat startupºas part of the authorization service configuration.

  .../ºstartup.csº
  public void ConfigureServices(IServiceCollection services) {
    services.AddMvc();
    services.AddAuthorization(options => {
      options.AddPolicy("requireAdminRole",
        policy => policy.BºRequireRoleº("administrator"));
      options.AddPolicy("elevatedRights",
        policy => policy.BºRequireRoleº("backup", "root"));
    });
  }

  .../*Controller.cs:
  º[Authorize(Policy = "requireAdminRole")]º
  public IActionResult Shutdown() { ... }

☞ claims-based authorization and role-based authorization can be
  used together. it is typical to see the role defined as a special
  claim; the role claim type is expressed using the URI:
  http://schemas.microsoft.com/ws/2008/06/identity/claims/role
Secure Data
Encryption options
BºEncryption at REST:º
  - encryption of dataºwhen it is persistedº.
  - attacks against data at REST include attempts to obtain
    physical access to the hardware on which the data is stored and to
    then compromise the contained data.

  - also required for data governance and compliance efforts.
    industry and government regulations, such as the
    health insurance portability and accountability act (hipaa),
    pci dss, and federal risk and authorization management program
    (fedramp), lay out specific safeguards regarding data protection
    and encryption requirements.
  - Azure use symmetric encryption.
    - data may be partitioned, and different keys may be used
      for each partition.
    - keys must be stored in a security-enhanced location with access
      control policies limiting access to certain identities and logging
      key usage.
      data encryption keys are often encrypted with asymmetric
      encryption to further limit access.

  - Azure storage encryption
    - all Azure storage services (BLOB storage, queue storage, table
      storage, and Azure files) support server-side encryption at REST,
      with some services supporting customer-managed keys and
      client-side encryption.
    - all Azure storage services enable server-side encryption
      by default using service-managed keys, which is transparent
      to the application.

    - storage service encryption is enabled for all new and existing
      storage accounts andºcannot be disabledº. because your data is
      security enhanced by default, you don't need to modify your code or
      applications to take advantage of storage service encryption.

BºAzure SQL database encryptionº
  - support for microsoft-managed-encryption at REST for:
    - server-side provided throughBº"transparent data encryption"(TDE)º:
      - enabled by default at creation.
      - keys are automatically created and managed by default.
        RSA 2048-bit customer-managed keys in Azure key-vault supported.
      - it can be enabled at levels:
        - database
        - server
    - client-side

BºAzure Cosmos DB encryptionº
- Cosmos DB automatically encrypts all databases,
  media attachments and backups.

end-to-end encryption
BºTransparent Data Encryption (TDE)º
  - encrypts SQL server, A.SQL DDBB, and A.SQL data warehouse data files.
  - performs real-time I/O en/de-cryption of data and log files.
    encryption of the database file is performed at the page level.
  - encryption uses a DDBBºencryption keyº(DEK), stored in the
    database boot record for availability during recovery.
   ºDEKºis either:
    -ºsymmetric keyºsecured with a certificate stored in the master
      database of the server (AES/3DES encryption algorithms supported)
    -ºasymmetric keyºprotected by an extensible-key-management (EKM)
      module.

Bº"Always Encrypted"º(client-side, complementary to TDE)
  -ºallows clientsºto encrypt data inside client apps without
    revealing encryption keys to the DDBB engine.
  - protects sensitive data (credit cards, ...):
    - at REST on the server
    - during movement between client and server
    - while the data is in use
  -OºIt helps to ensure that on-premises database administrators,º
   Oºcloud database operators, or other highly privileged butº
   Oºunauthorized users cannot access the encrypted data,º
   Oºallowing to delegate DDBB administration to third partiesº
   Oºand reduce security clearance requirements for databaseº
   Oºadministrators.º
  ☞ it requires a specialized driver installed on the client computer
    to automatically encrypt and decrypt sensitive data in the client
    application.
  - for many applications, this does require some code changes.
    this is in contrast to TDE, which only requires a change to the
    application's connection string.
A.Confidential computing
  - set of features available in many Azure services that encrypt
   ºdata in useº.
  - designed for scenarios where data is processed in the cloud
    while still protecting it from being viewed in plaintext.
  - collaborative project between hardware vendors like intel and
    software vendors like microsoft.
  - it ensures that when data is "in the clear," it is protected
    inside aOºtrusted execution environment (TEE)º.
    TEEs ensure that there is no way to view data or operations
    inside from the outside,ºeven with a debuggerº, and thatºonlyº
   ºauthorized code is permitted to access dataº. if the code is
    altered or tampered with, operations are denied and the
    environment disabled.
  ☞ note: TEEs are commonly referred to asBºenclavesº.
  - TEEs are exposed in multiple ways:
    -ºhardware  º: Intel SGX technology
    -ºsoftware  º: Intel SGX SDK and third-party enclave APIs
    -ºservices  º: many Azure services, such as Azure SQL database,
                   already execute code in TEEs.
    -ºframeworksº: the microsoft research team has developed
                   frameworks, such as the confidential consortium
                   blockchain framework, to help jumpstart new
                   projects that need to run in TEEs.

A.Key vault: Security-enhanced secrets store.
  - vaults are backed by hardware security modules (HSMs).
  - vaults also control and log the access to anything stored in them.
  - designed to support any type of secret:
    (passwords, credentials, API keys, certificates, ...).
  - handles renewal and lifecycle management of tls certificates.

  $ az keyvault create \             ← create new vault
    --name contosovault \
    --resource-group securitygroup \
    --location westus
    (write down the vault URI)

  $ az keyvaultºsecret setº\         ← add a secret
    --vault-name contosovault \
    --name databasepassword \
    --value 'pa5w.rd'

  $ az keyvaultºsecret showº\        ← view value
    --vault-name contosovault \
    --name databasepassword

  Manage Certificates:
  $ az keyvault certificate delete --vault-name contosovault --name
  $ az keyvault certificate purge  --vault-name contosovault --name
monitor, troubleshoot, and optimize
Azure monitor
ºAzure monitorº: single integrated experience for
   - log analytics          - Azure resources
   - application insights   - hybrid environments
                   ┌─ Azure monitor ──────────────────────────────┐
                   │              ┌ ┌─insights───────────────────┐│
                   │              │ │-application     -VM        ││
                   │              │ │-container       -monitoring││
                   │              │ │                  solutions ││
                   │              │ └────────────────────────────┘│
                   │              │ ┌─visualize──────────────────┐│
    application ┐  │  ┌────────┐  │ │ -dashboard     -powerBI    ││
             OS ┤  │  │ metrics│  │ │ -views         -workbooks  ││
                │  │  │ DDBB   │  │ └────────────────────────────┘│
    a.resources ┤  │  │        │  │ ┌─analyze────────────────────┐│
                ├────→┤        │─→┤ │ -metrics       -log        ││
 a.subscription ┤  │  │        │  │ │ -analytics     -analytics  ││
                │  │  │ logs   │  │ └────────────────────────────┘│
       a.tenant ┤  │  │        │  │ ┌─respond────────────────────┐│
                │  │  └─^──────┘  │ │ -alerts        -autoscale  ││
 custom sources ┘  │    |         │ └────────────────────────────┘│
                   │    |         │ ┌─integrate──────────────────┐│
                   │    |         │ │ -event -logics -ingest⅋    ││
                   │    |         │ │ -hubs  -apps    export APIs││
                   │    |         └ └────────────────────────────┘│
     ºMETRICSº                       ºLOGSº:
    - numerical values describing   - different kinds of data organized into records
      some aspect of a system atºa    with different sets of properties for each type.
      particular point in time.º    - telemetry (events, traces,...) are stored
    - lightweight (near real-time     as logs in addition to performance data so that
      scenarios)                      it can all be combined for analysis.
                                    - stored in log analytics:
                                     ºrich query languageºto quickly retrieve,
                                      consolidate, and analyze collected data.

- extend the resource-collected-data into the actual operation by:
  1) enabling diagnostics
  2) adding an agent to compute resources.
  this will collect telemetry for the internal operation of
  the resource and allow you to configure different data sources to
  collect logs and metrics from windows and linux guest OSes.

BºApplication Insightsº(web applications)       BºAzure monitor VM insightsº
  - addºinstrumentation packageº                 - analyze performance and health of Win/Linux VMs
  - monitors availability , performance, usage     including:
  - cloud orºon-premisesº                          - processes
  - integrates with visual studio                  - interconnected dependencies on other resources
                                                     and external processes.
                                                 - Cloud andºon-premisesº

BºAzure monitor for containersº                 BºMonitoring Solutionsº
  - automatically collected through               - packaged sets of logic that provide insights for
    (linux)log-analytics-agent.                     a particular application or service.
  - monitor the performance of managed-AKS        - available from Microsoft and partners.
    container workloads
  - collects:
    - memory/processor metrics from controllers,
      nodes, and containers.
    - container logs are also collected.

BºAlertsº                                          BºAutoscaleº
  - alertºrules based on metricsº: near real time  - autoscale rules use Azure monitor metrics
    alerting based on: numeric values              - you specify a minimum/maximum number of instances
  - alertºrules based on logsº: complex logic        and the logic for when to increase or decrease resources.
    based on: data from multiple sources.
                                                   - data sources that can be organized into tiers:
  - alert rules useºaction groupsº:                  highest tiers: application and OSs
    - they contain unique sets of recipients and     lower   tiers: Azure platform components.
      actions that can be shared across multiple
      ex actions:
      - using webhooks to start external actions
      - integrate with external itsm tools.

- Azure tenant  : Azure active directory like data.

- Azure platform: Azure subscription data, audit-logs from
                  Azure active directory.

- guest OS      : (need agent-install for on-premises machines)

- applications  : (application insights)

- custom sources: log dataºfrom any REST clientº
                  usingºdata collector APIº

- APM Service   : Application Performance Management (APM)
                 ºfor web developersº.

  - target users:ºdevelopment teamº
  - request rates.
  - response times.
  - failure rates
  - popular pages
  - day peaks
  - dependency rates
  - exceptions: analyse aggregated statistics or
                specific instances
    - dump stack trace.
    - both serverºand browserºexceptions are reported.
  - load performance reported byºclient browsersº.
  - OS perf.counters (cpu, memory, network usage)
  - diagnostic trace logs from your app to
   ºcorrelate trace events with requestsº
  - custom events from client/server ºto track business eventsº
    such as items sold,...
Alerts in Azure
Azure monitorºunifiedºalert: (vs classic alerts)
- it includesºLog analytics and application Insightsº

    ┌ºAlert ruleº ────────────────────────────────────────────────┐
    │─ Separated from alerts and actions                          │
    │  (target_resource,alerting criteria)                        │
    │─ Attributes:                                                │
    │  ─ºstateº                : enabled│disabled                 │
    │                                                             │
    │  ─ºtarget resource signalº: Observed VMs, containers       │
    │                     └────────────┐                          │
    │  ─ºcriteria/Logic testº : ┌─── emitted signal ────┐        │
    │                            percentage cpu            ˃ 70%  │
    │                            server response time      ˃ 4 ms │
    │                            result count of log query ˃ 100  │
    │                            log search query          ...    │
    │                            activity log events       ...    │
    │                            A.platform health         ...    │
    │                            web site availability     ...    │
    │                                                             │
    │  ─ alert name     : user─configured                         │
    │  ─ alert descript :                                         │
    │  ─ severity       : 0 to 4                                  │
    │  ─ action         : a specific action taken when the alert  │
    │                     is fired. (see action groups).          │
    └─┬──────────────────────┬────────────────────────────────────┘
  ┌───v───────────┐  ┌───────v─────┐
  │ACTION GROUP   │  │ condition   │
  │               │  │ monitoring  │
  │- actions to do│  │ºalert stateº←- state changes stored in alert-history.
  └───────────────┘  └─────────────┘

OºAlert States:º
- set and changed by the user (vsºmonitor conditionº, set and cleared by system)
  │state       │description             │
  │new         │issue detected, not yet │
  │            │reviewed                │
  │acknowledged│admin reviewed alert and│
  │            │started working on it   │
  │closed      │issue resolved. it can  │
  │            │be reopened             │

create an alert rule
 1) pick the target for the alert.
 2) select a signal from available ones for target.
 3) specify logic to be applied to data from the signal.
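Conceptually, an alert rule ties a target, a signal, and a logic test together, and the alert's monitor condition is set and cleared by the system. A minimal Python sketch of that model (all names here are illustrative, not a real Azure SDK):

```python
# hypothetical sketch of an Azure-monitor-style alert rule evaluation;
# class and attribute names are illustrative only.
class AlertRule:
    def __init__(self, name, signal, threshold, severity=3, enabled=True):
        self.name = name            # user-configured alert name
        self.signal = signal        # e.g. "percentage cpu"
        self.threshold = threshold  # criteria: signal value > threshold
        self.severity = severity    # 0 (critical) .. 4 (verbose)
        self.enabled = enabled      # state: enabled|disabled
        self.monitor_condition = "resolved"  # set/cleared by the system

    def evaluate(self, value):
        """logic test: fire when the emitted signal crosses the threshold."""
        if not self.enabled:
            return False
        fired = value > self.threshold
        self.monitor_condition = "fired" if fired else "resolved"
        return fired

rule = AlertRule("high-cpu", "percentage cpu", threshold=70)
print(rule.evaluate(85))  # cpu at 85% > 70% -> True, the alert fires
```

The user-managed alert state (new / acknowledged / closed) would live alongside this system-managed monitor condition, as described above.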
Disaster Recovery
Site Recovery
Site Recovery service helps ensure business continuity
by keeping business apps and workloads running during outages. Site
Recovery replicates workloads running on physical and virtual
machines (VMs) from a primary site to a secondary location. When an
outage occurs at your primary site, you fail over to the secondary
location, and access apps from there. After the primary location is
running again, you can fail back to it.

Backup Serv.
Backup service: The Azure Backup service keeps your data safe and
recoverable by backing it up to Azure.
apps and services scalability
common autoscale patterns
Bºmonitor autoscale currently applies only to virtualº
Bºmachine scale sets, cloud services, app service -  º
Bºweb apps, and API management services.             º

- autoscale settings can be set to be triggered based on:
  - metrics of load/performance.
  - scheduled date and time.

ºautoscale setting schemaº
 -Bº1 profileº
 -  2 metric rules in profile:
    - scale out (avg. cpu% ˃ 85% for past 10 minutes)
    - scale in  (avg. cpu% ˂ 60% for past 10 minutes)
   | "id": "/subscriptions/s1/resourcegroups/rg1/providers/microsoft.insights/autoscalesettings/setting1",
   | "name": "setting1",
   | "type": "microsoft.insights/autoscalesettings",
   | "location": "east us",
   | "properties": {
   | · "enabled": true,
   | · "targetresourceuri": "/subscriptions/s1/resourcegroups/rg1/providers/microsoft.compute/virtualmachinescalesets/vmss1",
   | Bº"profiles":º[
   | · | {
   | · | · "name": "mainprofile",
   | · | · "capacity": {
   | · | · · "minimum": "1",
   | · | · · "maximum": "4",
   | · | · · "default": "1"
   | · | · },
   | · | ·º"rules":º[
   | · | · | {
   | · | · | ·º"metrictrigger":º{
   | · | · | ·   "metricname": "percentage cpu",
   | · | · | ·   "metricresourceuri": "/subscriptions/s1/resourcegroups/rg1/providers/microsoft.compute/virtualmachinescalesets/vmss1",
   | · | · | ·   "timegrain": "pt1m",
   | · | · | ·   "statistic": "average",
   | · | · | ·   "timewindow": "pt10m",
   | · | · | ·   "timeaggregation": "average",
   | · | · | ·   "operator": "greaterthan",
   | · | · | ·   "threshold": 85
   | · | · | · },
   | · | · | · "scaleaction": {
   | · | · | ·   "direction": "increase",
   | · | · | ·   "type": "changecount",
   | · | · | ·   "value": "1",
   | · | · | ·   "cooldown": "pt5m"
   | · | · | · }
   | · | · | },
   | · | · | {
   | · | · | ·º"metrictrigger":º{
   | · | · | ·   "metricname": "percentage cpu",
   | · | · | ·   "metricresourceuri": "/subscriptions/s1/resourcegroups/rg1/providers/microsoft.compute/virtualmachinescalesets/vmss1",
   | · | · | ·   "timegrain": "pt1m",
   | · | · | ·   "statistic": "average",
   | · | · | ·   "timewindow": "pt10m",
   | · | · | ·   "timeaggregation": "average",
   | · | · | ·   "operator": "lessthan",
   | · | · | ·   "threshold": 60
   | · | · | · },
   | · | · | · "scaleaction": {
   | · | · | ·   "direction": "decrease",
   | · | · | ·   "type": "changecount",
   | · | · | ·   "value": "1",
   | · | · | ·   "cooldown": "pt5m"
   | · | · | · }
   | · | · | }
   | · | · ]
   | · | }
   | · ]
   | }

│ section      │ element name     │ description                                                         │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ setting      │ id               │                                                                     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ setting      │ name             │                                                                     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ setting      │ location         │ location can be different from the location of resource being scaled│
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ properties   │ targetresourceuri│ resource id  being scaled. 1 autoscale setting max per resource     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ properties   │  profiles        │ 1+ profiles. autoscale engine runs/executes on one profile          │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ profile      │ name             │                                                                     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ profile      │ capacity.maximum │ it ensures that when executing this profile, does not scale         │
│              │                  │ resource above number                                               │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ profile      │ capacity.minimum │ it ensures that when executing this profile, does not scale         │
│              │                  │ resource below number                                               │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ profile      │ capacity.default │ if there is a problem reading the resource metric  and current      │
│              │                  │ capacity is below the default, scales out to default.               │
│              │                  │ if current capacity is higher, it does not scale in.                │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ profile      │  rules           │ 1...n per profile                                                   │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ rule         │ metrictrigger    │                                                                     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│ metricname       │                                                                     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│metricresourceuri │ resource id of resource that emits the metric. (probably same as    │
│              │                  │ resource being scaled)                                              │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│timegrain         │ metric sampling duration. ex "pt1m" = aggregated every 1 minute     │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│statistic         │ aggregation method used in timegrain. ex: "average"                 │
│              │                  │ average|minimum|maximum|total                                       │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│timewindow        │ amount─of─time to look─back for metrics.                            │
│              │                  │ ex: "pt10m" == "every time autoscale runs, query metrics for past   │
│              │                  │      10min"(it avoids reacting to transient spikes)                 │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ metrictrigger│ timeaggregation  │ aggregation method used to aggregate the sampled metrics.           │
│              │                  │ "average" == aggregate sampled metrics taking the average.          │
│              │                  │ average|minimum|maximum|total                                       │
│ ─────────────┼──────────────────┼─────────────────────────────────────────────────────────────────────│
│ rule         │ scaleaction      │ action to take when (metrictrigger of) the rule is triggered.       │
│ scaleaction  │ direction        │ "increase"(scale out)│"decrease"(scale in)                          │
│ scaleaction  │ value            │ how much to in/de─crease resource capacity                          │
│ scaleaction  │ cooldown         │ time─to─wait after a scale operation before scaling again.          │
│              │                  │ ex: "pt10m" := "do not attempt to scale again for another 10min".   │
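The metrictrigger fields above combine as: sample the metric per timegrain, aggregate the samples over the timewindow with timeaggregation, then apply operator/threshold. An illustrative Python sketch of that evaluation (not the real autoscale engine):

```python
# illustrative sketch of how a metric trigger is evaluated: 1-minute
# samples ("timegrain": pt1m) are aggregated over the look-back window
# ("timewindow": pt10m, "timeaggregation": average) and the aggregate
# is compared against the threshold.
def evaluate_metric_trigger(samples, time_aggregation, operator, threshold):
    aggregate = {
        "average": lambda s: sum(s) / len(s),
        "minimum": min,
        "maximum": max,
        "total":   sum,
    }[time_aggregation]
    value = aggregate(samples)
    compare = {
        "greaterthan": lambda v: v > threshold,
        "lessthan":    lambda v: v < threshold,
    }[operator]
    return compare(value)

# ten 1-minute cpu samples covering a pt10m window:
cpu = [80, 90, 88, 86, 92, 85, 87, 84, 89, 91]
# scale-out rule from the JSON above: avg cpu% > 85 over past 10 minutes
print(evaluate_metric_trigger(cpu, "average", "greaterthan", 85))  # True (avg = 87.2)
```

Averaging over the whole window (rather than reacting to a single sample) is what avoids reacting to transient spikes.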
ºautoscale profiles typesº

- regular profile: you don’t need to scale your resource based on
                   date, day-of-week...
                   youºshould only have one regular profile definedº

- fixed date profile: ex:  important event coming up on
                   december 26, 2017 (pst).

- recurrence profile:  day of the week, ...
  - they only have a start time, and run until the next
     recurrence profile or fixed date profile is set to start.

    "profiles": [
    · { "name": " regularprofile",
    ·   "capacity": { ...  }, "rules": [{ ...  }, { ...  }]
    · },
    · { "name": "eventprofile",
    ·   "capacity": { ...  },
    ·   "rules": [{ ...  }, { ...  }],
    ·   "fixeddate": {
    ·       "timezone": "pacific standard time",
    ·          "start": "2017-12-26t00:00:00",
    ·            "end": "2017-12-26t23:59:00"
    ·   }
    · },
    · { "name": "weekdayprofile",
    ·   "capacity": { ...  },
    ·   "rules": [{ ...  }],
    ·   "recurrence": {
    ·       "frequency": "week",
    ·       "schedule": { "timezone": "pacific standard time", "days": [ "monday" ], "hours": [0], "minutes": [0] }
    ·   }
    · },
    · { "name": "weekendprofile",
    ·   "capacity": { ...  },
    ·   "rules": [{ ...  }]
    ·   "recurrence": {
    ·       "frequency": "week",
    ·       "schedule": { "timezone": "pacific standard time", "days": [ "saturday" ], "hours": [0], "minutes": [0] }
    ·   }
    · }

ºautoscale profile evaluationº
1) looks for any fixed date profile that is configured to run now.
   if multiple fixed date profiles are supposed to run,
   the first one is selected.
2) looks at recurrence profiles.
3) runs regular profile.
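The three-step evaluation order above can be sketched as a small selection function (a hypothetical data model, not the real autoscale service): fixed-date profiles win, then recurrence profiles, then the regular profile.

```python
# sketch of autoscale profile selection; "now_matches" stands in for
# checking whether a profile's schedule covers the current time.
def pick_profile(profiles, now_matches):
    for kind in ("fixeddate", "recurrence"):   # 1) fixed date, 2) recurrence
        for p in profiles:
            if kind in p and now_matches(p):
                return p["name"]               # first matching profile wins
    for p in profiles:
        if "fixeddate" not in p and "recurrence" not in p:
            return p["name"]                   # 3) fall back to regular profile

profiles = [
    {"name": "regularprofile"},
    {"name": "eventprofile",   "fixeddate":  {"start": "2017-12-26"}},
    {"name": "weekdayprofile", "recurrence": {"days": ["monday"]}},
]
print(pick_profile(profiles, lambda p: "fixeddate" in p))  # on the event day
print(pick_profile(profiles, lambda p: False))             # nothing scheduled
```

On the event day this returns "eventprofile"; with no schedule matching, it falls back to "regularprofile".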

→ Azure portal → Azure monitor icon (left nav.pane)
  → click "autoscale setting"
    → open "autoscale blade"
      → select a resource to scale
        → click "enable autoscale"
          → scale setting for new web app
            fill in "name" and click
            → add a rule.
              → change default metric source to "application insights"
                (web app with application insights has
                been configured previously)
                → select the app insights resource in the dropdown
                  → select custom metric based
                    → ...

- a resource can have only one autoscale setting
- all autoscale failures are logged to the activity log.
  activity log alert can be configured to notify via
  email, sms, or webhooks whenever there is an autoscale success/failure.

ºbest practicesº
- ensure the max/min values are different and with adequate margin
- manual scaling is reset by autoscale min and max

- let's look at an example of what can lead to a behavior that may seem
  confusing. consider the following sequence.
  assume there are two instances to begin with and then the
  average number of threads per instance grows to 625.
 - autoscale scales out adding a third instance.
 - next, assume that the average thread count across instances falls to 575.
 - before scaling down, autoscale tries to estimate what the final
   state will be if it scaled in. for example, 575 x 3 (current instance
   count) = 1,725 threads; 1,725 / 2 (instances remaining after scale-in)
   = 862.5 threads per instance. this means autoscale would have to immediately
   scale-out again even after it scaled in, if the average thread count
   remains the same or even falls only a small amount. however, if it
   scaled up again, the whole process would repeat, leading to an
   infinite loop.
 - to avoid this situation (termed "flapping"), autoscale does not
   scale down at all. instead, it skips and reevaluates the condition
   again the next time the service's job executes. this can confuse many
   people because autoscale wouldn't appear to work when the average
   thread count was 575.

 - estimation during a scale-in is intended to avoid “flapping”
   situations, where scale-in and scale-out actions continually go back
   and forth. keep this behavior in mind when you choose the same
   thresholds for scale-out and in.
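The flapping example above reduces to simple arithmetic. A sketch of the estimate (illustrative only; the scale-out threshold of 600 threads is an assumption implied by the example, where 625 triggered a scale-out):

```python
# before scaling in, estimate the per-instance load after removing one
# instance; skip the scale-in if the estimate would immediately
# re-trigger a scale-out (i.e. avoid "flapping").
def should_scale_in(avg_per_instance, instances, scale_out_threshold):
    projected = avg_per_instance * instances / (instances - 1)
    return projected < scale_out_threshold

# 3 instances averaging 575 threads each; assumed scale-out threshold 600:
projected = 575 * 3 / 2
print(projected)                      # 862.5 threads per remaining instance
print(should_scale_in(575, 3, 600))  # False -> autoscale skips the scale-in
```

Since 862.5 is well above the scale-out threshold, removing an instance would immediately trigger a scale-out again, so autoscale skips the scale-in and re-evaluates on its next run.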

- autoscale will post to the activity log if any of the following
  conditions occur:
  - autoscale issues a scale operation
  - autoscale service successfully completes a scale action
  - autoscale service fails to take a scale action.
  - metrics are not available for autoscale service to make a scale decision.
  - metrics are available (recovery) again to make a scale decision.

singleton app instances patterns
querying resources using Azure cli
$ az ... --query $jmespath
                 ^^^^^^^^^
                 JSON query language
- jmespath queries areºexecuted on the JSON outputºbefore
  any other display formatting is applied.
- but there's never an order guarantee from the Azure cli.
- to make multivalue array outputs easier to query, the
  jmespathº[] operatorº can be used to flatten output

ex:
$ az VM list
→ [ ... hundreds of lines ... ]

$ az VM list --query \
  '[].{name:name, image:storageprofile.imagereference.offer}'
      └──────────────────────┬───────────────────────────┘
       º{...} projection operatorº
→ [
→   { "image": "ubuntuserver",  "name": "linuxvm" },
→   { "image": "windowsserver", "name": "winvm" }
→ ]

$ az VM list --query \
  "[?starts_with(storageprofile.imagereference.offer, 'windowsserver')]"
   └───────────────────────────┬──────────────────────────────────────┘
    º[...] filter operatorº: filter result set by comparing
    JSON property values

$ az VM list --query \
  "[?starts_with(storageprofile.imagereference.offer, 'ubuntu')].{name:name, id:vmid}"
   └──────────────────────────┬───────────────────────┘ └──────────┬─────────┘
    [..] filter and {..} projection combined
→ [
→   {
→     "name": "linuxvm",
→     "id": "6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c"
→   }
→ ]

// step 1. create authenticated client
Azure Azure01 = Azure.authenticate("Azure.auth").withdefaultsubscription();
                └──────┬───────┘   └────┬─────┘
  static method returning an      authorization file, contains
  object that can fluently        info for the service-principal.
  query resources and access
  their metadata for the
  subscription.
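For illustration, the jmespath projection and filter can be mimicked in plain Python over a hypothetical sample of the JSON that `az VM list` emits (the sample records and the winvm id are made up):

```python
# plain-python equivalent of the jmespath queries shown in this section,
# run against a small hypothetical sample of `az VM list` output:
vms = [
    {"name": "linuxvm", "vmid": "6aed2e80-64b2-401b-a8a0-b82ac8a6ed5c",
     "storageprofile": {"imagereference": {"offer": "ubuntuserver"}}},
    {"name": "winvm", "vmid": "00000000-0000-0000-0000-000000000000",
     "storageprofile": {"imagereference": {"offer": "windowsserver"}}},
]

# '[].{name:name, image:storageprofile.imagereference.offer}'  (projection)
projection = [{"name": v["name"],
               "image": v["storageprofile"]["imagereference"]["offer"]}
              for v in vms]

# "[?starts_with(..., 'ubuntu')].{name:name, id:vmid}"  (filter + projection)
ubuntu = [{"name": v["name"], "id": v["vmid"]} for v in vms
          if v["storageprofile"]["imagereference"]["offer"].startswith("ubuntu")]

print(projection)
print(ubuntu)
```

The real `--query` evaluation happens inside the Azure cli (via the jmespath library), but the semantics are the same: flatten, filter, then reshape each element.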
| "Azure.auth" example: v | { | "clientid": "b52dd125-9272-4b21-9862-0be667bdf6dc", | "clientsecret": "ebc6e170-72b2-4b6f-9de2-99410964d2d0", | "subscriptionid": "ffa52f27-be12-4cad-b1ea-c2c241b6cceb", | "tenantid": "72f988bf-86f1-41af-91ab-2d7cd011db47", | "activedirectoryendpointurl": "", | "resourcemanagerendpointurl": "", | "activedirectorygraphresourceid": "", | "sqlmanagementendpointurl": "", | "galleryendpointurl": "", | "managementendpointurl": "" | } note: you can generate a (a)ctive (d)irectory (s)ervice (p)rincipal like: $ az AD sp create-for-RBAC --SDK-auth ˃ "Azure.auth" step 2) use the API: var VMs = Azure01.virtualmachines; // alt 1. sync var VMs = await Azure01.virtualmachines.listasync();// alt 2. async foreach(var VM in VMs) { console.writeline(; } // filter out using linq: ivirtualmachine targetvm01 = VMs.where( VM =˃ == "simple").singleordefault(); console.writeline(targetvm?.id); inetworkinterface targetnic01 = targetvm01.getprimarynetworkinterface(); inicipconfiguration targetipconfig01 = \ targetnic01.primaryipconfiguration; ipublicipaddress ipaddr01 = targetipconfig01.getpublicipaddress(); console.writeline($"ip address:\t{ipaddr01.ipaddress}");
transient faults patterns
apps can handle transient errors following theseOºstrategiesº:
-Oºcancelº           : failure isn't transient.
-Oºretryº            : unusual/rare failure (ex: network packet
                       corrupted, ...). retry immediately.
-Oºretry after delayº: connectivity/busy errors, ... for more common
                       transient failures, the period between retries
                       should be chosen to spread requests from multiple
                       instances of the application as evenly as possible.

if the request is unsuccessful after a predefined number of attempts,
the application should treat the fault as an exception and handle it
accordingly.

c# ex. implementation:

private int retrycount = 3;
private readonly timespan delay = timespan.fromseconds(5);

/// invokes external service asynchronously through
/// the transientoperationasync method.
public async task operationwithbasicretryasync() {
  int currentretry = 0;
  for (;;) {
    try {
      await transientoperationasync();
      break;
    } catch (exception ex) {
      trace.traceerror("operation exception");
      currentretry++;
      if (currentretry > this.retrycount || !istransient(ex)) {
        throw;
      }
    }
    await task.delay(delay);
  }
}

private bool istransient(exception ex) {
  if (ex is operationtransientexception)
    return true;
  var webexception = ex as webexception;
  if (webexception != null) {
    return new[] {
      webexceptionstatus.connectionclosed,
      webexceptionstatus.timeout,
      webexceptionstatus.requestcanceled
    }.contains(webexception.status);
  }
  return false;
}
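The same retry-after-delay loop, sketched in Python (the flaky operation and the transient exception type are stand-ins for a real external call):

```python
import time

class TransientError(Exception):
    """stand-in for a transient fault (busy service, dropped packet, ...)."""
    pass

def retry_with_delay(operation, is_transient, retries=3, delay=0.01):
    # retry-after-delay strategy: retry transient faults a fixed number
    # of times, re-raising immediately for non-transient faults.
    attempt = 0
    while True:
        try:
            return operation()
        except Exception as ex:
            attempt += 1
            if attempt > retries or not is_transient(ex):
                raise  # treat the fault as a real exception
        time.sleep(delay)  # space retries out instead of hammering

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("busy")  # first two calls fail transiently
    return "ok"

print(retry_with_delay(flaky, lambda e: isinstance(e, TransientError)))  # ok
```

In production the delay would typically grow per attempt (exponential backoff with jitter) so that many instances don't retry in lockstep.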
instrument for monit⅋log
App.Insights: config instrumentation
application insights can be used with any web pages

insert the following script into each page you want to track.
place this code immediately before the closing ˂/head˃ tag,
and before any other scripts. your first data will appear
automatically in just a few seconds.

˂script type="text/javascript"˃
  // standard application insights javascript loader (abbreviated here;
  // copy the full minified snippet from the Azure portal). it creates a
  // stub appinsights object, queues track* calls ("trackevent",
  // "trackexception", "trackmetric", "trackpageview", "tracktrace",
  // "trackdependency") and asynchronously injects the full SDK script.
  var appinsights = window.appinsights || function(config){
    ...
  }({
    instrumentationkey: "your_instrumentation_key"
  });
  window.appinsights = appinsights;
  appinsights.trackpageview();
˂/script˃


if your website has a master page, you can put the snippet there.
for example, in an asp.NET mvc project, put it in
views/shared/_layout.cshtml.

ºdetailed configurationº
- optional parameters that can be set:
  - disable or limit the number of ajax calls reported per page view
   (to reduce traffic).
  - set debug mode to have telemetry move rapidly through the pipeline
    without being batched.

code snippet to set these parameters (passed as the config object
in the loader snippet above):
({
  instrumentationkey: "...",
  enabledebug: boolean,
  disableexceptiontracking: boolean, // don't log browser exceptions.
  disableajaxtracking: boolean,      // don't log ajax calls.
  maxajaxcallsperview: 10,           // limit ajax calls logged, def: 500
  overridepageviewduration: boolean, // time page load up to execution
                                     // of first trackpageview().
  accountid: string,                 // set dynamically for an
                                     // authenticated user.
})

→ Azure portal → create application insights resource.
                 application type: choose general.
  → take copy of "instrumentation key"
    (placed in "essentials" drop-down of just created resource)
    → install latest "microsoft.applicationinsights" package.
      → set the instrumentation key in your code before tracking any
        telemetry (or set the appinsights_instrumentationkey environment
        variable). after that, you should be able to manually track
        telemetry and see it on the Azure portal:
        telemetryconfiguration.active.instrumentationkey = my_key;
        var telemetryclient = new telemetryclient();
        telemetryclient.tracktrace("hello world!");

        → install latest version of
          "microsoft.applicationinsights.dependencycollector" package
          it automatically tracks http, SQL, or some other external
          dependency calls.

          → you may initialize and configure application insights from
            the code or using applicationinsights.config file.
          Bºinitialization  must happens as early as possibleº

    Rºwarnº:instructions referring to Rºapplicationinsights.configº are
            only applicable to apps that are targeting the .NET framework,
          Rºdo not apply to .NET core applicationsº

          - configuring telemetry collection from code

at app start-up:

    var module = new dependencytrackingtelemetrymodule();
    // singleton that must be preserved for application lifetime.

    // prevent correlation id from being sent to certain endpoints.
    // you may add other domains as needed.
    module.excludecomponentcorrelationhttpheadersondomains
          .add("core.windows.net");

    // enable known dependency tracking. note that in future versions
    // this list may be extended; please check the default settings.
    module.initialize(configuration); // initialize the module

    // add common telemetry initializers:

    // stamps telemetry with correlation identifiers
    configuration.telemetryinitializers
      .add(new operationcorrelationtelemetryinitializer());

    // ensures proper dependencytelemetry.type is set for Azure restful API calls
    configuration.telemetryinitializers
      .add(new httpdependenciesparsingtelemetryinitializer());

for .NET framework windows app, you may also install and
initialize performance counter collector module.

full example:

using microsoft.applicationinsights;
using microsoft.applicationinsights.dependencycollector;
using microsoft.applicationinsights.extensibility;
using system.NET.http;
using system.threading.tasks;

namespace consoleapp {
    class program {
        static void main(string[] args) {
            telemetryconfiguration configuration =
                telemetryconfiguration.createdefault();
            configuration.instrumentationkey = "removed";

            var telemetryclient = new telemetryclient();
            using (initializedependencytracking(configuration)) {
                // run app...
                telemetryclient.tracktrace("hello world!");
                using (var httpclient = new httpclient()) {
                    // http dependency is automatically tracked!
                    httpclient.getasync("http://microsoft.com").wait();
                }
                telemetryclient.flush();
                task.delay(5000).wait(); // flush is non-blocking. wait a bit
            }
        }

        static dependencytrackingtelemetrymodule initializedependencytracking(
               telemetryconfiguration configuration) {
            var module = new dependencytrackingtelemetrymodule();
            // prevent correlation id from being sent to certain endpoints.
            // you may add other domains as needed.
            module.excludecomponentcorrelationhttpheadersondomains
                  .add("core.windows.net");
            // enable known dependency tracking. in future versions this
            // list may be extended; check the default settings.
            module.initialize(configuration); // initialize the module
            return module;
        }
    }
}
ºapplication map: triage distributed applicationsº
- application map helps youºspot performance bottlenecks or failureº
  hotspotsºacrossºall components of yourºdistributed applicationº.
  each node on the map represents an application component or its
  dependencies, and has a health kpi and alerts status.
- you can click through from any component to more detailed diagnostics,
  such as application insights events. if your app uses Azure services,
  you can also click through to Azure diagnostics, such as SQL database
  advisor recommendations.

ºcomponentº:
- independently deployable part of a distributed/microservices
  application with independent telemetry.
- deployed on any number of server/role/container instances.
- components can use separate application insights instrumentation
  keys (even if subscriptions are different) or be different roles
  reporting to a single application insights instrumentation key.

ºcomposite application mapº
- full application topology across multiple levels of related
  application components.
- components could be different application insights resources, or
  different roles in a single resource.
-ºthe app map finds components by following http dependencyº
 ºcalls made between servers with the application insights SDKº
 ºinstalledº
- this experience starts with progressive discovery of the components.
  when you first load the application map, a set of queries is
  triggered to discover the components related to this component.
  a button at the top-left corner updates with the number of
  components in your application as they are discovered.
- if all of the components are roles within a single application
  insights resource, then this discovery step is not required.
  the initial load for such an application will have all its components.
- key objective 1: visualize complex topologies with hundreds of
  components.
- application map uses theºcloud_rolenameºproperty to identify the
  components on the map (automatically added by the application
  insights SDK to the telemetry emitted by components).
  to override the default value:

  using microsoft.applicationinsights.channel;
  using microsoft.applicationinsights.extensibility;

  namespace custominitializer.telemetry {
    public class mytelemetryinitializer : itelemetryinitializer {
      public void initialize(itelemetry telemetry) {
        if (string.isnullorempty(telemetry.context.cloud.rolename)) {
         ºtelemetry.context.cloud.rolename = "rolename";º //set custom role
        }
      }
    }
  }

  instantiate the initializer in code, ex:
  ex 1: global.aspx.cs:
  using microsoft.applicationinsights.extensibility;
  using custominitializer.telemetry;

  protected void application_start() {
    // ...
    telemetryconfiguration.active.telemetryinitializers.
      add(new mytelemetryinitializer());
  }

  ex 2: index.JS alt 1)
  var appinsights = require("applicationinsights");
  appinsights.setup('instrumentation_key').start();
  appinsights.defaultclient.context.tags["ai.cloud.role"] = "role name";
  appinsights.defaultclient.context.tags["ai.cloud.roleinstance"] = "your role instance";

  ex 2: index.JS alt 2)
  var appinsights = require("applicationinsights");
  appinsights.setup('instrumentation_key').start();
  appinsights.defaultclient.
    addtelemetryprocessor(envelope =˃ {
      envelope.tags["ai.cloud.role"] = "your role name";
      envelope.tags["ai.cloud.roleinstance"] = "your role instance";
    });

  ex 3: client/browser-side javascript
  appinsights.queue.push(() =˃ {
    appinsights.context.
      addtelemetryinitializer((envelope) =˃ {
        envelope.tags["ai.cloud.role"] = "your role name";
        envelope.tags["ai.cloud.roleinstance"] = "your role instance";
      });
  });

- multi tile: visualizing data from multiple resources across
  different resource groups and subscriptions.
  how-to, to create a new dashboard:
  → dashboard pane → "new dashboard" → type "dashboard-name"
    → add tile from tile gallery by dragging
      → pin charts/... from application insights to dashboard
        → add health overview by dragging "application insights" tiles
          → add custom metric chart

  the metrics panel allows graphing a metric collected by application
  insights over time, with optional filters and grouping.
  to add a metric chart to the dashboard, a little customization
  is needed first:
  → select "application insights" resource in home screen.
    → select metrics (empty chart pre-created)
      → in "add a metric" prompt: add a metric to the chart
       ºoptionally add a filter and a groupingº
        → select "pin to dashboard" on the right.

 ºadd analytics queryº
  use application insights analyticsºrich query languageº
  - since Azure application insights analytics is a separate service,
    you need to share your dashboard for it to include an analytics
    query. when you share an Azure dashboard, you publish it as an
    Azure resource, which can make it available to other users
    and resources.
    → dashboard screen top → click "share" (keep dashboard name)
      → select "subscription name" to share the dashboard.
        → click publish.
          (dashboard is now available to other services/subscriptions)
          → (optionally) define specific users' access
    → select your "application insights" resource in home screen.
      → click "analytics" at top of screen to open theºanalytics portalº
        → type next query (returns the top 10 most requested pages
          and their request count):
         ºrequestsº
          | summarize count() by name
          | sort by count_ desc
          | take 10
          → click "run" to validate
            → click "pin icon" and select dashboard

  activity logs
  through activity logs, it's possible to see:
  - what operations were taken on the resources in your subscription
  - who initiated the operation (although operations initiated by a
    backend service do not return a user as the caller)
  - when the operation occurred
  - the status of the operation
  - the values of other properties that might help to research
    the operation
  -ºactivity log contains all write operations (put, post, delete)º
    performed on your resources. it does not include read
    operations (get).
  - useful to find errors when troubleshooting or to monitor how a
    user in your organization modified a resource.
  - retained for 90 days.
  (from portal, powershell, Azure cli, insights REST API, or
   insights .NET library)

 ºpowershellº
  $ get-Azurermlog \
    -resourcegroup group01 \
    -starttime 2015-08-28t06:00  ← if start/end not provided,
    -endtime   2015-09-10t06:00    last hour is returned.

  $ get-Azurermlog \
    -resourcegroup group01 \
    -caller                            ← only this user
    -status failed                     ← filter by status
    -starttime (get-date).adddays(-14) | ← date funct ("last 14 days")
    where-object operationname         ← filter
      -eq microsoft.web/sites/stop/action
  → authorization     :
  → scope             : /subscriptions/xxx/resourcegroups/group01/providers/microsoft.web/sites/examplesite
  → action            : microsoft.web/sites/stop/action
  → role              : subscription admin
  → condition         :
  → caller            :
  → correlationid     : 84beae59-92aa-4662-a6fc-b6fecc0ff8da
  → eventsource       : administrative
  → eventtimestamp    : 8/28/2015 4:08:18 pm
  → operationname     : microsoft.web/sites/stop/action
  → resourcegroupname : group01
  → resourceid        : /subscriptions/xx/resourcegroups/group01/providers/microsoft.web/sites/examplesite
  → status            : succeeded
  → subscriptionid    : xxxxx
  → substatus         : ok

  to focus on an output field:
  ( ( get-Azurermlog \
      -status failed \
      -resourcegroup group01
     º-detailedoutputº
    ).properties[1].content["statusmessage"] | convertfrom-JSON
  ).error
  → returning:
  →
  → code           message
  → ----           -------
  → dnsrecordinuse dns record is
  →                already used by another public ip.

 ºAzure cliº
  $ az monitor \
    activity-log list \
    --resource-group $group

 ºREST APIº
  - REST operations for working with the activity log are part of the
    insights REST API. to retrieve activity log events, see "list the
    management events in a subscription".

  - application insights sends web requests to your application at
    regular intervals from points around the world. it alerts you if
    your application doesn't respond, or responds slowly. works for
    any http or https endpoint accessible from the public internet.
    it can be a third-party REST API service on which our app depends.
- availability test types:
  -ºURL ping testº      : simple test (created in Azure portal)
  -ºmulti-step web testº: (created in v.s. enterprise and
                           uploaded to portal)
- up to 100 availability tests per application resource.

pre-setup: configure application insights for a web app
portal → open "application insights"

create a URL ping test:
→ open "availability" blade and add a test.
  fill internet-public URL (up to 10 redirects allowed).
  → parse dependent requests: if checked, test will also request
    images, scripts, style files, ... recorded response time includes
    time taken to get these files. test fails if any resource fails
    within the timeout.
    → enable retries: if checked, a failed test is retried after a
      short interval, up to 3 times. (retry is suspended until the
      next success).
      → test frequency: defaults to 5min + 5 test locations
        (min. 5 locations recommended, up to 16).
        note: """we have found that the optimal configuration is:
        number of test locations = alert location threshold + 2."""

success criteria:
- test timeout not exceeded.
- http response is 200.
- content matches the expected one (plain string, without wildcards).
- alert location threshold: minimum of 3/5 locations recommended.

ºmulti-step web testsº
- test a sequence of URLs.
- record the scenario by using visual studio enterprise.
- upload the recording to application insights.
- coded functions or loops not allowed.
- tests must be contained completely in the .webtest script.
- only english characters supported. update the web test definition
  file to translate/exclude non-english characters.
integrate caching⅋cdns
Azure cache for redis
-  with Azure cache for redis, frequently accessed data is kept
  in memory instead of being loaded from disk by a database.

- it can also be used as an in-memory data structure
  store, distributed non-relational database, andºmessage brokerº.

- performance improved by taking advantage of the
  low-latency, high-throughput performance of the redis engine.

- redis supports a variety of data types, all oriented
  around binary-safe strings:
  - binary-safe strings (most common)
  - lists of strings
  - unordered sets of strings
  - hashes (maps of string fields to string values)
  - sorted sets of strings
- redis works best with smaller values (100 KB or less)
  Bº:consider chopping up bigger data into multiple keysº

-ºeach data value is associated to a keyºwhich can be used
  to look up the value from the cache.
- values of up to 500 MB are possible, but large values increase
  network latency and Rºcan cause caching and out-of-memory issuesº
Rºif the cache isn't configured to expire old valuesº

- redis keys: binary safe strings.
  - guidelines for choosing keys:
    - avoid long keys. they take up more memory and require longer
      lookup times because they have to be compared byte-by-byte.
    - prefer a hash of big keys to the big keys themselves.
    - maximum size: 512 MB, but much smaller keys should be used.
    - prefer keys like "sport:football;date:2008-02-02" to
      "fb:8-2-2". the extra size and performance difference is
      negligible.
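The "hash big keys" guideline can be sketched in Python (a hypothetical helper, not part of any redis client; the function name and 64-byte cutoff are illustrative assumptions):

```python
import hashlib

def cache_key(raw_key: str, max_len: int = 64) -> str:
    """Return raw_key unchanged when it is short; otherwise a
    fixed-length SHA-256 digest, so byte-by-byte key comparison
    during lookup stays cheap no matter how big the input was."""
    if len(raw_key.encode("utf-8")) <= max_len:
        return raw_key
    return "sha256:" + hashlib.sha256(raw_key.encode("utf-8")).hexdigest()

short = cache_key("sport:football;date:2008-02-02")
long_ = cache_key("sport:football;" + ";".join(f"player:{i}" for i in range(500)))
print(short)       # unchanged, readable key
print(len(long_))  # always 7 + 64 hex chars = 71, regardless of input size
```

Readable keys are kept as-is; only oversized keys pay the (one-time) hashing cost.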

- data in redisºis stored in nodes and clustersº
-ºclusters are sets of three or more nodesºacross which your
  dataset is split.

- redis caching architectures:
  -ºsingle node  º: complete dataset in one node.
  -ºmultiple nodeº: primary/secondary replication.
  -ºclustered    º: dataset split (sharded) across nodes.

- redis caching architectures areºsplit across Azure by (pricing)tiers:º
  -ºbasic cacheº: single node redis cache.
    complete dataset stored in a single node.
    (development, testing, non-critical workloads)
    - up toº53 gbºof memory andº20_000 simul. connectionsº
    - no sla for this service tier.
  -ºstandard cacheº: multiple node
    redis replicates a cache in a two-node primary/secondary
    configuration. Azure manages the replication
    between the two nodes. ºproduction-readyºcache with
    - up toº53 gbºof memory andº20_000 simul. connectionsº
    - sla: 99.9%.
  -ºpremium tierº:  standard tier + ability to persist data,
    take snapshots, and back up data.
    - it also supports an a.virtual network to give
      complete control over your connections, subnets, ip addressing, and
      network isolation.
    - it alsoºincludes geo-replication,ºensuring that
      data is close to the consuming app.
    - up toº530 gbºof memory andº40_000 simul. connectionsº
    - sla: 99.9%.
    - disaster recovery persistence options:
      -ºrdb persistenceº: periodic snapshots, can rebuild cache
                          using the snapshot in case of failure.
      -ºaof persistenceº: saves every write operation to a log
                          at least once per second. it creates
                          bigger files than rdb but has less data loss.
    - clustering support with up to 10 different shards.
      cost: cost of the original node x number of shards.

- creation options: Azure portal, the Azure cli, or Azure powershell.

-ºnameº: globally unique and used to generate a public-facing
         URL to connect and communicate with the service
         (1 to 63 chars).
-ºresource groupº:
      Bºmanaged resource and needs a resource group ownerº
-ºlocationº: as close to the data consumer as possible.

-ºamount of cache memoryºavailable on each (pricing) tier -
  is selected by choosing a cache level:
  - ºc0 to c6ºfor basic/standard
     c0 really meant for simple dev
     (shared cpu core and very little memory)
  - ºp0 to p4ºfor premium.

ºaccessing the redis instanceº

a command is typically issued as:
command parameter1 parameter2 parameter3

common commands include:
│command          │ description                                                          │
│ping             │ping the server. returns "pong".                                      │
│set [key] [value]│sets a key/value in the cache. returns "ok" on success.               │
│get [key]        │gets a value from the cache.                                          │
│exists [key]     │returns '1' if the key exists in the cache, '0' if it doesn't.        │
│type [key]       │returns the type associated to the value for the given key.           │
│incr [key]       │increment the given value associated with key by '1'. the             │
│                 │value must be an integer or double value. this returns the new value. │
│incrby           │ increment the given value associated with key by the specified amount│
│   [key] [amount]│ the value must be an integer or double value.                        │
│                 │ this returns the new value.                                          │
│del [key]        │ deletes the value associated with the key.                           │
│flushdb          │ delete all keys and values in the database.                          │

-ºredis has a command-line tool (redis-cli)º: ex:
  ˃ set somekey somevalue     ˃ set counter 100
  ok                          ok
  ˃ get somekey               ˃ incr counter
  "somevalue"                 (integer) 101
  ˃ exists somekey            ˃ incrby counter 50
  (integer) 1                 (integer) 151
  ˃ del somekey               ˃ type counter
  (integer) 1                 string
  ˃ exists somekey
  (integer) 0

-ºadding an expiration time to valuesº
  a key can be given a time to live (ttl).

when the ttl elapses, the key is automatically deleted.
some notes on ttl expirations:
  - expirations can be set using seconds or milliseconds precision.
  - the expire time resolution is always 1 millisecond.
  - information about expires is replicated and persisted on disk;
    the time virtually passes while your redis server remains stopped
    (this means that redis saves the date at which a key will expire).

example of an expiration:
˃ set counter 100
ok
˃ºexpire counter 5º
(integer) 1
˃ get counter
"100"
... wait 5+ seconds ...
˃ get counter
(nil)
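The ttl behavior can be sketched with a minimal in-memory map (a hypothetical Python illustration, not the redis implementation; it stores an absolute expiry timestamp per key, mirroring how redis saves the date at which a key expires):

```python
import time

class TtlCache:
    """Toy key/value store with per-key absolute expiry timestamps."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value, ttl_seconds=None):
        expires_at = time.monotonic() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expires_at)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazily expire on access
            return None
        return value

cache = TtlCache()
cache.set("counter", "100", ttl_seconds=0.05)
print(cache.get("counter"))  # "100"
time.sleep(0.1)
print(cache.get("counter"))  # None (expired)
```

Because the expiry is an absolute timestamp rather than a countdown, time "virtually passes" even while the process holding the map is not running.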

ºaccessing a redis cache from a clientº
- clients need:
  - host name, port, and an access key for the cache
    ( Azure portal → settings → access keys page)
    note: Azure also offers a connection string for some redis
    clients which bundles this data together into a single string.
  - there are two keys created: primary and secondary.
    you can use both. secondary used in  case you need
    to change primary one:
    - switch all clients to the secondary key.
    - regenerate the primary key blocking any app still
      using original primary one.
    - microsoft recommends periodically regenerating the keys

- typically a client app will use a client library.
  a popular high-performance redis client for the .NET language is
  stackexchange.redis nuget package.

 ºconnection stringºwill look something like:
  (it should be protected. consider using an Azure key vault)
  [cache-name].redis.cache.windows.net,ssl=true,abortConnect=false,password=[access-key]
 ºsslº         : ensures that communication is encrypted.
 ºabortConnectº: when set to false, allows a connection to be created
                 even if the server is unavailable at that moment.

- c# stackexchange.redis:

   using stackexchange.redis;
   var connectionstring = "[cache-name],..."; // connection string from the portal
   varºredisconnectionº= connectionmultiplexer.connect(connectionstring);
   //  ^^^^^^^^^^^^^^^ intended to be kept       ^^^^^^ async also supported
   //                  around and reused while you need access to the cache.
   // ºredisconnectionº can now be used for:
   //  - accessing a redis database.
   //  - using the publisher/subscriber features of redis.
   //  - accessing an individual server for maintenance or monitoring
   //    purposes.
   idatabase DB = redisconnection.getdatabase();
   //        ^^ lightweight object. no need to store.
   bool wasset = DB.stringset("favorite:flavor", "i-love-rocky-road");
   //  bool indicates whether the value was set (true) or not (false).
   string value = DB.stringget("favorite:flavor");
   // binary keys and values are also supported:
   byte[] binkey = ...;
   byte[] binvalue = ...;
   DB.stringset(binkey, binvalue);
   byte[] value1 = DB.stringget(binkey);

- idatabase interface includes several other methods to work with
  hashes, lists, sets, and ordered sets.
  more common ones work with single keys.
  │method           │description                                                    │
  │createbatch      │ creates a group of operations to be sent to the server        │
  │                 │ as a single unit, but not necessarily processed as a unit.    │
  │createtransaction│ creates a group of operations to be sent to the server        │
  │                 │ as a single unit and processed on the server as a single unit.│
  │keydelete        │ delete the key/value.                                         │
  │keyexists        │ returns whether the given key exists in cache.                │
  │keyexpire        │ sets a time─to─live (ttl) expiration on a key.                │
  │keyrename        │ renames a key.                                                │
  │keytimetolive    │ returns the ttl for a key.                                    │
  │keytype          │ returns the string representation of the type of the value    │
  │                 │ stored at key. the different types that can be returned are:  │
  │                 │ string, list, set, zset and hash.                             │

ºexecuting other commandsº
- idatabase instances can use execute and executeasync
  to pass textual commands to the redis server.
  var result = DB.execute("ping");
  //  ^^^^^^
  // properties:
  // -ºtypeº    : "string", "integer", ...
  // -ºisnullº  : true/false
  // -ºtostringº: actual return value.
  console.writeline(result.tostring()); // displays: "pong"

  ex 2: get all the clients connected to the cache ("client list"):
  var result = await DB.executeasync("client", "list");
  console.writeline($"type = {result.type}\r\nresult = {result}");
  → type = bulkstring
  → result = id=9469 addr= fd=18 name=desktop-aaaaaa
  → age=0 idle=0 flags=n DB=0 sub=1 psub=0 multi=-1 qbuf=0 qbuf-free=0
  → obl=0 oll=0 omem=0 ow=0 owmem=0 events=r cmd=subscribe numops=5
  → id=9470 addr= fd=13 name=desktop-bbbbbb age=0
  → idle=0 flags=n DB=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768
  → obl=0 oll=0 omem=0 ow=0 owmem=0 events=r cmd=client numops=17

ºstoring more complex valuesº
- you can cache object graphs by serializing them to a textual
  format - typically xml or JSON.
  public class gamestat {
      public string id { get; set; }
      public string sport { get; set; }
      public datetimeoffset dateplayed { get; set; }
      public string game { get; set; }
      public ireadonlylist˂string˃ teams { get; set; }
      public ireadonlylist˂(string team, int score)˃ results { get; set; }
  }

  we could use the newtonsoft.JSON library to turn an instance of this
  object into a string:
  var stat = new gamestat(...);
  string serializedvalue = ºnewtonsoft.JSON.JSONconvert.serializeobject(stat);º
  bool added = DB.stringset("key1", serializedvalue);
  var result = DB.stringget("key1");
  var stat2  = newtonsoft.JSON.JSONconvert.
               deserializeobject˂gamestat˃(result.tostring());
  console.writeline(stat2.sport); // displays "soccer"
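The same serialize → set → get → deserialize round trip can be sketched in Python (a hypothetical illustration using a plain dict in place of the cache; in a real app the dict accesses would be redis SET/GET calls):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class GameStat:
    id: str
    sport: str
    teams: list

cache = {}  # stand-in for the redis database

# serialize the object graph to a textual format (JSON) ...
stat = GameStat(id="1950-world-cup", sport="soccer", teams=["Uruguay", "Brazil"])
cache["key1"] = json.dumps(asdict(stat))          # ~ DB.stringset("key1", ...)

# ... and rebuild it later from the cached string
restored = GameStat(**json.loads(cache["key1"]))  # ~ DB.stringget("key1")
print(restored.sport)  # soccer
```

Any serializer works as long as both the writer and the reader of the cache entry agree on the format.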

ºcleaning up the connectionº
 redisconnection.dispose();
 redisconnection = null;

develop for storage on cdns
cdns store cached content on edge servers that are close to users
to minimize latency.  these edge servers are located in point of
presence (pop) locations that are distributed throughout the globe.

 using Azure cdn, you can cache publicly available objects loaded
ºfrom Azure BLOB storage, a web application, a virtual machine,º
ºor any publicly accessible web server.º
 Azure cdnºcan also accelerate dynamic content, which cannot º
ºbe cached, by taking advantage of various network optimizations byº
ºusing cdn pops.º an example is using route optimization to bypass
 border gateway protocol (bgp).

domain name system (dns) routes the request to
the best-performing pop location, usually the one
geographically closest to the user.

 $ az cdn profile list \ ← list all of your existing cdn profiles
                          associated with your subscription.
   --resource-group ..   ← optional. filter by resource group

 $ az cdn profile create \  ← step 1) create a new profile
   --name demoprofile \
   --resource-group examplegroup
   by default, the standard tier and
   theºakamai providerºare used. this
   can be customized with "--sku".

  $ az cdn endpoint create \    ← step 2) create an endpoint.
    --name contosoendpoint \
    --origin \
    --profile-name demoprofile \← profile needed (step 1)
    --resource-group group01    ← resource-group needed

  $ az cdn custom-domain create \ ← assign custom domain to cdn endpoint to
    --name filesdomain \            ensure that users see only chosen domains
    --hostname \                    (instead of the Azure cdn domains)
    --endpoint-name contosoendpoint \
    --profile-name demoprofile \
    --resource-group group01

- to save time and bandwidth consumption, a cached resource
  is not compared to the version on the origin server every time it is
  accessed. instead, as long as a cached resource is considered to be
  fresh, it is assumed to be the most current version and is sent
  directly to the client.
- a cached resource is considered to be fresh
  when its age is less than the age or period defined by a cache
  setting. for example, when a browser reloads a webpage, it verifies
  that each cached resource on your hard drive is fresh and, if so,
  loads it from the cache. if the resource is not fresh (stale), an
  up-to-date copy is loaded from the server.
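The freshness check reduces to a single comparison; a minimal Python sketch (hypothetical helper; real clients follow the full HTTP Cache-Control rules):

```python
def is_fresh(age_seconds: float, max_age_seconds: float) -> bool:
    """A cached resource is fresh while its age is below the
    maximum age set by a caching rule (e.g. Cache-Control: max-age)."""
    return age_seconds < max_age_seconds

# resource cached 120 s ago under a 7-day (604800 s) expiration rule:
print(is_fresh(120, 604800))     # True  -> serve straight from the edge
print(is_fresh(700000, 604800))  # False -> stale: fetch a fresh copy
```

This is why no round trip to the origin happens for fresh content: only the locally known age and the rule's duration are consulted.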

ºcaching rulesº
- cache expiration duration in days, hours, minutes, and seconds
  can be set.
- Azure cdn caching rules type:
  -ºglobal caching rulesº: you can set
    1 global caching rule ←→ for each endpoint in a profile
    it affects all requests to the endpoint.
    global caching rule overrides any http cache-directive
    headers, if set.
  -ºcustom caching rulesº:
    1+ custom caching rule ←→ for each endpoint in a profile
    they match specific paths and file extensions;
    processed in order
    override the global caching rule, if set.

ºpurging and preloading assets by using the Azure cliº
ºpurgeº: unpublishes cached assets from an endpoint.
        very useful if you have an application scenario
        where a large amount of data is invalidated and
        should be updated in the cache.

  $ az cdn endpoint purge \
    --content-paths '/css/*' '/JS/app.JS' \ ← file path/wildcard dir/both
    --name contosoendpoint \
    --profile-name demoprofile \
    --resource-group group01

  $ az cdn endpoint load \                    ← preload assets
    --content-paths '/img/*' '/JS/module.JS'  \ (improve user experience
    --name contosoendpoint \                    by prepopulating the cache)
    --profile-name demoprofile
    --resource-group group01

Decoupled system integration. Events and Message queues
(App Service) Logic App
- (Mule|Camel)-like Azure "ESB"
  for processing/routing messages
    - FTP → A.Storage
    - events to email
    - (Monitored)tweets subject → analyze sentiment → alert/task
- (e)nterprise (a)pplication (i)ntegration and
  business-to-business (B2B) communication
- app/data/system integration.
- on premises support.
- Design and build with A.Portal ºLogic App Designerº
 (and Enterprise Integration Pack for B2B)
- manage with PowerShell.

- Logic App Data Journey:
 → ºspecific event triggerº  ← - data match criteria with (Opt) basic scheduling
   → Logic App Engine
     → ºnew logic app instanceº
        (workflow start)
       → data conversions
         flow controls (conditional|switch|loops|branching)

- 200+ connectors:  A.Service Bus, Functions,  Storage, SQL,
                    Office 365, Dynamics, BizTalk, Salesforce, SAP,
                    Oracle DB, file shares, ...

- Connector  : -ºTriggersº:
  Components     - Allow to notify of specific events to
                   Logic Apps or ºFlowsº
                   Ex: FTP_connectorºOnUpdatedFileº
                 - TYPES: ºPollingº (at intervals) for new data.
                          ºPushº  , listening for data on an endpoint.

               -ºActionsº :
                 - changes directed by a user (Ex: CRUD SQL action)
                 - BºAll actions map to Swagger operationsº

- Allow to create data-flows in "real time"

- available as:
  -ºBuilt-insº:  - create data-flows on custom schedules
   ºConnectorsº  - communicate with endpoints
                 - receive+respond to requests
                 - call:
                   - A.functions
                   - A.API Apps (Web Apps)
                   - managed APIs
                   - nested logic apps

  -ºManagedº :   - Allow to access other services and systems.
   ºConnectorsº  - Two managed connector groups:
                   ├ ºManaged API connectorsº:
                   │ - use BLOB Storage, Office 365, Dynamics, Power BI,
                   │   OneDrive, Salesforce, SharePoint Online, ...
                   └ ºOn─premises connectorsº:
                     - Require PRE-SETUP by installing
                       ºon-premises data gatewayº
                     - In/Out data form SQL Server, SharePoint,
                       Oracle DB, file shares, ...

                 -ºEnterprise connectorsº:
                   - SAP, IBM MQ, ... forRºan additional costº
                   - Encryption/e-signatures supported

                 -ºEnterprise Integration Pack and Connectorsº
                   - Enables B2B scenarios,
                     similar to BizTalk Server.
                   - validate/transform XML, encode/decode flat files,
                     processºB2B messages with AS2, EDIFACT, and X12 protocolsº.
                   -ºArchitecturallyº based on ºintegration accountsº:
                     ºcloud-based containers that store all artifactsº
                     º(schemas, partners, certificates, maps, and agreements)º
                      to design/deploy/maintain logic-app and non-logic-app B2B apps.
                   - Logic app integration PRE-SETUP:
                     Enable permissions in integration-account to allow
                     logic app instance/s access to its artifacts.
                   - Rºintegration account is billed separatelyº.
                     it simplifies the storage and
                     management of artifacts used in B2B communications.
                   - High-level steps:
                     Create an  → Add partners, → Create a ──┐
                     integration  schemas,        Logic App  │
                     account in   certificates,              │
                     A.Portal     maps,                      │
                                  agreements                 │
                                  to int.acct.               │
                 └─→ Link Logic  → In the Log.App
                     App to the    use the partners,
                     Integration   schemas, certificates
                     account       and agreements
                                   stored in the
                                   integration account.

Logic Apps in Visual Studio
- create workflows integrating apps, data, systems, and services
  across enterprises and organizations.
- Over the A.Portal tool, Visual Studio also allows to add logic-apps
  to git, publish different versions, and create A.Resource Manager
  templates for different deployment environments.

ºEx App:º Website's RSS feed .... → sends email for each new item.

PREREQUISITES
- Azure subscription
- Visual Studio 2015+ (Community edition works)
- Microsoft Azure SDK for .NET 2.9.1+
- Azure PowerShell
- An email account that's supported by Logic Apps
  (Office 365 Outlook, Gmail, ...)
- Access to the web while using the embedded Logic App Designer
- Internet connection to create resources in Azure and to read the
  properties and data from connectors in the logic app.

ºCreate an Azure Resource Group projectº
- Start Visual Studio → sign-in with A.account.
  → File menu → New ˃ Project
    → Under Installed, select Visual C#.
      → Select Cloud ˃ Azure Resource Group. Fill in project name:
        → Select the Logic App template.
          (After project is created, Solution Explorer opens and
           shows your solution)
  → In your solution, the LogicApp.JSON file not only stores the
    definition for your logic app but is also an Azure Resource
    Manager template that you can set up for deployment.
- Create a blank Logic App:
  Solution Explorer → open shortcut menu for "LogicApp.JSON" file.
  → Select "Open With Logic App Designer"
    → Open the logic app .JSON file with Logic App Designer.
      Select your Subscription.
      For Resource Group: select "Create New..."
      Select resource location.
(Visual) L.Apps Designer
- available in A.Portal and Visual Studio.
- For customized logic apps, logic app definitions can be created
  in JSON (code view mode).
- A.PowerShell commands and Azure Resource Manager templates can be
  used for select tasks.

ºCreating the triggerº
NOTE: Every logic app must start with a trigger.
Logic App Designer
→ enter "rss" in search box.
→ Select trigger "When a feed item is published".
→ Provide next for the trigger:
  (RSS trigger appears in Logic App Designer)
  Property      Value
  RSS feed URL
  Interval      1
  Frequency     Minute

ºAdding an actionº
Under the "When a feed item is published" trigger
→ select "+ New step → Add an action"
→ Select "send an email" from the provider actions list
→ Select action: "Office 365 Outlook - Send an email"
  - Fill in/sign-in credentials if requested.
  - Fill in data to include in email.
    To:      address
    Subject: .... Add dynamic content list → select "Feed title"
             NOTE: A "For each" loop automatically appears on the
             designer if a token for an array was selected.
    Body:    ....
Deploy
Solution Explorer → project's shortcut menu → select Deploy ˃ New
→ Create logic app deployment
→ When deployment starts, status appears in the Visual Studio Output
  window. (open "Show output from" list, and select an Azure
  resource group if not visible)
custom connectors

- Custom connectors are function-based:
  specific functions called in underlying service
  and data returned as response.

- To create a custom connector in Logic Apps:
1) create custom connector
   → Portal → "New" → Search "logic apps connector"
     → Create. Fill in details:
       ( Name, Subscription, Resource Group, Location)
2) define behavior of the connector using an
   OpenAPI definition or a Postman collection
   - OpenAPI definition size must be less than 1 MB.
   - One of the following subscriptions:
     - Azure, if using Logic Apps
     - Microsoft Flow
     - PowerApps
  ºUploading OpenAPIº
   portal → open "Logic Apps connector" of step 1)
    → connector's menu → choose "Logic Apps Connector"
      → Click "Edit"
        → Under "General", choose "Upload an OpenAPI file",
          (From this point, we'll show the Microsoft Flow UI,
           but steps are largely the same across all three technologies)
          →  review information imported from OpenAPI definition.
             (API host, base URL,...)
             "info": {
               "version": "1.0.0",
               "title": "SentimentDemo",
               "description": "Uses Cognitive Services Text ... "
             "host": "",
             "basePath": "/",
             "schemes": [
             "securityDefinitions": {
               "api_key": {         ← ºAPI_key Review authentication typeº
                 "type": "apiKey",     (Other Auth. methods could apply)
                 "in": "header",
                 "name": "Ocp-Apim-Subscription-Key"
Custom templates
- Logic App deployment template overview
  - three basic components compose the Logic App:
    -ºLogic app resourceº : Info. about pricing plan, location, ...

    -ºWorkflow definitionº: workflow steps and how the Logic Apps
                            engine should execute them.

    -ºConnectionsº        : Refers to separate resources that securely store
                            metadata about any connector connections, such as
                            a connection-string and an access-token.

- Create a Logic App deployment template:
  easiest way: use the Visual Studio Tools for Logic Apps.
  other tools:
   - author by hand.
   - logic-app template creatorºPowerShell moduleº.
     It first evaluates the logic app and any connections
     that it is using, and then generates template resources
     with the necessary parameters for deployment.
     Installation via theºPowerShell Galleryº:
     $ Install-Module -Name LogicAppTemplate
     - To make module work with any tenant/subscription
       access token, it is recommended to use it with the
      ºARMClient command-line toolº (ARM: Azure Resource Manager)

   - To generate a template with PowerShell armclient:

     $ armclient token $SubscriptionId | \     ← get access token and pipe to PowerShell script
       Get-LogicAppTemplate -LogicApp MyApp \
       -ResourceGroup MyRG \
       -SubscriptionId $SubscriptionId \
       -Verbose |Out-File  ºG/home/myuser/template.JSONº

   - Note: You can use Bºlogic app trigger|actions parametersº in:
     - Child workflow
     - Function app
     - APIM call
     - API connection runtime URL
     - API connection path

   - Ex: Parameterized A.Function Bºresource IDº:

     Qº"functionName":º{                  // Defining the parameter

     "MyFunction": {                     // Using the parameter
       "type": "Function",
       "inputs": {

     Ex2: parameterize Service Bus Send message operation:
     "Send_message": {
     · "type": "ApiConnection",
     · "inputs": {
     · · "host"    : {
     · ·               "connection": {
     · ·                 "name":
     · ·               "@Bºparameters('$connections')º['servicebus']['connectionId']"
     · ·               }
     · ·             },
     · · "method"  : "post",
     · · "path"    :
     · ·             "[concat(
     · ·                '/@{encodeURIComponent
     · ·                    (
     · ·                      ''',
     · ·                    Bºparameters('queueuname')º,
     · ·                      '''
     · ·                    )}/messages')
     · ·             ]",
     · · "body"   : { "ContentData": "@{base64(triggerBody())}" },
     · · "queries": { "systemProperties": "None" }
     · },
     · "runAfter": {}
     Note: host.runtimeUrl is optional, can be removed from template if present.

     Logic App Designer will need default parameter values to work properly.
     "parameters": {
         "IntegrationAccount": {

Adding to Resource Group
- Edit "template.JSON", then select "View" → Other Windows → JSON Outline
  To add a resource to the template file:
  → Add resource like:
    - alt1: click "Add Resource" (top of JSON Outline window)
    - alt2: JSON Outline window → right-click "resources"
            → select "Add New Resource".
    → find and select Logic App in the "dialog box",
      Fill in Logic App Name and click "Add"
  → Deploy logic-app-template:
    STEP 1) create parameter file with params values.
    STEP 2) Deploy alternatives:
            - PowerShell
            - REST API
            - Visual Studio Team Services Release Management
            - A.Portal → template deployment.
    NOTE: logic-app will work OK with valid
          parameters after deployment
  → Authorize OAuth connections:
    Logic Apps Designer → Open Logic-app → authorize connections.
    For automated deployment, you can use a script to consent to
    each OAuth connection. (See example script on GitHub under
    LogicAppConnectionAuth project)
Event Grid
Event Grid Overview
- Specialized queue-like service for Azure object events
  (file shares, BLOB storage, ...)
  Custom events also allowed.

1st) select the resource you would like to subscribe to.
2nd) provide event handler | WebHook endpoint to send the event to.

You can use filters to route specific events to different endpoints,
multicast to multiple endpoints, and make sure your events are
reliably delivered.

 └ Events   : What happened.
                common info: source, time, unique id.
              + specific event type info.
                max size: 64 KB of data.
   event schema:
   [   ← Grid sends the events to subscribers in an array that has a
   {     single event. (this behavior may change in the future).
     "topic"    : string,      ← full resource path to the event source.
                                 read-only value provided by the source.
     "subject"  : string,      ← publisher-defined path to the event subject
     "id"       : string,
     "eventType": string,
     "eventTime": string,      ← provider's utc time.
     "dataVersion": string,    ← schema version of data object defined by publisher
     "metadataVersion": string ← Provided by event grid
     "data"     : { ... },     ← resource provider specific.
   }                             top-level data should have same fields as standard
   ]                             Azure resource-defined events.

   · {
   ·   "topic"     : "/subscriptions/"id"/resourcegroups/storage"
   ·   "subject"   : "/BLOBservices/default/containers/"containerId"/"
   ·   "eventtype" : "",
   ·   "eventtime" : "2017-06-26t18:41:00.9584103z",
   ·   "id"        : "831e1650-001e-001b-66ab-eeb76e069631",
   ·   "data"      :
   ·      "API"            : "putblocklist",
   ·      "clientrequestid": "6d79dbfb-0e37-4fc4-981f-442c9ca65760",
   ·      "requestid"      : "831e1650-001e-001b-66ab-eeb76e000000",
   ·      "etag"           : "0x8d4bcc2e4835cd0",
   ·      "contenttype"    : "application/octet-stream",
   ·      "contentlength"  : 524288,
   ·      "BLOBtype"       : "blockBLOB",
   ·      "URL"            : ""
   ·      "sequencer"      : "00000000000004420000000000028963",
   ·      "storagediagnostics": {
   ·        "batchid": "b68529f3-68cd-4744-baa4-3c0498ec19f0"
   ·      }
   ·    },
   ·   "dataversion": "",
   ·   "metadataversion": "1"
   · }

 └ Event    : Where the event took place.

 └ Topics   : - collection-of-related-events.
                |Publisher| →(writes to)→ |Topic|→(push to) →|Subscribers|

              - System topics: built-in topics .
                App Developers can not see them but can subscribe to them.

              - Custom topics: application⅋third-party topics.

 └ Event    : - endpoint or built-in mechanism to route events to 1+ handlers.
   subscrip-    Also used by handlers to intelligently filter incoming events.
   tion       - It tells the Event Grid which events on a topic
                you're interested in receiving.
                Filters (on event type, subject pattern) can be set to
                route events to different endpoints.

 └ Event    : - Azure functions
   handlers   - Logic Apps
              - Azure Automation
              - WebHook
              - Queue Storage:  (retried until queue-push consumes the message)
              - Hybrid Connections
              - Event Hubs
              - Custom webhook: (retried until handler returns HTTP 200 - OK)

 └ Incoming events to Event Grid are sent in an array
   whose total size is up to 1 MB, containing 1+ event objects,
   with each event limited to 64 KB.
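Those limits can be sketched as a client-side pre-publish check (a hypothetical helper; the real limits are enforced server-side by Event Grid):

```python
import json

MAX_BATCH_BYTES = 1 * 1024 * 1024  # whole event array: 1 MB
MAX_EVENT_BYTES = 64 * 1024        # each event object: 64 KB

def check_batch(events: list) -> bool:
    """Return True if every event and the whole batch fit the limits."""
    for event in events:
        if len(json.dumps(event).encode("utf-8")) > MAX_EVENT_BYTES:
            return False
    return len(json.dumps(events).encode("utf-8")) <= MAX_BATCH_BYTES

small = [{"id": "1", "eventType": "demo", "data": {"n": 1}}]
print(check_batch(small))                                 # True
big = [{"id": "2", "data": {"blob": "x" * (70 * 1024)}}]  # one 70 KB event
print(check_batch(big))                                   # False
```

Checking before publishing avoids a rejected request when a single oversized event would fail the whole batch.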

Security⅋Authentication
Bºauthentication typesº
º└ºwebhook event delivery:
   - event grid requires "you" to prove ownership of the target
     (subscriber) webhook.
   - Automatically handled for:
     - A.logic apps with event grid connector
     - A.automation via webhook
     - A.functions with event grid trigger
   - For other use cases aºvalidation handshakeºis needed:
     -ºvalidationcode handshakeº(programmatic):
       · Recommended when in control of the endpoint source code.
       · At event-subscription-creation time, event grid POSTs
         a subscription validation event to your endpoint.
         The data portion of this event includes a
         validationcode property. The handler verifies that the
         validation request is for an expected event subscription,
         and echoes the validation code back to event grid.
         (supported in all event grid versions).
     -ºvalidationurl handshakeº(manual, 2018-05-01-preview+):
       for a (non-controlled) third-party service.
       - event grid POSTs a validationurl property in the data
         portion of the subscription validation event.
       - to complete the handshake, find that URL in the event
         data and manually send a GET request to it.
         (URL expires in 10 minutes)
º└ºevent subscriptions:
º└ºcustom topic publishing:
   "aeg-event-type: subscriptionValidation"
   eventtype : microsoft.eventgrid.subscriptionValidationEvent
   data      : {
     ...
     validationcode : random string
     validationurl  : URL-for-manual-validation
   }
   ^ API version 2018-05-01-preview+

BºEvent delivery securityº:
└ webhook subscriber endpoint can add query parameters to the
  webhook URL at event subscription:
  - Set one of them to be a secret for extra security. Then,
    programmatically reject non-matching secrets on the Webhook.
  Note: Use --include-full-endpoint-URL in Azure cli to show
        query params.
└ For event handlers other than webhook, write access is needed:
  º"microsoft.eventgrid/eventsubscriptions/write"º must be true.
  Required resource scope differs based on:
  -ºsystem topicº: scope of the resource publishing the event.
    The format of the resource is:
      /subscriptions/{subscrip.-id}/resourcegroups
      /{resource-group-name}/providers
      /{resource-provider}/{resource-type}/{resource-name}
    Ex: subscribe to events on "myacct" storage account:
      permission:º"microsoft.eventgrid/eventsubscriptions/write"º
      scope     : /subscriptions/####/resourcegroups
                  /testrg/providers
                  /
  -ºcustom topicº: scope of the event grid topic.
    Format of the resource is:
      /subscriptions/{subscription-id}/resourcegroups
      /{resource-group-name}/providers
      /microsoft.eventgrid/topics/{topic-name}
    Ex: subscribe to "mytopic" custom topic:
      permission:º"microsoft.eventgrid/eventsubscriptions/write"º
      scope     : /subscriptions/####/resourcegroups
                  /testrg/providers
                  /microsoft.eventgrid/topics/mytopic

ºcustom topic publishingº
- authentication value included in http header.
- custom topics use either:
  -ºshared access signatureº(SAS), recommended.
    HTTP header:
    - "aeg-sas-token: r={resource}⅋e={expiration}⅋s={signature}"
                         └───┬───┘
                   path for event-grid topic. Ex:
                   https://$topic.$region.eventgrid.Azure.NET/eventgrid/API/events
  -ºkey authenticationº: simple programming, compatible with
    many existing webhook publishers.
    HTTP header:
    - "aeg-sas-key : $key"

- Ex: create SAS (C#):
  static string BuildSharedAccessSignature(
      string resource, DateTime expirationUtc, string key)
  {
      // field names in the token: r=resource, e=expiration, s=signature
      const char Resource = 'r', Expiration = 'e', Signature = 's';
      string encodedResource = HttpUtility.UrlEncode(resource);
      var culture = CultureInfo.CreateSpecificCulture("en-US");
      string encodedExpirationUtc =
          HttpUtility.UrlEncode(expirationUtc.ToString(culture));
      string unsignedSas =
          $"{Resource}={encodedResource}&{Expiration}={encodedExpirationUtc}";
      using (var hmac = new HMACSHA256(Convert.FromBase64String(key)))
      {
          string signature = Convert.ToBase64String(
              hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedSas)));
          string encodedSignature = HttpUtility.UrlEncode(signature);
          return $"{unsignedSas}&{Signature}={encodedSignature}";
      }
  }

- event GridºRBAC support by user supported actionsº:
  - microsoft.eventgrid/*/read
  - microsoft.eventgrid/*/write
  - microsoft.eventgrid/*/delete
  - microsoft.eventgrid/eventsubscriptions          ┐ potentially
    /getFullURL/action                              ├─ secret
  - microsoft.eventgrid/topics/listkeys/action      │  info
  - microsoft.eventgrid/topics/regeneratekey/action ┘
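The same aeg-sas-token construction can be sketched in Python with only the standard library; this mirrors the C# sample above and the `r={resource}⅋e={expiration}⅋s={signature}` header layout shown earlier (the key is the base64 topic key from the portal):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote_plus

def build_sas_token(resource: str, expiration_utc: str, key: str) -> str:
    """Build an Event Grid aeg-sas-token: r={resource}&e={expiration}&s={sig}."""
    # sign the url-encoded resource and expiration with the base64-decoded key
    unsigned = f"r={quote_plus(resource)}&e={quote_plus(expiration_utc)}"
    sig = hmac.new(base64.b64decode(key),
                   unsigned.encode("utf-8"),
                   hashlib.sha256).digest()
    return f"{unsigned}&s={quote_plus(base64.b64encode(sig).decode())}"
```

A token built this way is sent as the `aeg-sas-token` HTTP header when posting events to the topic endpoint.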
Event filtering for subscriptions
- filtering Options:
  - event types
  - subject begins with or ends with
  - advanced fields and operators

  "filter": {              ←ºEvent type filterº
    "includedeventtypes": [
      "microsoft.resources.resourcewritefailure",
      "microsoft.resources.resourcewritesuccess"
    ]
  }
  "filter": {              ←ºsubject filteringº
    "subjectbeginswith":
      "/BLOBservices/default/containers/mycontainer/log",
    "subjectendswith": ".jpg"
  }
  "filter": {              ←ºadvanced filteringºJSON Example:
    "advancedfilters": [
      { "operatortype": "numberGreaterThanOrEquals",  *1
        "key"         : "data.key1",                  *2
        "value"       : 5 },
      { "operatortype": "stringContains",             *1
        "key"         : "subject",                    *2
        "values"      : ["container1", "container2"] }
    ]
  }

  *1: OPERATOR TYPE
      NUMBER OPERATORS          STRING OPERATORS  boolean OPERATOR
      ------------------------- ----------------  ----------------
      numbergreaterthan         stringcontains    boolequals
      numbergreaterthanorequals stringbeginswith
      numberlessthan            stringendswith
      numberlessthanorequals    stringin
      numberin                  stringnotin
      numbernotin               ^ case-INsensitive

  *2: AVAILABLE KEYS
      EVENT GRID SCHEMA   CLOUD EVENTS SCHEMA
      ------------------  -------------------
      id                  eventid
      topic               source
      subject             eventtype
      eventtype           eventtypeversion
      dataversion
      event data          event data
      ^ For custom input schema, use the event data fields
        (ex: data.key1).
      values can be: number | string | boolean | array

 RºLimits:º
  - five advanced filters per event grid subscription;
    same key can be used in 1+ filters.
  - 512 characters per string value
  - five values for in and not in operators
  - the key can only have one level of nesting (like data.key1)
  - custom event schemas can be filtered only on top-level fields
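To make the advanced-filter operator semantics concrete, here is a toy evaluator for a small subset of the operators above (illustrative only: the real matching happens inside Event Grid, and this sketch only handles the operators it names):

```python
def matches(event: dict, f: dict) -> bool:
    """Evaluate one advanced filter against an event dict."""
    # resolve keys like "data.key1" (one level of nesting, per the limits)
    val = event
    for part in f["key"].split("."):
        val = val.get(part) if isinstance(val, dict) else None
    op = f["operatortype"].lower()
    if op == "numbergreaterthanorequals":
        return val is not None and val >= f["value"]
    if op == "stringcontains":  # string operators are case-insensitive
        return val is not None and any(
            v.lower() in val.lower() for v in f["values"])
    if op == "boolequals":
        return val == f["value"]
    raise NotImplementedError(op)
```

Running the two JSON example filters above against an event with `data.key1 = 7` and a subject containing `container1` would match both.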
Create custom events

$º$ az group create \                     º← create resource group
$º    --name group01 \                    º
$º    --location westus2                  º
$º$ az provider register \                º← enable (if not yet done)
$º    --namespace microsoft.eventgrid     º  event grid resource provider.
$º                                        º  (Wait a moment)
$º$ az provider show \                    º← Check status
$º    --namespace microsoft.eventgrid \   º
$º    --query                             º
$º(Expected output)                       º
$º→ "registrationstate"                   º
$º$ az eventgrid topic create \           º← create custom topic
$º    --name $topicname \                 º  to post events to
$º    -l westus2 -g group01               º
$º$ TURI=""                               º
$º$ TURI="${TURI}/Azure-samples"          º
$º$ TURI="${TURI}/Azure-event-grid-viewer"º
$º$ TURI="${TURI}/master/Azuredeploy.JSON"º
$º$ az group deployment create \          º← deploy (test) web-app
$º    --resource-group group01 \          º  message endpoint
$º    --template-uri $TURI \              º
$º    --parameters sitename=$SITENAME \   º← Destination endpoint
$º      hostingplanname=viewerhost        º
$º  (wait some minutes)                   º
  ( Test deployed web-app navigating to: https://$ )
$º$ az eventgrid \                        º← subscribe to topic
$º    event-subscription create \         º
$º    -g group01 \                        º
$º    --topic-name $topicname \           º
$º    --name demoviewersub \              º
$º    --endpoint $endpoint                º← endpoint must include the
                                             suffix /API/updates/  Ex:
                             https://$SITENAME.Azurewebsites.NET/API/updates/
  (test that initial subscription-validation event has been
   sent to end-point.)

ºUsageº:
- let's trigger an event to see how event grid distributes
  the message to your endpoint.
$º$ endpoint=$(az eventgrid \     º← get URL for custom topic.
$º    topic show \                º
$º    --name $topicname \         º
$º    -g group01 \                º
$º    --query "endpoint" \        º
$º    --output tsv)               º
$º$ key=$(az eventgrid \          º← get key for custom topic.
$º    topic key list \            º
$º    --name $topicname \         º
$º    -g group01 \                º
$º    --query "key1" \            º
$º    --output tsv)               º
  sample event data:
$º$ d01=$(date +%Y-%m-%dT%H:%M:%S%z)º
$º$ event=$(cat <
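The subscription-validation event the walkthrough mentions is what the endpoint must answer for the validationcode handshake described earlier: parse the posted JSON array, confirm the event is for an expected subscription, and echo the code back. A minimal handler sketch in Python (framework-agnostic; the `expected_topic` check is an illustrative stand-in for whatever verification your endpoint does):

```python
import json

VALIDATION_EVENT = "Microsoft.EventGrid.SubscriptionValidationEvent"

def handle_validation(request_body: str, expected_topic: str):
    """Return the validation response body for an Event Grid
    subscription-validation event, or None for other events."""
    events = json.loads(request_body)  # Event Grid POSTs a JSON array
    for ev in events:
        if ev.get("eventType") == VALIDATION_EVENT:
            # verify the request is for an expected event subscription
            if not ev.get("topic", "").endswith(expected_topic):
                raise ValueError("unexpected topic")
            # echo the validation code back to Event Grid
            return json.dumps(
                {"validationResponse": ev["data"]["validationCode"]})
    return None
```

A handler that returns this body with HTTP 200 completes the programmatic handshake; non-validation events fall through to normal processing.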
"Kafka" Event hub
Event hub Overview
- Sort of "managed kafka cluster".
  with capture, auto-inflate, and geo-disaster recovery.

- Use-case: ingests and process of Big-Data events,
  with low latency and high reliability.

- namespace: unique scoping container, referenced by its
             fully qualified domain name.
             - contains 1+ event hubs or kafka topics.

Bºevent hubs for apache kafkaº:
  - event hubs using HTTPS, AMQP 1.0, kafka 1.0+
  - It allows applications like "mirror maker" or
    frameworks like "kafka connect" to work cluster-less
    with just configuration changes.

ºevent publisher(producer)º
- shared access signature (SAS) token used to identify the
  publisher to the event hub.
- Publisher can have a unique identity, or use a common sas token.
- typically, There is a publisher per client/client-group.

ºpublishing an eventº
 -event data contains:
  - offset
  - sequence number
  - body
  - user properties
  - system properties

- .NET client libraries and classes exist.
- For other platforms you can use any AMQP 1.0 client
  (apache qpid,...).
- events can be published individually or batched.
- a single publication (event data instance) has aºlimit of 1 MBº.
  regardless ofºwhether it is a single event or a batchº
-Bºbest practiceº: publishers must beºunaware of partitionsºwithin
   the event hub and toºonly specify a partition keyº, or their
   identity via their SAS token.
   - while partitions are identifiable and can be sent to
     directly, sending directly to them is not recommended;
     use the higher-level constructs instead.
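A client-side sketch of staying under the 1 MB publication limit when batching (the limit applies to the whole publication, single event or batch; raw payload sizes are counted here, so a real client should keep a safety margin for protocol framing):

```python
MAX_PUBLICATION_BYTES = 1_000_000  # 1 MB limit per publication (event or batch)

def make_batches(events: list[bytes]) -> list[list[bytes]]:
    """Greedily pack events into publications that stay under the limit."""
    batches, current, size = [], [], 0
    for ev in events:
        if len(ev) > MAX_PUBLICATION_BYTES:
            raise ValueError("single event exceeds the publication limit")
        # start a new batch when adding this event would overflow
        if size + len(ev) > MAX_PUBLICATION_BYTES and current:
            batches.append(current)
            current, size = [], 0
        current.append(ev)
        size += len(ev)
    if current:
        batches.append(current)
    return batches
```

This is the same idea the .NET client exposes through its batch objects, which refuse to grow past the limit.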

- AMQP requires the establishment of a persistent bidirectional
  socket in addition to TLS/SSL. It has higher network cost when
  initializing the session, but higher performance (than HTTPS)
  for frequent publishers.
- HTTPS requires additional TLS overhead for every request.
  (figure: partition keys mapping events to event hub partitions)

-Bºall events sharing a partition key value are delivered in orderº,
 Bºand to the same partitionº.
-  If partition keys are used with publisher policies, then the
   identity of the publisher and the value of the partition key
   must match.

- event hubs enables granular control over event publishers through
  publisher policies:
  run-time feature designed to facilitate large numbers of
  independent event publishers:
  - each publisher uses its own unique identifier
    when publishing events to an event hub like:

    //[namespace]/[event hub name]
    /publishers/[publisher name]
    - no need to create publisher names ahead of time,
      but they must match the sas token used when publishing
      an event, in order to ensure independent publisher
      identities.

BºEvent hubs captureº:
  └ automatically capture the streaming data in event hubs and
    save it to:
    - BLOB storage account
    - Azure data lake.
  └ Tunnable Setting (in A.portal):
    - minimum size
    - time window

BºLOG partitionº
  └ ordered sequence of events held in an event hub.
    (similar to a "commit log")
  └ event hubs provides message streaming through a
   ºpartitioned consumer patternº in which each consumer only reads
   a specific subset, or partition, of the message stream.
  └ It provides for horizontal scale and other stream-focused features
  Bºunavailable in queues and topicsº
  └ event hubs Rºretains data for a configured retention timeº applying
  Rºto all partitionsº:
    -Bºevents expire on a time basis; you cannot explicitly delete themº

  └ partitions are independent and grow at different rates.

  └ the number of partitions is set at event hub creation:
    must be in the range [2, 32]
    (contact the event hubs team to go beyond the 32 limit).

Bºnumber of partitions in an event hub directly relates to the numberº
Bºof concurrent readers expected.º

-ºPartition keyº
  - sender-supplied value.
  - Can be used to map incoming event data into specific partitions
    through a static Bºhashing function returning the final partitionº
    -Bºif partition key is not provided, round-robin is usedº

  - Examples of partition keys could be a per-device or user unique identity
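The partition-key contract can be illustrated with a stable hash (the actual Event Hubs hashing function is service-internal; this sketch only shows the behavior described above: same key always lands on the same partition, and no key means round-robin):

```python
import itertools
import zlib

class PartitionRouter:
    """Toy model of partition assignment from a partition key."""
    def __init__(self, partition_count: int):
        self.partition_count = partition_count
        self._round_robin = itertools.cycle(range(partition_count))

    def route(self, partition_key):
        if partition_key is None:        # no key supplied: round-robin
            return next(self._round_robin)
        # stable hash: every event with this key lands on one partition
        return zlib.crc32(partition_key.encode()) % self.partition_count
```

Keying by a per-device identity (as the example suggests) keeps each device's events ordered within one partition while spreading devices across all partitions.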

- SAS: shared access signatures available namespace|event-hub level.
  - a sas token is generated from a sas key as an HMAC-SHA256
    hash over the resource URL and expiry.
- using the (key-name ("policy"), token) pair, event hubs can
  regenerate the hash to authenticate the sender.
- sas tokens for event publishers are created with only send
  privileges on a specific event hub.

ºevent consumer (receiver)º
- entity reading event data from an event hub
- all consumers connect via the AMQP 1.0 session:
  - events delivered as they become available.
    (No need to poll on client side)

ºconsumer groupsº
- used to enable the publish/subscribe mechanism.
- a consumer group is a view (state, position, or offset) of
  an entire event hub.
- consumer groups enable different applications to have a
  different view of the event stream, and to read the stream
  independently at their own pace and with their own offsets:
  -Bºin a stream processing architecture, each downstream applicationº
   Bºequates to a consumer groupº
  - To write event data to long-term storage, then that storage
    writer application is a consumer group.

- there is always a default consumer group in an event hub,
  and up to 20 consumer groups (standard tier) can be created.

- at most 5 concurrent readers on a partition per consumer group.
  Bº(only one active receiver on a partition per consumer groupº
  Bº is recommended, otherwise duplicate messages will arise).º

- consumer group URI convention examples:
  //[namespace]/[event hub name]/[consumer_group #1]
  //[namespace]/[event hub name]/[consumer_group #2]

ºstream offsetsº
- position of an event within a partition. ("consumer-side cursor").
  Can be specified as:
  - timestamp
  - offset value.

º(Consumer status) checkpointingº
- process by which readers mark or commit their position within
  a partition event sequence.
- consumer reader must keep track of its current position in the
  event stream, and inform the checkpointing service when it
  considers the data stream complete.
  - At partition disconnect→reconnect, it will continue reading at
    checkpointed offset.
- checkpointing can be used to:
  - mark events as "complete"
  - provide failover resiliency
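A minimal in-memory sketch of the checkpointing behavior described above (real consumers persist checkpoints externally; the .NET EventProcessorHost mentioned below uses Azure Storage for this; names here are illustrative):

```python
class CheckpointStore:
    """Tracks the last committed offset per (consumer group, partition)."""
    def __init__(self):
        self._offsets = {}

    def checkpoint(self, group: str, partition: int, offset: int):
        self._offsets[(group, partition)] = offset

    def resume_from(self, group: str, partition: int) -> int:
        # after disconnect/reconnect, continue at the checkpointed offset
        return self._offsets.get((group, partition), 0)

def process(events, store, group, partition):
    """Read (offset, payload) pairs, skipping already-checkpointed events."""
    start = store.resume_from(group, partition)
    handled = []
    for offset, payload in events:
        if offset < start:
            continue  # already processed before the disconnect
        handled.append(payload)
        store.checkpoint(group, partition, offset + 1)
    return handled
```

Replaying the same event sequence after a "crash" processes nothing twice, which is exactly the failover-resiliency property checkpointing provides.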

- connect to a partition:
  - common practice: use a "leasing mechanism" to coordinate reader
    connections to specific partitions.
- checkpointing, leasing, and managing readers are simplified by
  using the eventProcessorHost class for .NET clients.

- throughput capacity of event hubs is controlled byºthroughput unitsº
  º(pre-purchased units of capacity)º
  a throughput unit includes the following capacity:
  - ingress: up to 1 MB per second or 1000 events per second
             (excess ingress is throttled and a
              serverbusyexception is returned)
  -  egress: up to 2 MB per second or 4096 events per second.

  -Bºthroughput units are pre-purchased and are billed per hourº
  - up to 20 throughput units can be purchased for a namespace
    More throughput units in blocks of 20, up to 100 throughput units,
    can be purchased by contacting Azure support.
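From the per-TU capacities above, the number of throughput units needed is driven by whichever dimension peaks first. A back-of-envelope sizing helper (illustrative arithmetic derived from the stated limits, not an official formula):

```python
import math

def required_throughput_units(ingress_mb_s: float, ingress_events_s: float,
                              egress_mb_s: float, egress_events_s: float) -> int:
    """1 TU = 1 MB/s or 1000 ev/s ingress; 2 MB/s or 4096 ev/s egress."""
    need = max(math.ceil(ingress_mb_s / 1.0),
               math.ceil(ingress_events_s / 1000.0),
               math.ceil(egress_mb_s / 2.0),
               math.ceil(egress_events_s / 4096.0))
    return max(1, need)  # at least one TU
```

For example, 3 MB/s and 2500 events/s of ingress needs 3 TUs, even though the egress side would be satisfied with 2.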

capture events
- Azure event hubs enables you to automatically capture the
  streaming data in event hubs in an Azure BLOB storage |
  Azure data lake, in the same/different region as the event
  hub, optionally including a time or size interval.
- apache avro format used for capture:
  - compact, fast, binary format with inline schema.
- capture windowing:
  - window has (minimum size, time) configuration with a
    "first wins policy"
- storage naming convention:
    {namespace}/{eventhub}/{partitionid}
    /{year}/{month}/{day}/{hour}/{minute}/{second}
    ^ date values are padded with zeroes

Auth⅋Security model
- Azure event hubs security model:
  - only clients presenting valid credentials can send data
    to the hub.
  - a client cannot impersonate another client.
  - a rogue client can be blocked from sending data to an
    event hub.
- an event publisher defines a virtual endpoint for an event
  hub, and a Shared Access Signature (SAS) is used to Auth.
  - each client is assigned a unique token (stored locally)
  - a client can only send to one publisher with a given token
  - it is possible (not recommended) to equip devices with
    tokens granting direct access to an event hub. The device
    will be able to send messages directly and willRºNOT beº
   Rºsubject to throttling, neither can it be blacklisted.º
- to create a sas key:
  - service automatically generates a 256-bit sas key named
   ºrootmanagesharedaccesskeyºwhen creating an event hubs
    namespace. This rule has an associated pair of primary and
    secondary keys that grant send, listen, and manage rights
    to the namespace.
  - additional keys can be created.
    (a new key per event-hub recommended).
  Ex C# code:
  // create namespace manager.
  string serviceNamespace = "your_namespace";
  string namespaceManageKeyName = "rootmanagesharedaccesskey";
  string namespaceManageKey = "your_root_manage_shared_access_key";
  Uri namespaceUri = ServiceBusEnvironment.CreateServiceUri(
      "sb", serviceNamespace, string.Empty);
  TokenProvider namespaceManageTokenProvider =
      TokenProvider.CreateSharedAccessSignatureTokenProvider(
          namespaceManageKeyName, namespaceManageKey);
  NamespaceManager nm = new NamespaceManager(
      namespaceUri, namespaceManageTokenProvider);

  // create event hub with a sas rule that enables sending to it
  EventHubDescription ed =
      new EventHubDescription("my_event_hub") { PartitionCount = 32 };
  string eventHubSendKeyName = "eventhubsendkey";
  string eventHubSendKey =
      SharedAccessAuthorizationRule.GenerateRandomKey();
  SharedAccessAuthorizationRule eventHubSendRule =
      new SharedAccessAuthorizationRule(
          eventHubSendKeyName, eventHubSendKey,
          new[] { AccessRights.Send });
  ed.Authorization.Add(eventHubSendRule);
  nm.CreateEventHub(ed);

 ºgenerate tokensº(one per client)
 -Bºtokens lifespan: resembles/exceeds that of clientº
  public static string
  SharedAccessSignatureTokenProvider.GetSharedAccessSignature(
      string keyName,
      string sharedAccessKey,
      string resource,
      TimeSpan tokenTimeToLive)
  URI should be specified as:
    //[namespace]/[event hub name]/publishers/$publisher_name
    ^ different for each token.
  this method generates a token with the following structure:
    SharedAccessSignature sr={URI}⅋
      sig={hmac_sha256_signature}⅋
      se={expiration_time}⅋   ← seconds since 1970-01-01
      skn={key_name}
  Ex:
    SharedAccessSignature sr=contoso⅋
      sig=npzdnn%2gli0ifrfjwak4mkk0rqab%2byjult%2bgfmbhg77a%3d⅋
      se=1403130337⅋
      skn=rootmanagesharedaccesskey
- sending data must occur over an encrypted channel.
- blacklisting clients by token is possible in case of leak.

ºback-end applications (consumer) authenticationº
- An event hubs consumer group is equivalent to a subscription
  to a service bus topic.
- a client can create a consumer group if the request token
  grants "manage privileges" for the target event-hub|namespace.
- a client is allowed to consume data from a consumer group if
  the receive request-token grants receive rights on the target
  consumer-group|event-hub|namespace.
- no support yet for sas rules for individual subscriptions
  or event hubs consumer groups.
  - common sas-keys can be used to secure all consumer groups.
Create event-hub (cli)
$ az login
$ az account set --subscription myAzuresub
$ az group create --name $group --location eastus
$ az eventhubs namespace create \
    --name $eh_namespace \
    --resource-group $group \
    -l eastus
$ az eventhubs eventhub create \
    --name $eh_name \
    --resource-group $group \
    --namespace-name $eh_namespace

event hubs .NET API
- primary classes: (microsoft.Azure.eventhubs nuget package)
  ^ install-package microsoft.Azure.eventhubs
  -ºeventhubclientº: AMQP communication channel
  -ºeventdata     º: Represents an event, used to publish
                     messages to an event hub.
  ^ namespace microsoft.Azure.eventhubs.

º.NET event hubs clientºEx:
  private const string EventHubConnectionString =
      "event hubs namespace connection string";
  private const string EventHubName = "event hub name";

  var connectionStringBuilder =
      new EventHubsConnectionStringBuilder(EventHubConnectionString)
      { EntityPath = EventHubName };
  eventHubClient = EventHubClient.
     ºCreateFromConnectionStringº(         ← Instantiate client
          connectionStringBuilder.ToString());
  for (var i = 0; i ˂ numMessagesToSend; i++)
  {
      var message = $"message {i}";
      await eventHubClientº.SendAsyncº(    ← send event
          new EventData(Encoding.UTF8.GetBytes(message)));
  }

ºevent serializationº
- eventdata class has two overloaded constructors taking a byte
  or byte-array, representing the payload.
  To convert JSON to a payload use: Encoding.UTF8.GetBytes()

ºpartition keyº
- set in .NET partitionsender.partitionid
- Applies when: sending event data.
- Optional: round-robin used if not set.
- Round-robin is preferred for high availability, since data
  will become unavailable if the target partition is down, at
  the cost of losing consistency (pinning ordered events to a
  partition id).
- Par.Key: hashed value used to produce a partition assignment.

ºbatch event send operationsº
- sending events in batches can help increase throughput.
- eventhubclient.createbatch (.NET) sets data objects to be
  sent in the next sendasync call.
  sendasync returns a task object.
  retrypolicy class can be used to control client retry options.
  The batch object's "tryadd" can be used to ensure that the
  batch does not exceed 1 MB.

ºevent consumersº
- the eventprocessorhost class simplifies reading event data
  from event hubs. It is thread-safe, multi-process, and also
  provides checkpointing and partition lease management.
- It implements an Azure storage-based checkpointing mechanism.
  To use it, implement the interface:
    ˂˂ieventprocessor˃˃
    -------------------
    openasync
    closeasync
    processeventsasync
    processerrorasync
  Ex:
    var eventProcessorHost = new EventProcessorHost(
        eventHubName,
        PartitionReceiver.DefaultConsumerGroupName,
        eventHubConnectionString,
        storageConnectionString,
        storageContainerName);
    await eventProcessorHost.
        RegisterEventProcessorAsync   ← register instance in runtime
        ˂SimpleEventProcessor˃();
  at this point, the client attempts to acquire a lease on
  every partition in the event hub using a "greedy" algorithm.
  - leases have a timeframe and must be renewed.
 Bºbecause event hubs does not have a direct concept ofº
 Bºmessage counts, average cpu utilization is often the bestº
 Bºmechanism to measure back end or consumer scale.º
  - if publishers publish faster than consumers can process,
    the cpu increase on consumers can be used to trigger an
    auto-scale on worker instance count.

ºpublisher revocationº
- event hubs publisher revocation blocks specific publishers
  from sending events. (token compromised, bug detected,...)
  It will be the publisher's identity, part of the sas token,
  that is blocked.
A.Service bus
Service bus summary
- Message bus decoupling and communicating applications.
- It's NOT an ESB. Logic Apps serve that mission, transforming
  and filtering messages, converting formats, and finally
  placing them in a Service Bus queue.
- Decouple applications from each other.
- message is in binary format, which can contain JSON, XML, text,....

- namespace: scoping container for all messaging components.
  (Multiple queues, topics)

- Messages are sent to/received from queues.

- Messages in queues areºordered and timestampedºon arrival.
  Once accepted, message is held safely in redundant storage.
  They are delivered in pull mode (on request).

-ºtopicsº: useful in publish/subscribe scenarios.
  (vs queue, used for point-to-point communication)

  - Topic 1 ←→ N Subscriptions
                 - named entities
                 - durably created
                 - optionally expire/auto-delete.

  - rules and filters allow triggering optional actions,
    filtering specified messages, and setting or modifying
    message properties.

 2 Message      | Sessions enable joint and ordered handling of unbounded
   sessions     | sequences of related messages used for reliable FIFO
 3 Auto         | chain queue|subscription to another queue|topic
   forwarding   | in namespace.
                | Service Bus automatically will remove messages
                | placed in first queue|subscription (source)
                | and puts in second one (destination)
 4 dead-letter  | hold messages un-deliverable to any receiver,
   queue(DLQ)   | or that cannot be processed.
                | They can be removed from the DLQ or inspected.
 5 Scheduled    | submit messages for delayed processing;
   delivery     | the message only becomes visible at its
                | scheduled enqueue time.
 6 Message      | Useful when subscriber cannot process the message
   deferral     | at this moment. message will remain in the
                | queue|subscription, but it is set aside.
 7 Batching     | enables queue|topic client accumulate
                | pending to send messages for a period,
                | then send in a single batch.
 8 Transactions | group 2+ operations into an execution scope,
                | against a single messaging entity
                | (queue, topic, subscription)
 9 Filtering    | Subscribers can define filtered-in messages
   and actions  | using 1+ named-subscription-rules.
                | Each matching rule produces a message copy
                | that may be differently annotated.
10 Auto-delete  | specify idle interval after which
   on idle      | the queue is automatically deleted.
                | 5 minutes or greater.
11 Duplicate    | allow sender to re-send same
   detection    | message, and the queue or topic
                | discards any duplicate copies.
                | (If there were doubts about first outcome)
12 SAS, RBAC,   | security protocols supported out of the box
   and Managed  |
   identities   |
13 Geo-disaster | Continue operation over different
   recovery     | region or datacenter.
14 Security     | support for standard AMQP 1.0 and
                | HTTP/REST protocols.
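Feature 11 (duplicate detection) keys on the MessageId within a time window. A toy sketch of the idea (the broker does this server-side; the window bookkeeping here is simplified):

```python
import time

class DuplicateDetector:
    """Discard re-sends of a MessageId seen within the detection window."""
    def __init__(self, window_s: float):
        self.window_s = window_s
        self._seen = {}  # MessageId -> first-seen time

    def accept(self, message_id: str, now=None) -> bool:
        """True if the message is new; False if it's a duplicate re-send."""
        now = time.time() if now is None else now
        # drop entries older than the detection window
        self._seen = {m: t for m, t in self._seen.items()
                      if now - t <= self.window_s}
        if message_id in self._seen:
            return False
        self._seen[message_id] = now
        return True
```

This matches the stated use: a sender unsure of a first send's outcome can safely re-send the same MessageId, and only one copy is kept.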

event vs. message services
- important distinction: event vs message.
-ºEventº:
  - lightweight notification of state change.
    publisherRºhas no expectation about how the event is handledº.
    Consumer decides what to do with them.
    Can be classified into:
    - Discrete: data has info about what happened, but doesn't
      have the full data. ("file changed",...)
      Suitable for serverless solutions that scale.
    - Series  : report a condition and are analyzable.
      - time-ordered and interrelated.
-ºMessageº:
  - raw data to be consumed or stored.
  - The message contains the full data that triggered the
    message pipeline.
  - publisher has an expectation about how the consumer
    handles the message.BºA contract exists between end-pointsº

ºComparison of servicesº
  Service     Purpose     Type                When to use
  --------------------------------------------------------------
  Event Grid  Reactive    Event distribution  React to status
              programming (discrete)          changes
  --------------------------------------------------------------
  Event Hubs  Big data    Event streaming     Telemetry and
              pipeline    (series)            distributed
                                              data streaming
  --------------------------------------------------------------
  Service Bus High-value  Message             Order processing,
              enterprise                      financial
              messaging                       transactions
  --------------------------------------------------------------

BºQUEUESº
- FIFO message delivery to 1+ competing consumers.
- achieve "temporal decoupling" of act-react.
  (producers - consumers)
- Create queues through portal, PowerShell, CLI, or ARM
  templates.
- send and receive messages with QueueClient object.
- Receive modes:
  -ºReceiveAndDeleteº: Service Bus marks the message as
    consumed at first request.
    Simplest scenario, when application can tolerate not
    processing a message on exception.
  -ºPeekLockº: two-stage receive operation:
    -☞resilient to consumer crashes.
    1) client "read" request timeout-locks the message in the
       queue to the current client
    2) client calls CompleteAsync and the message is marked in
       the queue as consumed. (Happy path)
    2) client calls AbandonAsync and the message is unlocked.
    - Corner scenario: application crashes after processing
      the message, but before the CompleteAsync request is
      issued. if duplicate processing is not tolerated,
      additional logic is required in the application to detect
      such duplicates, based upon the MessageId property of the
      message. "Exactly Once" (vs "At least once") processing.

BºTopics and subscriptionsº
- one-to-many in publish(to topic)/subscribe(to topic) pattern.
  A topic subscription resembles a virtual queue receiving
  (copies of) the messages that are sent to the topic.
- Create topics and subscriptions:
  TopicClient class used to send messages.
  SubscriptionClient (similar to queues) used to receive
  messages. Create the subscription client instance, passing
  the name of the topic, the name of the subscription, and
  (optionally) the receive mode as parameters.
- Rules and actions:
  subscriptions can be configured to find messages with
  desired properties and then perform certain modifications
  to those properties.
  The SQL filter expression is optional; without one, any
  filter action defined on a subscription will be performed
  on all the messages for that subscription.

BºMESSAGESº
- payload : (can be empty if metadata is rich enough)
            Blind to Service Bus
- metadata: key-value pair properties
  - User properties.
  - broker properties:
    - predefined
    - control broker functionality or map to standardized items.

BºPREDEFINED PROPERTIES TABLEº
- used with all official client APIs and in the
  BrokerProperties JSON object of the HTTP protocol mapping.
- (equivalent AMQP protocol level listed in parentheses)
  + CorrelationId    (correlation-id)    ┐
  + MessageId        (message-id)        │ used to help applications
  + ReplyTo          (reply-to)          ├─route messages to particular
  + ReplyToSessionId (reply-to-group-id) │ destinations.
  + To               (to)                │
  + SessionId        (group-id)          ┘
  - ContentType (content-type)
  - DeadLetterSource
  - DeliveryCount
  - EnqueuedSequenceNumber
  - EnqueuedTimeUtc
  - ExpiresAtUtc (absolute-expiry-time)
  - ForcePersistence
  - Label (subject)
  - LockedUntilUtc
  - LockToken
  - PartitionKey
  - ScheduledEnqueueTimeUtc
  - SequenceNumber
  - Size
  - State
  - TimeToLive
  - ViaPartitionKey
- message model doesn't depend on underlying protocol
  (HTTPS/AMQP/...)

BºMESSAGE ROUTING AND CORRELATIONº
- PATTERNS:
  -ºSimple request/replyº:
      publisher → request → queue1
                  -------
                  MessageId ····························┐
                  ReplyTo ···┐                          ·
                             v  Consumer copies them    ·
                             ·  into the response:      ·
      publisher ← response ← publisher-owned queue      ·
                  --------                              ·
                  CorrelationId ←·······················┘
      ^ One message can yield multiple replies,
        identified by the CorrelationId
  -ºMulticast request/replyº: variation of request/reply.
      publisher → message → topic → multiple subscribers
      become eligible.
  -ºMultiplexingº of streams in Sessions to a single
      queue|subscription.
      SessionId used to identify the receiver
      "queue|subscription" session. The "queue|subs."
      "holding" the SessionId lock receives the message.
  -ºMultiplexed request/replyº: session feature multiplexed
      replies, allowing several publishers to share a single
      reply queue.
      publisher instructs consumer(s) to copy:
        request.SessionId → response.ReplyToSessionId,
      publisher will wait for session response.ReplyToSessionId
- Routing inside a Service Bus namespace:
  - auto-forward chaining and topic subscription rules.
- Routing across Service Bus namespaces:
  - Azure LogicApps.
 RºWARNº: "To" property is reserved for future use.
  Applications must implement routing based on user properties.

BºPAYLOAD SERIALIZATIONº
- Payload transit: opaque, binary block.
- ContentType property used to describe the payload
  (MIME content-type recommended)
  Ex: application/JSON;charset=utf-8.
- When using the AMQP protocol, the object is serialized into
  an AMQP object. The receiver can retrieve those objects with
  the GetBody˂T˃() method, supplying the expected type.
- With AMQP, objects are serialized into an AMQP graph of
  ArrayList and IDictionary˂string,object˃ objects.
  Any AMQP client can decode them.
-Bºapplications should take explicit control of objectº
 Bºserialization and turn their object graphs into streamsº
 Bºbefore including them into a message, and do the reverseº
 Bºon the receiver side.º
PRE-SETUP:ºMicrosoft.Azure.ServiceBus NuGetºpackage required

STEP 1) Prepare Azure Queue Resource
$ az group create \                 ← Create resource group
    --name myResourceGroup \
    --location eastus
$ namespaceName=myNameSpace$RANDOM
$ az servicebus namespace create \  ← Create Service Bus
    --resource-group myResourceGroup \ messaging namespace
    --name $namespaceName \
    --location eastus
$ az servicebus queue create \      ← Create Service Bus queue
    --resource-group myResourceGroup \
    --namespace-name $namespaceName \
    --name OºmyQueueº
$ connectionString=$(               ← Get connection string
    az servicebus namespace \         for namespace
      authorization-rule keys list \
      --resource-group myResourceGroup \
      --namespace-name $namespaceName \
      --name RootManageSharedAccessKey \
      --query primaryConnectionString --output tsv)
  Write down connection-string and queue-name.

STEP 2) .NET publisher code:
using System;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
...
const string ServiceBusConnectionString = "connectionString";
const string QueueName = Oº"myQueue"º;
static IQueueClient BºqueueClientº;
...
Main() {
  ...
  MainAsync().GetAwaiter().GetResult();   ← Add line
  ...
}
static async Task MainAsync() {
    const int numberOfMessages = 10;
    BºqueueClientº = new QueueClient(
        ServiceBusConnectionString, QueueName);
    Console.WriteLine("Press ENTER ...");
    await OºSendMessagesAsyncº(numberOfMessages); // ← Send messages.
    Console.ReadKey();
    await BºqueueClientº.CloseAsync();
}
static async Task SendMessagesAsync(int numberOfMessagesToSend) {
    try {
        for (var i = 0; i ˂ numberOfMessagesToSend; i++) {
            string messageBody = $"Message {i}";
            // ← Create new message to send to the queue.
            var message = new Message(
                Encoding.UTF8.GetBytes(messageBody));
            await BºqueueClientº.SendAsync(message);
        }
    } catch (Exception exception) {
        Console.WriteLine($"{exception.Message}");
    }
}
After run, check queue in Azure portal: ...
→ namespace Overview window → queue Name
→ queue Essentials screen.
Check that "Active Message Count" is 10.

STEP 3) Write code to receive messages from the queue
Main() {
  ...
  MainAsync().GetAwaiter().GetResult();
  ...
}
static async Task MainAsync() {
    queueClient = new QueueClient(
        ServiceBusConnectionString, QueueName);
    Console.WriteLine("Press ENTER....");
    RegisterOnMessageHandlerAndReceiveMessages();
    Console.ReadKey();
    await BºqueueClientº.CloseAsync();
}
Directly after the MainAsync() method, add the following method
that registers the message handler and receives the messages
sent by the sender application:
static void RegisterOnMessageHandlerAndReceiveMessages() {
    // Configure the message handler options in terms of exception
    // handling, number of concurrent messages to deliver, etc.
    var messageHandlerOptions =
        new MessageHandlerOptions(ExceptionReceivedHandler)
    {
        // Maximum number of concurrent calls to the callback
        // ProcessMessagesAsync(), set to 1 for simplicity.
        // Set it according to how many messages the application
        // wants to process in parallel.
        MaxConcurrentCalls = 1,
        // Indicates whether the message pump should automatically
        // complete the messages after returning from user callback.
        // False below indicates the complete operation is handled
        // by the user callback as in ProcessMessagesAsync().
        AutoComplete = false
    };
    // Register the function that processes messages.
    BºqueueClientº.RegisterMessageHandler(
        ProcessMessagesAsync, messageHandlerOptions);
}
Directly after the previous method, add the following
ProcessMessagesAsync() method to process the received messages:
static async Task ProcessMessagesAsync(
    Message msg,
    CancellationToken token // ← Use it to check whether queueClient
)                           //   has closed. In that case
{                           //   CompleteAsync()|AbandonAsync()|...
                            //   can be skipped, avoiding
                            //   unnecessary exceptions.
    Console.WriteLine(
        $"Received " +
        $"SequenceNumber:{msg.SystemProperties.SequenceNumber} " +
        $"Body:{Encoding.UTF8.GetString(msg.Body)}");
    await queueClient.
        CompleteAsync(                       // ← Complete to avoid receiving the
            msg.SystemProperties.LockToken); //   message again: queue client is in
}                                            //   ReceiveMode.PeekLock mode (default)

// Bº:handling exceptionsº
static Task ExceptionReceivedHandler(
    ExceptionReceivedEventArgs exceptionReceivedEventArgs) {
    Console.WriteLine($"exception: {exceptionReceivedEventArgs.Exception}.");
    var context = exceptionReceivedEventArgs.ExceptionReceivedContext;
    Console.WriteLine("Exception context for troubleshooting:");
    Console.WriteLine($"- Endpoint: {context.Endpoint}");
    Console.WriteLine($"- Entity Path: {context.EntityPath}");
    Console.WriteLine($"- Executing Action: {context.Action}");
    return Task.CompletedTask;
}
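The receive-then-CompleteAsync cycle above follows the PeekLock pattern: a received message stays on the queue but is locked (invisible) until it is completed, abandoned, or its lock expires. The following is a toy in-memory Python model of that lifecycle for intuition only; it is not the Service Bus client, and the class name, 30-second default, and lock-token check are all illustrative:

```python
import itertools
import time
import uuid

class PeekLockQueue:
    """Toy model of Service Bus ReceiveMode.PeekLock semantics:
    receive() locks a message instead of removing it; complete()
    removes it; an expired lock makes the message visible again."""

    def __init__(self, lock_seconds=30):
        self.lock_seconds = lock_seconds
        self._messages = {}               # seq → body
        self._locks = {}                  # seq → (lock_token, expiry)
        self._seq = itertools.count(1)

    def send(self, body):
        self._messages[next(self._seq)] = body

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for seq in sorted(self._messages):
            _, expiry = self._locks.get(seq, (None, 0.0))
            if expiry <= now:             # unlocked, or lock expired
                token = str(uuid.uuid4())
                self._locks[seq] = (token, now + self.lock_seconds)
                return seq, self._messages[seq], token
        return None                       # everything locked or empty

    def complete(self, seq, token):
        if self._locks.get(seq, (None, 0.0))[0] != token:
            raise ValueError("lock lost")  # akin to MessageLockLostException
        del self._messages[seq]
        del self._locks[seq]

q = PeekLockQueue()
q.send("Message 0")
q.send("Message 1")
seq, body, tok = q.receive()   # locks "Message 0"
q.complete(seq, tok)           # removes it; skipping this would make it
                               # reappear once the lock expires
```

Note how a second receive() before complete() skips the locked message and returns the next one, which is the behavior the MaxConcurrentCalls / AutoComplete settings above are tuning in the real client.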
Storage queues
A.Queue storage: service for storing large numbers of messages that
                 can be accessed from anywhere in the world via
                 authenticated calls using HTTP or HTTPS.
- queue message: ˂= 64 KB
  - Use service bus queue for larger sizes
    - up to 256 KB in standard tier
    - up to   1 MB in premium tier
  - service bus topics (vs queue) also allows
    for message filtering before processing.

- queue length : "millions of messages"
                 up to storage account limit

- Use-cases:
  - Creating a backlog of work to process asynchronously
  - Passing messages from an Azure web role to an Azure worker role

- Queue service components:
  storage account → queue/s → message/s

- URL format: https://˂storage-account˃.queue.core.windows.net/˂queue-name˃
              (queue name must be all-lowercase)
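The size limits listed above (64 KB for Storage queues, 256 KB / 1 MB for Service Bus standard/premium) can be checked before enqueueing. A minimal sketch, assuming the common client behavior of base64-encoding binary payloads for Storage queues (which inflates size by roughly 4/3); the function and table names are illustrative:

```python
import base64

LIMITS = {                                 # documented maximum message sizes
    "storage-queue": 64 * 1024,            # 64 KB
    "servicebus-standard": 256 * 1024,     # 256 KB
    "servicebus-premium": 1024 * 1024,     # 1 MB
}

def fits(payload: bytes, service: str, base64_encoded: bool = True) -> bool:
    """Return True if the payload fits the given queue service's limit.
    When base64_encoded is True, the check is applied to the encoded
    size, since base64 grows the payload before the limit applies."""
    size = len(base64.b64encode(payload)) if base64_encoded else len(payload)
    return size <= LIMITS[service]

fits(b"x" * 40_000, "storage-queue")   # ~53 KB after base64 → fits
fits(b"x" * 50_000, "storage-queue")   # ~67 KB after base64 → too big
```

This makes concrete why the notes recommend switching to Service Bus for larger payloads rather than padding against the 64 KB Storage-queue ceiling.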

PRE-SETUP:
- NuGet packages:
  - Azure SDK for .NET
  - Microsoft Azure Storage Library for .NET
    (^ already included in Azure SDK, but Bºit is recommended to installº
     Bºit again from NuGet to use the latest versionº)
    ODataLib dependencies will be resolved to NuGet packages
    (vs WCF Data Services).
- Azure Storage resource set up
- Azure Storage resource connection string, used to configure endpoints
  and credentials for accessing storage services.
  - Best Pattern: Keep it in a configuration-file:
    Visual Studio → Solution Explorer → app.config file:
    ˂configuration˃
      ˂startup˃
        ˂supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" /˃
      ˂/startup˃
      ˂appSettings˃                            ← Add here
        º˂add key="StorageConnectionString"º
        º     value="DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key" /˃º
      ˂/appSettings˃
    ˂/configuration˃
    Ex:
    ˂add key="StorageConnectionString"
         value="DefaultEndpointsProtocol=https;AccountName=storagesample;AccountKey=GMuzNHjlB3S9itqZJHHCnRkrokLkcSyW7yK9BRbGp0ENePunLPwBgpxV1Z/pVo9zpem/2xSHXkMqTHHLcx8XRA==" /˃

ºCreate the Queue service clientº
//ºCREATE QUEUE (IF NOT EXISTENT)º
CloudStorageAccount storageAccount =   // ← parse connection string
    CloudStorageAccount.Parse(         //   from config
        CloudConfigurationManager.
            GetSetting("StorageConnectionString"));
CloudQueueClient queueClient =         // ← Create the queue client,
    storageAccount.                    //   used to retrieve queues
        CreateCloudQueueClient();      //   stored in Queue storage.
CloudQueue queue = queueClient.        // ← Retrieve reference to queue.
    GetQueueReference("myqueue");
queue.CreateIfNotExists();              // ← create queue (if needed)

CloudQueueMessage message =
    new CloudQueueMessage("Hello, World");
queue.AddMessage(message);              // ← Insert message into queue

CloudQueueMessage peekedMessage =       // ← Peek next message ºwithout
    queue.PeekMessage();                //   removingº it from the queue
Console.WriteLine(peekedMessage.AsString);

ºChange in place the contents of a queued messageº
- Ex: update the status of a task/...
CloudQueueMessage message = queue.      // ← Get from queue
    GetMessage();
message.SetMessageContent("...");       // ← Update locally
queue.UpdateMessage(message,
    TimeSpan.FromSeconds(60.0),         // ← Make invisible for 60 seconds
    MessageUpdateFields.Content         //   (extra time for client to keep
    | MessageUpdateFields.Visibility);  //   working on the message locally).
Typically, a retry count is used as well: if a message is retried more
than n times, it is deleted, protecting against messages that trigger an
application error each time they are processed.

ºDE-QUEUE "next message"º
CloudQueueMessage msg02 = queue.        // ← 1) Get the next message.
    GetMessage();                       //   It becomes invisible to any
...                                     //   other code/client for 30 secs
queue.DeleteMessage(msg02);             // ← 2) delete message
                                        //   (in less than 30 secs)

ºAlternative Async-Await codeº
CloudQueueMessage msgOut03 =
    new CloudQueueMessage("My message");
await queue.AddMessageAsync(msgOut03);  // ← enqueue the message
CloudQueueMessage msgIn03 =
    await queue.GetMessageAsync();
await queue.DeleteMessageAsync(msgIn03);

Additional options for de-queuing messages:
In the following code, once 5 minutes have passed since the call to
GetMessages, any messages which have not been deleted become
visible again.
foreach (CloudQueueMessage message in
    queue.GetMessages(
        º20,º                           ← get 20 messages in one call
        ºTimeSpan.FromMinutes(5)º       ← Set invisibility to 5 minutes
    ) ) {
    ...                                 // Process in less than 5 minutes,
    queue.DeleteMessage(message);
}

ºGet the queue length (estimate)º
queue.FetchAttributes();                // ← Fetch the queue attributes.
int?
    cachedMessageCount =
        queue.ApproximateMessageCount;

ºDelete a queueº
queue.Delete();
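The retry-count pattern mentioned above (delete a message after n failed processing attempts, instead of letting it poison the queue forever) can be sketched language-agnostically. This is a hedged illustration, not SDK code: the function, the MAX_DEQUEUE value, and the poison list are all made up for the example; in real Storage-queue code the attempt number would come from the message's dequeue count:

```python
MAX_DEQUEUE = 5   # app-chosen retry limit, not an Azure default

def handle(message, dequeue_count, poison, process):
    """Retry-count guard: a message that keeps failing is parked on a
    'poison' list (and would be deleted from the main queue) instead of
    being retried indefinitely."""
    if dequeue_count > MAX_DEQUEUE:
        poison.append(message)   # park a copy, then DeleteMessage()
        return "poisoned"
    try:
        process(message)
        return "done"            # safe to DeleteMessage()
    except Exception:
        return "retry"           # leave on queue; the visibility
                                 # timeout will make it reappear
```

The "retry" branch simply returns without deleting: re-delivery then falls out of the visibility-timeout mechanism described above, not out of any explicit re-enqueue.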
Search Solution
Search Overview
- search-as-a-service cloud solution
- APIs and tools for adding a ºrich search experienceº
  over private, heterogeneous content in web, mobile,
  and enterprise applications.
BºQuery execution is over a user-defined indexº

- Build a search corpus containing only your data, sourced from
  multiple content types and platforms.
- AI-powered indexing to extract text and features from
  image files, or entities and key phrases from raw text.
- facet navigation and filters, synonyms, auto-complete,
  and text analysis for "did you mean" auto-corrected search terms.
- Add geo-search for "find near me", language analyzers for
  non-English full text search, and scoring logic for search rank.

-ºexposed through a simple REST API or .NET SDK that º
 ºmasks the inherent complexity of information retrieval.º
- Azure portal provides administration and content
  management support, with tools for prototyping and
  querying your indexes.
ºAzure Search How toº
Step 1) Provision service
  - Alternatives:
    - A.Portal:
    - A.Resource Management API:
  - Price tiers:
    - free service shared with other subscribers:
    - paid tier: dedicated resources.
      - Scale types:
        - Add Replicas  : handle heavy query loads
        - Add Partitions: grow storage for more documents

Step 2) Create index
(Before uploading searchable content)
- index: database-like table holding your data and
         accepting search queries.
- Developer defines:
  -ºindex schemaº: maps to / reflects the structure of the
                   documents you wish to search
    (fields, like in a database table).
    - Created in A.Portal or programmatically using
      .NET SDK or REST API.

Step 3) Load data
- push (SDK/REST API) or pull (from external data sources) model.
- Indexers automate aspects of data ingestion (connecting to,
  reading, serializing data, ...)
  - Indexers are available for Cosmos DB, cloud/VM-hosted SQL Database,
    and Blob storage.
  - Indexers can be configured for on-demand or scheduled data refresh.

Step 4) Search
- search queries can be done through HTTP request to service endpoint
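Step 4 boils down to a single authenticated HTTP POST against the service endpoint. A minimal Python sketch of how such a request could be assembled; the helper name and the example service/index names are illustrative, while the URL layout and "api-key" header follow the REST examples shown later in this section:

```python
import json

def search_request(service, index, query, api_key,
                   api_version="2017-11-11"):
    """Build (url, headers, body) for an Azure Search query request.
    Plug the result into any HTTP client; nothing here performs I/O."""
    url = (f"https://{service}.search.windows.net"
           f"/indexes/{index}/docs/search?api-version={api_version}")
    headers = {
        "Content-Type": "application/json",
        "api-key": api_key,            # query key is enough for searches
    }
    body = json.dumps({"search": query})
    return url, headers, body

url, headers, body = search_request("myservice", "hotels", "motel", "MY-KEY")
```

Returning the three pieces separately (rather than firing the request) keeps the sketch testable and makes the moving parts of the REST call explicit.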

ºfeature summaryº
│ Category       │   Features                                                            │
│ Full           │                                                                       │
│ text search    │ Queries using a supported syntax.                                     │
│ and text       │ Simple query syntax provides logical|phrase search|suffix             │
│ analysis       │   |precedence operators                                               │
│                │ºLucene query syntaxº:                                                 │
│                │ extensions for fuzzy search, proximity search,                        │
│                │ term boosting, and regular expressions.                               │
│ Data           │ Azure Search indexes accept data from any source, provided it is      │
│ integration    │ submitted as a JSON data structure.                                   │
│                │                                                                       │
│                │ Optionally, for supported data sources in Azure, you can use indexers │
│                │ to automatically crawl Azure SQL Database, Azure Cosmos DB, or Azure  │
│                │ BLOB storage for searchable content in primary data stores. Azure     │
│                │ BLOB indexers can perform document cracking to extract text from      │
│                │ major file formats, including Microsoft Office, PDF, and HTML         │
│                │ documents.                                                            │
│ Linguistic     │ºAnalyzersºare components used forºtext processing during indexingºand │
│ analysis       │ºsearch operationsº. There are two types.                              │
│                │ºCustom lexical analyzersº used for complex search queries             │
│                │ (phonetic matching, regular expressions).                             │
│                │                                                                       │
│                │ºLanguage analyzersº(Lucene or Microsoft) used to intelligently        │
│                │ handle language-specific linguistics including verb tenses, gender,   │
│                │ irregular plural nouns (for example, 'mouse' vs. 'mice'), word        │
│                │ de─compounding, word─breaking (for languages with no spaces), and     │
│                │ more.                                                                 │
│ Geo─search     │ Azure Search processes, filters, and displays geographic locations.   │
│                │ It enables users to explore data based on the proximity of a search   │
│                │ result to a physical location.                                        │
│ User           │ Search suggestions also work off of partial text inputs in a search   │
│ experience     │ bar, but the results are actual documents in your index rather than   │
│ features       │ query terms.                                                          │
│                │ Synonyms  associates equivalent terms that implicitly expand the      │
│                │ scope of a query, without the user having to provide the alternate    │
│                │ terms.                                                                │
│                │                                                                       │
│                │ Faceted navigation is enabled through a single query parameter. Azure │
│                │ Search returns a faceted navigation structure you can use as the code │
│                │ behind a categories list, for self─directed filtering (for example,   │
│                │ to filter catalog items by price─range or brand).                     │
│                │                                                                       │
│                │ Filters can be used to incorporate faceted navigation into your       │
│                │ application's UI, enhance query formulation, and filter based on      │
│                │ user- or developer-specified criteria. Create filters using the OData │
│                │ syntax.                                                               │
│                │                                                                       │
│                │ Hit highlighting applies text formatting to a matching keyword in     │
│                │ search results. You can choose which fields return highlighted        │
│                │ snippets.                                                             │
│                │                                                                       │
│                │ Sorting is offered for multiple fields via the index schema and then  │
│                │ toggled at query─time with a single search parameter.                 │
│                │                                                                       │
│                │ Paging  and throttling your search results is straightforward with    │
│                │ the finely tuned control that Azure Search offers over your search    │
│                │ results.                                                              │
│ Relevance      │ Simple scoring is a key benefit of Azure Search. Scoring profiles are │
│                │ used to model relevance as a function of values in the documents      │
│                │ themselves. For example, you might want newer products or discounted  │
│                │ products to appear higher in the search results. You can also build   │
│                │ scoring profiles using tags for personalized scoring based on         │
│                │ customer search preferences you've tracked and stored separately.     │
│ Monitoring     │ ─ Search traffic analytics are collected and analyzed to unlock       │
│ and reporting  │ insights from what users are typing into the search box.              │
│                │ ─ Metrics  on queries per second, latency, and throttling are         │
│                │ captured and reported in portal pages with no additional              │
│                │ configuration required. You can also easily monitor index and         │
│                │ document counts so that you can adjust capacity as needed.            │
│ Tools  for     │ In the portal, you can use the Import data wizard to configure        │
│ prototyping    │ indexers, index designer to stand up an index, and Search explorer to │
│ ⅋ inspection   │ test queries and refine scoring profiles. You can also open any index │
│                │ to view its schema.                                                   │
│ Infrastructure │ The highly available platform ensures an extremely reliable search    │
│                │ service experience. When scaled properly, Azure Search offers a 99.9% │
│                │ SLA.                                                                  │

A.portal → Create Resource → search for "Azure Search"
  → Fill details
   ºService nameº: Used for the URL endpoint. Ex: ˂name˃.search.windows.net

   ºresource groupº:
   ºhosting locationº:
   ºpricing tier(SKU)º: Free|Basic|Standard.
    RºWARNº: pricing tier cannot be changed once the service is created.
             You need to re-create the service.
    → click "Create"
      → Get anºauthorization API-keyºandºURL endpointº
        In the service overview page,
        locate and copy the URL endpoint on
        the right side of the page.

        In the left navigation pane, select
        Keys and then copy either one
        of the admin keys (they are equivalent).

PRE-SETUP: A valid API-key (created in STEP 1) is sent on every request.
           It establishes trust, on a per request basis, between the
           application and service.

-ºprimary/secondary admin keysºgrant full rights
  create/delete indexes, indexers, and data sources.
-ºquery keysº grant read-only access to indexes and documents,
              and are  typically distributed to client apps
              issuing search requests.

Ex: use appsettings.json to retrieve API-key and service name

private static SearchServiceClient
  CreateSearchServiceClient(IConfigurationRoot configuration) {
    string searchServiceName = configuration["SearchServiceName"];
    string adminApiKey       = configuration["SearchServiceAdminApiKey"];
    SearchServiceClientºserviceClientº= new    // ← Indexes prop. provides all
         SearchServiceClient(                  //   methods needed to "CRUD"
             searchServiceName,                //   Search indexes.
             new SearchCredentials(adminApiKey)//   It also manage connection/s
         );                                    //   (share instance to avoid
    return serviceClient;                      //   many open connections)
}

ºDefine Search indexº
- A single call to the Indexes.Create method creates the
  index, taking an Index instance as input.
  Initialize it as follows:
  - SetºNameº  prop.
  - SetºFieldsºprop.: Field array.
                      FieldBuilder.BuildForType can be used,
                      passing a model class for the type param.
┌···················  model class properties map/bind to the fields
·                     of the index.
·                     Field instances can also set other properties
v                     like IsSearchable, IsFilterable,...
Ex. Model class:

using System;
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;
using Microsoft.Spatial;
using Newtonsoft.JSON;

// The SerializePropertyNamesAsCamelCase attribute is defined in the
// Azure Search .NET SDK. It ensures that Pascal-case property names in
// the model class are mapped to camel-case field names in the index.
[SerializePropertyNamesAsCamelCase]             ← map to camel-case field
public partial class GºHotelº {                   index names
    [System.ComponentModel.DataAnnotations.Key] ← a field of type string
    [IsFilterable, IsSortable, IsFacetable]       must be marked as
    public string HotelId { get; set; }           the key field
    [JSONProperty("description_fr")]           ← map to another index field name
    public string DescriptionFr { get; set; }
    [IsSearchable, IsFilterable, IsFacetable]
    public stringº[]ºTags { get; set; }
    [IsFilterable, IsSortable]
    public GeographyPoint Location { get; set; }
}
BºIsSearchableº: enable full-text search

   var Bºdefinitionº = new Index() {             // ← create index definition:
       Name = "hotels",
       Fields = FieldBuilder.BuildForTypeGº˂Hotel˃º()
   };

   serviceClient.ºIndexes.Createº(Bºdefinitionº); // ← Finally create the index
                                                  //   (or CloudException thrown)
   serviceClient.Indexes.Delete("hotels");        // ← delete index

   NOTE: Example uses synch methods for clarity.
         Async ones are preferred. (CreateAsync, DeleteAsync)

ºSTEP 3.1)º
 ISearchIndexClient indexClient =    // ← Create SearchIndexClient instance
       serviceClient.Indexes.        //   to connect to the index.
           GetClient("hotels");
 // ^ ISearchIndexClient.Documents property provides
 // all the methods needed to CRUD documents in the index.
   NOTE: In typical search apps, index management and population
         is handled by a separate component from search queries.
         Indexes.GetClient is convenient for populating an index because it
         saves you the trouble of providing another SearchCredentials. It does
         this by passing the admin key that you used to create the
         SearchServiceClient to the new SearchIndexClient. However, in the
         part of your application that executes queries, it is better to
         create the SearchIndexClient directly so that you can pass in a query
         key instead of an admin key. This is consistent with the principle of
         least privilege and will help to make your application more secure.

ºSTEP 3.2)º
   Package the data to index into an IndexBatch instance,
   containing 1..N "IndexAction" objects:
        IndexAction: ( document , action )
                                    ^ upload, merge, delete, etc.

Depending on the action, only certain fields must be included in each document:
  │ Description             │ Necessary fields │ Notes                                    │
  │                         │ for each document│                                          │
  │ºUploadº== "upsert"      │ key, plus any    │ When updating/replacing an existing      │
  │                         │ other fields you │   document, any field that is not        │
  │                         │ wish to define   │   specified in the request will have its │
  │                         │                  │   field set to null. This occurs even    │
  │                         │                  │ when the field was previously set to a   │
  │                         │                  │ non─null value.                          │
  │ºMergeº existing document│ key, plus any    │ Any field you specify in a merge will    │
  │ with specified fields.  │ other fields you │ replace the existing field in the        │
  │ document must exists    │ wish to define   │ document. This includes fields of type   │
  │                         │                  │ DataType.Collection(DataType.String).    │
  │                         │                  │ For example, if the document contains a  │
  │                         │                  │ field tags with value ["budget"] and     │
  │                         │                  │ you execute a merge with value           │
  │                         │                  │ ["economy", "pool"] for tags, the final  │
  │                         │                  │ value of the tags field will be          │
  │                         │                  │ ["economy", "pool"]. It will not be      │
  │                         │                  │ ["budget", "economy", "pool"].           │
  │ MergeOrUpload           │ key, plus any    │                                          │
  │ Merge if document exists│ other fields you │                                          │
  │ Upload otherwise.       │ wish to define   │                                          │
  │ Delete (from index)     │ key only         │ Any fields you specify other than the    │
  │                         │                  │ key field will be ignored. If you want   │
  │                         │                  │ to remove an individual field from a     │
  │                         │                  │ document, use Merge instead and simply   │
  │                         │                  │ set the field explicitly to null.        │
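The Upload/Merge/MergeOrUpload/Delete semantics in the table can be mimicked with a toy in-memory model. A hedged Python sketch, not SDK code: apply_action and the FIELDS schema list are invented for the illustration; it reproduces the two behaviors worth remembering, that Upload nulls out omitted fields and that Merge replaces collections wholesale:

```python
def apply_action(index, action, doc):
    """Toy model of the indexing-actions table. 'index' is
    {key: {field: value}}; 'doc' must carry the key field HotelId."""
    FIELDS = ("HotelId", "Category", "Tags", "BaseRate")  # full schema
    key = doc["HotelId"]
    if action == "Upload":                  # upsert; omitted fields → None
        index[key] = {f: doc.get(f) for f in FIELDS}
    elif action == "Merge":                 # document must already exist;
        index[key].update(                  # each given field replaces the
            {f: v for f, v in doc.items()   # old value wholesale (including
             if f != "HotelId"})            # collection fields)
    elif action == "MergeOrUpload":
        apply_action(index, "Merge" if key in index else "Upload", doc)
    elif action == "Delete":                # key only; other fields ignored
        index.pop(key, None)

idx = {}
apply_action(idx, "Upload", {"HotelId": "1", "Tags": ["budget"]})
apply_action(idx, "Merge",  {"HotelId": "1", "Tags": ["economy", "pool"]})
# Tags is now ["economy", "pool"] — merged collections replace, not append
```

This mirrors the table's worked example: merging ["economy", "pool"] over ["budget"] yields ["economy", "pool"], not ["budget", "economy", "pool"].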

   var actions = new IndexAction˂GºHotelº˃[] {
     IndexAction.Upload( new Hotel() {
       HotelId = "1",
       Category = "Luxury",
       Tags = new[] { "pool", "view", "wifi", "concierge" },
       LastRenovationDate = new DateTimeOffset(2010, 6, 27, 0, 0, 0, TimeSpan.Zero),
       Location = GeographyPoint.Create(47.678581, -122.131577)
     }),
     IndexAction.Upload( new Hotel() { HotelId = "2", ...   }),
     IndexAction.MergeOrUpload( new Hotel() { HotelId = "3", BaseRate = 129.99, }),
     IndexAction.Delete(new Hotel() { HotelId = "6" })
   };

   var batch = IndexBatch.New(actions);  // ← 3.2) Create index batch

NOTE:º1000 documents maxºperºindexing requestº
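A tiny helper makes the 1000-documents-per-request cap concrete. Sketch only; `batches` is an illustrative name, not an SDK method:

```python
def batches(actions, batch_size=1000):
    """Split a list of indexing actions into chunks that respect the
    1000-documents-per-indexing-request limit noted above."""
    for i in range(0, len(actions), batch_size):
        yield actions[i:i + batch_size]

# 2500 actions → three requests of 1000, 1000 and 500 documents
sizes = [len(b) for b in batches(list(range(2500)))]
```

Each yielded chunk would then be wrapped in its own IndexBatch and submitted separately.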

ºSTEP 3.3)º
  Call the Documents.Index method of your SearchIndexClient to send
  the IndexBatch to your search index.
  /*-------*/try {
      indexClient.Documents.Index(batch);
  /*-------*/ } catch (IndexBatchException e) {
    // Sometimes when your Search service is under load, indexing
    // will fail for some of the documents in the batch.
    // Depending on your application, you can take compensating
    // actions like delaying and retrying. For this simple demo,
    // we just log the failed document keys and continue.
      Console.WriteLine(
        "Failed to index some of the documents: {0}",
        String.Join(", ", e.IndexingResults.
          Where(r =˃ !r.Succeeded).Select(r =˃ r.Key)));
  /*-------*/ }
  Console.WriteLine("Waiting for documents to be indexed...\n");

ºHow the .NET SDK handles documentsº
Index field names are camel-case (start with a lower-case letter),
while each public model-class property starts with an upper-case letter ("Pascal case").
BºThis is a common scenario in .NET applications that perform   º
Bºdata-binding where the target schema is outside the control ofº
Bºthe application developer.                                    º

RºWhy you should use nullable data typesº
If you use a non-nullable property, you have to guarantee
that no documents in your index contain a null value for
the corresponding field.
Neither the SDK nor the Azure Search service will help you
to enforce this.

RºIf you add a new field to an existing index, after update,º
Rºall documents will have a null value for that new field   º
Rº(since all types are nullable in Azure Search).º
 Fetch query API-keys (user key) (vs primary/secondary admin keys)
 A.Portal → Search "Keys"

private static SearchIndexClient
  CreateSearchIndexClient(IConfigurationRoot configuration) {
    string searchServiceName = configuration["SearchServiceName"];
    string queryApiKey = configuration["SearchServiceQueryApiKey"];
    SearchIndexClient indexClient =ºnewº // ←ºSTEP 4.1)ºCreate
      ºSearchIndexClient(º               //   SearchIndexClient instance
         searchServiceName,
         "hotels",                       // ← index name
         new SearchCredentials(queryApiKey));
    return indexClient;
}

ºQueries Main Types:º
-ºsearchº searches for one or more terms in all searchable
          fields in your index.
-ºfilterº evaluates a boolean expression over all
          filterable fields in an index.
 ^They can be used together or separately.

SearchParameters parameters;
DocumentSearchResult˂Hotel˃ results;
// Search index for term 'budget' and
// return "hotelName" field
parameters = new SearchParameters() {
   Select = new[] { "hotelName" }         // ← Select
};
results = indexClient.Documents.
          Search˂Hotel˃("budget",         // ←ºSearchº
                        parameters);
// Apply filter to index to find hotels
// cheaper than $150
// return hotelId
parameters = new SearchParameters() {
  Filter = "baseRate lt 150",             // ← Filter
  Select = new[] { "hotelId" }
};
results = indexClient.Documents.
          Search˂Hotel˃("*", parameters);

parameters = new SearchParameters() {      // ← Search entire index,
  OrderBy = new[] {
              "lastRenovationDate desc" }, // ← order by lastRenovationDate desc,
  Select = new[] {                         // ← show
              "hotelName",                 //      hotelName and
              "lastRenovationDate" },      //      lastRenovationDate
  Top = 2                                  // ← take top two results
};
results = indexClient.Documents.
          Search˂Hotel˃("*", parameters);

parameters = new SearchParameters();
results = indexClient.Documents.
          Search˂Hotel˃(                // ← Search entire index for
                   "motel",             //   term 'motel'
                   parameters);

ºHandle search resultsº
private static void WriteDocuments(     // ← "WriteDocuments" is an
    DocumentSearchResult˂Hotel˃ searchResults) { // illustrative helper name
    foreach (
       SearchResult˂Hotel˃ result in
       searchResults.Results) {
        Console.WriteLine(result.Document);
    }
}
Full text search
Lucene full text search: four-stage execution:
1) query parsing    : extract search terms from the query text.
2) lexical analysis : individual query terms are sometimes broken down
                      and reconstituted into new forms to cast a broader
                      net over what could be considered a potential match.
3) document matching: the search engine uses an index to retrieve
                      documents with matching terms.
4) scoring          : the result set is sorted by a relevance score
                      assigned to each individual matching document.
                      Those at the top of the ranked list are returned
                      to the calling app.

 Query text        ┌1)──────────┐ Query tree ┌4)──────────┐
 "Top 50 ..." ───→ │Query Parser│ ─────────→ │Search Index│ ───→ (ranked
                   │Simple│Full │            ├────────────┤       results)
                   └────────────┘            │Index (DDBB)│
                    Query │  ^ Analyzed      └3)──────────┘
                    terms v  │ terms
                   ┌2)──────┐
                   │Analyzer│ ← Performs lexical analysis on query terms:
                   └────────┘   transforming, removing or expanding them.

 - Query Parser: separates query terms from query operators and creates
                 the query structure (a query tree) sent to the search
                 engine.
 - Search Index: efficient data structure used to store and organize
                 searchable terms extracted from indexed documents.
                 Retrieves and scores matching documents based on the
                 contents of the inverted index.

ºAnatomy of a search requestº
- A search request is a complete specification of what should be
  returned in a result set. In its simplest form, it is an empty query
  with no criteria of any kind. A more realistic example includes
  parameters, several query terms, perhaps scoped to certain fields,
  with possibly a filter expression and ordering rules.
  The following example is a search request you might send to Azure
  Search using the REST API.
POST /indexes/hotels/docs/search?API-version=2017-11-11
{
  "search": "Spacious, air-condition* +\"Ocean view\"",
  "searchFields": "description, title",
  "searchMode": "any",
  "filter": "price ge 60 and price lt 300",
  "orderby": "geo.distance(location, geography'POINT(-159.476235 22.227659)')",
  "queryType": "full"
}
^ For this request, the search engine does the following:
- Filters out documents where the price is at least $60 and less
  than $300.
- Executes the query. For this query, the search engine scans the
  description and title fields specified in searchFields for documents
  that contain "Ocean view", and additionally the term "spacious", or
  terms that start with the prefix "air-condition".
- The searchMode parameter is used to match on any term (default) or
  all of them, for cases where a term is not explicitly required (+).
- Orders the resulting set of hotels by proximity to a given geography
  location, and then returns them to the calling application.

Most of what follows is about processing of the search query:
"Spacious, air-condition* +\"Ocean view\"".
Filtering and ordering are out of scope.

ºStage 1: Query parsingº
  "search": "Spacious, air-condition* +\"Ocean view\"",
The query parser separates operators (such as * and + in the example)
from search terms, and deconstructs the search query into subqueries
of a supported type:
- term query   for standalone terms (like spacious)
- phrase query for quoted terms (like ocean view)
- prefix query for terms followed by a prefix operator *
  (like air-condition)
Operators associated with a subquery determine whether the query
"must be" or "should be" satisfied for a document to be considered a
match. For example, +"Ocean view" is "must" due to the + operator.
The query parser restructures the subqueries into a query tree (an
internal structure representing the query) that it passes on to the
search engine. In the first stage of query parsing, the query tree
looks like this:
[Diagram: Boolean query tree, searchMode=any]

ºSupported query parsers: Simple (default) and Full Luceneº
Use the queryType parameter to choose one of them.
-ºSimple query languageº: intuitive and robust, often suitable to
  interpret user input as-is without client-side processing. It
  supports query operators familiar from web search engines.
-ºFull Lucene query languageº: extends the default Simple query
  language by adding support for more operators and query types like
  wildcard, fuzzy, regex, and field-scoped queries.

ºsearchMode parameterº:
- default operator for Boolean queries:
  - any (default): space delimiter works as OR
  - all          : space delimiter works as AND
Suppose that we now set searchMode=all. In this case, the space is
interpreted as an "and" operation: each of the remaining terms must
be present in the document to qualify as a match. The sample query
would be interpreted as follows:
  +Spacious,+air-condition*+"Ocean view"
A modified query tree for this query would be as follows, where a
matching document is the intersection of all three subqueries:

[Diagram: Boolean query tree, searchMode=all]

Choosing searchMode=any over searchMode=all is a decision best arrived
at by running representative queries. Users who are likely to include
operators (common when searching document stores) might find results
more intuitive if searchMode=all informs boolean query constructs.

ºLexical analysis and document retrieval in Azure Searchº
Lexical analyzers process term queries and phrase queries after the
query tree is structured. An analyzer accepts the text inputs given
to it by the parser, processes the text, and then sends back tokenized
terms to be incorporated into the query tree.
The most common form of lexical analysis is linguistic analysis, which
transforms query terms based on rules specific to a given language:
- Reducing a query term to the root form of a word
- Removing non-essential words (stopwords, such as "the" or "and"
  in English)
- Breaking a composite word into component parts
- Lower-casing an upper-case word
All of these operations tend to erase differences between the text
input provided by the user and the terms stored in the index. Such
operations go beyond text processing and require in-depth knowledge
of the language itself. To add this layer of linguistic awareness,
Azure Search supports a long list of language analyzers from both
Lucene and Microsoft.

Note: Analysis requirements range from minimal to elaborate depending
on your scenario. You can control the complexity of lexical analysis
by selecting one of the predefined analyzers or by creating your own
custom analyzer. Analyzers are scoped to searchable fields and are
specified as part of a field definition. This allows you to vary
lexical analysis on a per-field basis. Unspecified, the standard
Lucene analyzer is used.

In our example, prior to analysis, the initial query tree has the term
"Spacious," with an uppercase "S" and a comma that the query parser
interprets as part of the query term (a comma is not considered a
query language operator). When the default analyzer processes the
term, it will lowercase "ocean view" and "spacious", and remove the
comma character. The modified query tree will look as follows:

[Diagram: Boolean query tree with analyzed terms]

ºTesting analyzer behaviorsº
The behavior of an analyzer can be tested using the Analyze API.
Provide the text you want to analyze to see what terms a given
analyzer will generate.
For example, to see how the standard analyzer would process the text
"air-condition", you can issue the following request:

  {
    "text": "air-condition",
    "analyzer": "standard"
  }

The standard analyzer breaks the input text into the following two
tokens, annotating them with attributes like start and end offsets (used
for hit highlighting) as well as their position (used for phrase
matching):

  {
    "tokens": [
      { "token": "air",       "startOffset": 0, "endOffset": 3,  "position": 0 },
      { "token": "condition", "startOffset": 4, "endOffset": 13, "position": 1 }
    ]
  }

Exceptions to lexical analysis
Lexical analysis applies only to query types that require complete
terms: either a term query or a phrase query. It doesn't apply to query
types with incomplete terms (prefix query, wildcard query, regex query)
or to a fuzzy query. Those query types, including the prefix query with
the term air-condition* in our example, are added directly to the query
tree, bypassing the analysis stage. The only transformation performed on
query terms of those types is lowercasing.

Document retrieval
Document retrieval refers to finding documents with matching terms in
the index. This stage is understood best through an example. Let's start
with a hotels index having the following simple schema:

  {
    "name": "hotels",
    "fields": [
      { "name": "id",          "type": "Edm.String", "key": true, "searchable": false },
      { "name": "title",       "type": "Edm.String", "searchable": true },
      { "name": "description", "type": "Edm.String", "searchable": true }
    ]
  }

Further assume that this index contains the following four documents:

  {
    "value": [
      { "id": "1", "title": "Hotel Atman",
        "description": "Spacious rooms, ocean view, walking distance to the beach." },
      { "id": "2", "title": "Beach Resort",
        "description": "Located on the north shore of the island of Kauaʻi. Ocean view." },
      { "id": "3", "title": "Playa Hotel",
        "description": "Comfortable, air-conditioned rooms with ocean view."
      },
      { "id": "4", "title": "Ocean Retreat",
        "description": "Quiet and secluded" }
    ]
  }

How terms are indexed
To understand retrieval, it helps to know a few basics about indexing.
The unit of storage is an inverted index, one for each searchable field.
Within an inverted index is a sorted list of all terms from all
documents. Each term maps to the list of documents in which it occurs,
as evident in the example below.

To produce the terms in an inverted index, the search engine performs
lexical analysis over the content of documents, similar to what happens
during query processing:
- Text inputs are passed to an analyzer, lower-cased, stripped of
  punctuation, and so forth, depending on the analyzer configuration.
- Tokens are the output of text analysis.
- Terms are added to the index.

It's common, but not required, to use the same analyzers for search and
indexing operations so that query terms look more like terms inside the
index.

Returning to our example, for the title field the inverted index looks
like this:

  Term     Document list
  atman    1
  beach    2
  hotel    1, 3
  ocean    4
  playa    3
  resort   2
  retreat  4

In the title field, only "hotel" shows up in two documents: 1, 3.

For the description field, the index is as follows:

  Term         Document list
  air          3
  and          4
  beach        1
  conditioned  3
  comfortable  3
  distance     1
  island       2
  kauaʻi       2
  located      2
  north        2
  ocean        1, 2, 3
  of           2
  on           2
  quiet        4
  rooms        1, 3
  secluded     4
  shore        2
  spacious     1
  the          1, 2
  to           1
  view         1, 2, 3
  walking      1
  with         3

Matching query terms against indexed terms
Given the inverted indices above, let's return to the sample query and
see how matching documents are found. Recall that the final query tree
looks like this:

(diagram: Boolean query with analyzed terms)

During query execution, individual queries are executed against the
searchable fields independently:
- The TermQuery, "spacious", matches document 1 (Hotel Atman).
- The PrefixQuery, "air-condition*", doesn't match any documents.
This is a behavior that sometimes confuses developers. Although the term
air-conditioned exists in the document, it is split into two terms by
the default analyzer. Recall that prefix queries, which contain partial
terms, are not analyzed. Therefore terms with the prefix "air-condition"
are looked up in the inverted index and not found.

The PhraseQuery, "ocean view", looks up the terms "ocean" and "view" and
checks the proximity of the terms in the original document. Documents 1,
2 and 3 match this query in the description field. Notice that document
4 has the term ocean in the title but isn't considered a match, as we're
looking for the "ocean view" phrase rather than the individual words. On
the whole, for the query in question, the matching documents are 1, 2
and 3.

Document scoring
Every document in a search result set is assigned a relevance score. The
function of the relevance score is to rank higher those documents that
best answer a user question as expressed by the search query. The score
is computed based on statistical properties of the terms that matched.
At the core of the scoring formula is TF/IDF (term frequency-inverse
document frequency). In queries containing rare and common terms, TF/IDF
promotes results containing the rare term. For example, in a
hypothetical index with all Wikipedia articles, from documents that
matched the query "the president", documents matching on "president" are
considered more relevant than documents matching on "the".

Scoring example
Recall the three documents that matched our example query:

  search=Spacious, air-condition* +"Ocean view"

  {
    "value": [
      { "@search.score": 0.25610128,
        "id": "1", "title": "Hotel Atman",
        "description": "Spacious rooms, ocean view, walking distance to the beach." },
      { "@search.score": 0.08951007,
        "id": "3", "title": "Playa Hotel",
        "description": "Comfortable, air-conditioned rooms with ocean view."
      },
      { "@search.score": 0.05967338,
        "id": "2", "title": "Ocean Resort",
        "description": "Located on a cliff on the north shore of the island of Kauai. Ocean view." }
    ]
  }

Document 1 matched the query best because both the term spacious and the
required phrase ocean view occur in the description field. The next two
documents match only the phrase ocean view. It might be surprising that
the relevance scores for documents 2 and 3 differ even though they
matched the query in the same way. That's because the scoring formula
has more components than just TF/IDF. In this case, document 3 was
assigned a slightly higher score because its description is shorter.
Learn about Lucene's Practical Scoring Formula to understand how field
length and other factors can influence the relevance score.

Some query types (wildcard, prefix, regex) always contribute a constant
score to the overall document score. This allows matches found through
query expansion to be included in the results, but without affecting the
ranking. An example illustrates why this matters. Wildcard searches,
including prefix searches, are ambiguous by definition because the input
is a partial string with potential matches on a very large number of
disparate terms (consider an input of "tour*", with matches found on
"tours", "tourettes", and "tourmaline"). Given the nature of these
results, there is no way to reasonably infer which terms are more
valuable than others. For this reason, term frequencies are ignored when
scoring results in queries of type wildcard, prefix and regex. In a
multi-part search request that includes partial and complete terms,
results from the partial input are incorporated with a constant score to
avoid bias towards potentially unexpected matches.

Score tuning
There are two ways to tune relevance scores in Azure Search:
- Scoring profiles promote documents in the ranked list of results based
  on a set of rules.
  In our example, we could consider documents that matched in the title
  field more relevant than documents that matched in the description
  field. Additionally, if our index had a price field for each hotel, we
  could promote documents with a lower price.
- Term boosting (available only in the Full Lucene query syntax)
  provides a boosting operator ^ that can be applied to any part of the
  query tree. In our example, instead of searching on the prefix
  air-condition*, one could search for either the exact term
  air-condition or the prefix, ranking documents that match the exact
  term higher by applying a boost to the term query:
  air-condition^2||air-condition*.

Scoring in a distributed index
All indexes in Azure Search are automatically split into multiple
shards, allowing the service to quickly distribute the index among
multiple nodes during scale up or scale down. When a search request is
issued, it's issued against each shard independently. The results from
each shard are then merged and ordered by score (if no other ordering is
defined). It is important to know that the scoring function weighs a
query term's frequency against its inverse document frequency across all
documents within the shard, not across all shards! This means a
relevance score could be different for identical documents if they
reside on different shards. Fortunately, such differences tend to
disappear as the number of documents in the index grows, due to more
even term distribution. It's not possible to assume on which shard any
given document will be placed. However, assuming a document key doesn't
change, it will always be assigned to the same shard.

In general, document score is not the best attribute for ordering
documents if order stability is important. For example, given two
documents with an identical score, there is no guarantee which one
appears first in subsequent runs of the same query.
Document score should only give a general sense of document relevance
relative to other documents in the result set.

Wrap-up
The success of internet search engines has raised expectations for full
text search over private data. For almost any kind of search experience,
we now expect the engine to understand our intent, even when terms are
misspelled or incomplete. We might even expect matches based on
near-equivalent terms or synonyms that we never actually specified.

From a technical standpoint, full text search is highly complex,
requiring sophisticated linguistic analysis and a systematic approach to
processing in ways that distill, expand, and transform query terms to
deliver a relevant result. Given the inherent complexities, there are a
lot of factors that can affect the outcome of a query. For this reason,
investing the time to understand the mechanics of full text search
offers tangible benefits when trying to work through unexpected results.
This article explored full text search in the context of Azure Search,
giving sufficient background to recognize potential causes and
resolutions for addressing common query problems.
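The retrieval mechanics just described (an inverted index per searchable field, term lookups, and position-based phrase matching) can be sketched as a toy Python example. This illustrates the concept only, not Azure Search's implementation; the "analyzer" here is a crude split-and-lowercase stand-in:

```python
# Toy inverted index over the article's four hotel descriptions.
from collections import defaultdict
import re

docs = {
    "1": "Spacious rooms, ocean view, walking distance to the beach.",
    "2": "Located on the north shore of the island of Kauai. Ocean view.",
    "3": "Comfortable, air-conditioned rooms with ocean view.",
    "4": "Quiet and secluded",
}

def analyze(text):
    # Minimal stand-in for the standard analyzer:
    # split on non-letters, lowercase each token.
    return [t.lower() for t in re.split(r"[^a-zA-Z]+", text) if t]

# term -> list of (doc id, token position) postings
index = defaultdict(list)
for doc_id, text in docs.items():
    for pos, term in enumerate(analyze(text)):
        index[term].append((doc_id, pos))

def term_query(term):
    # TermQuery: every document whose posting list contains the term.
    return {doc_id for doc_id, _ in index.get(term, [])}

def phrase_query(phrase):
    # PhraseQuery: terms must appear at consecutive positions.
    terms = analyze(phrase)
    hits = set()
    for doc_id, pos in index.get(terms[0], []):
        if all((doc_id, pos + i) in index.get(t, [])
               for i, t in enumerate(terms[1:], 1)):
            hits.add(doc_id)
    return hits

print(term_query("spacious"))      # only doc 1 contains "spacious"
print(phrase_query("ocean view"))  # docs 1, 2, 3; doc 4 lacks the phrase
```

Note how "air-condition" finds nothing here either: indexing splits "air-conditioned" into "air" and "conditioned", mirroring the PrefixQuery pitfall described above.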

AWS (v0.1)
External Links
- Tutorials
Who is who
Forcibly incomplete but still pertinent list of "core" people:

- Danilo Poccia: @[]
  - Principal Evangelist, Serverless @AWSCloud.
  - Author of AWS Lambda in Action from Manning.

- Jeff Barr:
  - Chief Evangelist for AWS. Author of blog @[] (Since 2004)

- Varun Jewalikar, Software Engineer at Prime Video (See AWS Chaos Engineering)
- Adrian Hornsby: Principal Developer Advocate (Architecture) at AWS
AWS-DevOps Essential
Price Calculator
- Simple Monthly Calculator:
The actual cost can be observed on AWS' billing page.
At the bottom of the page, there is a "Set your first billing alarm"
link that allows you to define an email alert as soon as a certain
threshold is exceeded.

RºWARN: for users that are not in the East of the USº
 """ I was a little bit confused that the  "Set your first billing alarm"
     link @[ºus-east-1º&#s=Alarms&alarmAction=ListBillingAlarms]
     contains a variable ºregion=us-east-1º, while I am using resources in
     The corresponding linkºregion=eu-central-1º...
     does NOT allow to set any billing alarms.
     I assume that billing for all regions is performed centrally in US East
     for all regions (I hope).
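The same billing alarm can also be defined through the CloudWatch API. The parameters below are a sketch mirroring what the console wizard creates; the alarm name and the 10 USD threshold are made-up values, and the actual boto3 call (commented out) needs credentials and must target us-east-1, where billing metrics live:

```python
# Sketch: CloudWatch billing alarm parameters (hypothetical values).
# Requires "Receive Billing Alerts" to be enabled in the account first.
alarm = {
    "AlarmName": "billing-over-10-usd",          # made-up name
    "Namespace": "AWS/Billing",
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                             # 6h; billing data updates slowly
    "EvaluationPeriods": 1,
    "Threshold": 10.0,
    "ComparisonOperator": "GreaterThanThreshold",
}

# The call itself needs credentials and the us-east-1 region:
# import boto3
# cw = boto3.client("cloudwatch", region_name="us-east-1")
# cw.put_metric_alarm(**alarm, AlarmActions=["arn:aws:sns:..."])
```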
OnPremise vs AWS 
           TRADITIONAL                                    AWS
        INFRASTRUCTURE                         INFRASTRUCTURE
 │ Firewalls,ACLs,Admins          │   Security Groups/Network ACLs/AWS IAM │
 │ Router,Network Pipeline,Switch │   ELB, VPC                             │
 │ On─Premises Servers            │   AMI ──→ EC2 Instances                │
 │ DAS, SAN, NAS  RDBMS           │   EBS   EFS    S3   RDS                │
AWS Free Tier
- sign into @[]
- scroll down and push the "Get Started for Free" button.
  - free tier trial account
    - up to 12 months
    - up to two times 750 hrs of computing time (Linux and Windows each);
    - Linux/Windows 2012 server on a small VM:
      - t1.micro is free tier
      - t2.nano  isRºNOTº free tier
Guided tour
of core products
- TODO: Installation:

  - configure the aws CLI like:
  $ aws configure
  AWS Access Key ID [****************FJMQ]:
  AWS Secret Access Key [****************DVVn]:
  Default region name [eu-central-1a]: eu-central-1
  Default output format [None]:

  Ex 1:
  $ aws ec2 describe-key-pairs --key-name AWS_SSH_Key

AWS CLI v2 @[ ] - Includes SSO and Interactive Usability Features.
AWS Security

-  User, Group, and Role management with IAM
-  Audit trails with CloudTrail
-  Threat detection and intelligence with GuardDuty
-  Encryption with KMS
(I)dentity and (A)ccess (M)anagement
IAM Tags
  Amazon Web Services (AWS) recently enabled tags for IAM users and roles to
ease the management of IAM resources. Notably, this release also includes the
ability to embrace attribute-based access control (ABAC) and match AWS
resources with IAM principals dynamically to "simplify permissions management
at scale"
(K)ey (M)anagement (S)ervice
Secret Manager
AWS Secrets Manager helps you protect secrets needed to access your
applications, services, and IT resources. The service enables you to easily
rotate, manage, and retrieve database credentials, API keys, and other
secrets throughout their lifecycle. Users and applications retrieve secrets
with a call to Secrets Manager APIs, eliminating the need to hardcode
sensitive information in plain text. Secrets Manager offers secret rotation
with built-in integration for Amazon RDS, Amazon Redshift, and Amazon
DocumentDB. Also, the service is extensible to other types of secrets,
including API keys and OAuth tokens. In addition, Secrets Manager enables you
to control access to secrets using fine-grained permissions and audit secret
rotation centrally for resources in the AWS Cloud, third-party services,
and on-premises.
- Rotate secrets safely
- Manage access with fine-grained policies
- Secure and audit secrets centrally
- Pay as you go

Amazon announced the launch of the AWS Secrets Manager, which makes it easy
for customers to store and retrieve secrets using an API or the AWS Command
Line Interface (CLI). Furthermore, customers can rotate their credentials
with the built-in schedule feature or custom Lambda functions. The AWS
Secrets Manager enables users to centralize the management of secrets of
distributed services and applications.
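A minimal sketch of the "retrieve instead of hardcode" pattern described above. The boto3 call is shown commented out because it requires credentials; the secret name "prod/db" and the credential fields are made up, but the SecretString-as-JSON parsing mirrors how applications typically consume the value:

```python
# Sketch: consume a Secrets Manager secret instead of hardcoding it.
import json

# With credentials configured, the real retrieval would be:
# import boto3
# sm = boto3.client("secretsmanager")
# secret_string = sm.get_secret_value(SecretId="prod/db")["SecretString"]
#
# SecretString is typically a JSON document; a stand-in for this sketch:
secret_string = '{"username": "app", "password": "s3cr3t", "host": "db.internal"}'

creds = json.loads(secret_string)
dsn = f"postgresql://{creds['username']}@{creds['host']}/appdb"
# The password is passed separately to the driver, never logged.
```

Because the app re-reads the secret through the API, built-in rotation can change the password without a redeploy.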
Automate Sec.Rule Update

- STEP 1: Verify that AWS user has the needed rights/permissions.
          for AmazonEC2FullAccess policy

- STEP 2: Test that you can see the security policies
          Example output
          $ aws ec2 describe-security-groups
          → {
          →   "SecurityGroups": [
          →     {
          →       "IpPermissionsEgress": [
          →         ...(egress rules)...
          →       ],
          →       "Description": "default VPC security group",
          →       "IpPermissions": [
          →         ...(ingress rules)...
          →       ],
          →       "GroupName": "default",
          →       "VpcId": "vpc-a6e13ecf",
          →       "OwnerId": "923026411698",
          →       "GroupId": "sg-0433846d"
          →     },
          → ...(other security groups)...
          → }

- STEP 3:
  Test adding/removing new ingress rules:
  $ EXTERNAL_IP01=$(wget -qO -)        ← URL of a "what is my IP" service (elided)
  $ CidrIp01="${EXTERNAL_IP01}/32"
  $ IP_PERMISSIONS='[{'
  $ IP_PERMISSIONS="${IP_PERMISSIONS} \"IpProtocol\": \"tcp\","
  $ IP_PERMISSIONS="${IP_PERMISSIONS} \"FromPort\"  : 22,"
  $ IP_PERMISSIONS="${IP_PERMISSIONS} \"ToPort\"    : 22,"
  $ IP_PERMISSIONS="${IP_PERMISSIONS} \"IpRanges\"  : [{\"CidrIp\": \"${CidrIp01}\"}]"
  $ IP_PERMISSIONS="${IP_PERMISSIONS} }]"
  $ SG_ID="sg-0123456d"

  $ aws ec2 authorize-security-group-ingress --group-id ${SG_ID} \
        --dry-run \                            ← Check changes. Do not update
        --ip-permissions "${IP_PERMISSIONS}"   ← double quotes: expand variable
    remove like:
  $ aws ec2    revoke-security-group-ingress --group-id ${SG_ID} \
        --ip-permissions "${IP_PERMISSIONS}"
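The shell string-concatenation in STEP 3 is easy to get wrong; the same --ip-permissions payload can instead be built as a data structure and serialized once. A sketch (203.0.113.7 is a placeholder for the caller's external IP):

```python
# Build the JSON payload expected by
# `aws ec2 authorize-security-group-ingress --ip-permissions ...`
import json

external_ip = "203.0.113.7"  # placeholder; normally detected at runtime

ip_permissions = [{
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": f"{external_ip}/32"}],
}]

payload = json.dumps(ip_permissions)
# The payload would then be passed to the CLI, e.g.:
#   aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
#       --ip-permissions "$payload"
```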
AWS CloudTrail is a service that enables governance, compliance, operational
auditing, and risk auditing of your AWS account. With CloudTrail, you can
log, continuously monitor, and retain account activity related to actions across
your AWS infrastructure. CloudTrail provides event history of your AWS
account activity, including actions taken through the AWS Management Console,
AWS SDKs, command line tools, and other AWS services. This event history
simplifies security analysis, resource change tracking, and troubleshooting.

Amazon GuardDuty is a threat detection service that continuously monitors for
malicious activity and unauthorized behavior to protect your AWS accounts and
workloads. With the cloud, the collection and aggregation of account and
network activities is simplified, but it can be time consuming for security
teams to continuously analyze event log data for potential threats. With
GuardDuty, you now have an intelligent and cost-effective option for
continuous threat detection in the AWS Cloud. The service uses machine
learning, anomaly detection, and integrated threat intelligence to identify
and prioritize potential threats. GuardDuty analyzes tens of billions of
events across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC
Flow Logs, and DNS logs. With a few clicks in the AWS Management Console,
GuardDuty can be enabled with no software or hardware to deploy or maintain.
By integrating with AWS CloudWatch Events, GuardDuty alerts are actionable,
easy to aggregate across multiple accounts, and straightforward to push into
existing event management and workflow systems.

Amazon has added another set of new threat detections to its GuardDuty service
in AWS. The three new threat detections are two new penetration testing
detections and one policy violation detection.

Amazon GuardDuty is a threat detection service available on AWS that
continuously monitors for malicious or unauthorized behaviour to help customers
protect their AWS accounts and workloads. When a threat is detected, the
service will send a detailed security alert to the GuardDuty console and AWS
CloudWatch Events – thus making alerts actionable and easy to integrate into
existing event management and workflow systems.
Cloud Custodian
- Open-source Cloud Security, Governance, and Management
  The Path to a Well Managed Cloud

- Cloud Custodian enables users to be well managed in the cloud. The simple YAML
  DSL allows you to easily define rules to enable a well-managed cloud
  infrastructure, that's both secure and cost optimized. It consolidates many of
  the ad-hoc scripts organizations have into a lightweight and flexible tool,
  with unified metrics and reporting.
- Package providing classes to parse AWS IAM and Resource Policies.
- Additionally, it can expand wildcards in Policies using permissions
  obtained from the AWS Policy Generator.
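As a hedged sketch of the YAML DSL mentioned above: custodian itself reads YAML, so this shows the equivalent data structure; the policy name and tag key are made up. The policy would stop any EC2 instance that has no "owner" tag:

```python
# Hypothetical Cloud Custodian policy, expressed as the Python
# equivalent of the YAML custodian consumes:
#
#   policies:
#     - name: ec2-missing-owner-tag
#       resource: ec2
#       filters:
#         - "tag:owner": absent
#       actions:
#         - stop
policy = {
    "policies": [{
        "name": "ec2-missing-owner-tag",   # made-up policy name
        "resource": "ec2",                 # resource type to scan
        "filters": [{"tag:owner": "absent"}],  # match untagged instances
        "actions": ["stop"],               # remediation action
    }]
}
```

Running it would be a matter of writing the YAML form to a file and invoking `custodian run` against it.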
VM images types
Amazon VM images types:
- AKI: Amazon Kernel Image
- AMI: Amazon Machine Image
- ARI: Amazon Ramdisk Image

EC2 introduction

- By default, AWS assigns a dynamic private and a dynamic public IP address.
 ºpublic IP address and DNS name will change every time you restart the instanceº
- Deleting an instance is done by "Terminating" it.
  (it will still be visible in the instance dashboard as "Terminated" for a "long time")

Ex: Install Ubuntu from EC2 image repository
  - Enter EC2 Console then "Launch Instance".
    - Choose "Ubuntu HVM version" (looks to have better performance)
    Rºonly t1.micro is available for "Free tier"º
  - Review setup and "Launch"

  - Adapt Security Settings
    - click on "Edit security groups"
      - From the drop down list of the Source, select "My IP",
        then press "Review and Launch".
    - review instance data again and "Launch"

  - Create and download ºSSH Key Pairº
    - call the key "AWS_SSH_key" and download the generated PEM file

  - Check Instance Status
    - After clicking on the Instance Link, we will see that
      the instance is running and the "Status Checks" are being performed.
    - Public IP and DNS name (FQDN) will be displayed too.
      (remember that they change every time the image is started,
       a so-called Elastic IP -free of charge- needs to be rented
       from AWS to avoid this)

ºSTOP the Instance on AWSº
- Select the instance, choose Actions→Instance State→Stop.

ºDESTROY the Instance on AWSº
- Select the instance, choose Actions→Instance State→Terminate.

   Vagrant potentially allows for more sophisticated provisioning tasks
   than AWS CLI commands, such as software installation and the upload
   and execution of arbitrary shell scripts.

NOTE: Vagrant creates a local dummy Vagrant box supporting the AWS provider,
  used only to spin up a remote AWS (AMI) image in the cloud.
  ºNo Vagrant box is uploaded during the processº

  - (Optional) Set HTTP proxy, if needed
    - export http_proxy=''
    - export https_proxy=''
      replace "export" with "set" on Win*

  - Install the VagrantºAWS pluginº
    $ vagrant plugin install vagrant-aws

  -ºDownload dummy box:º
    $ vagrant box add dummy \

  - "init vagrant enviroment"
    $ mkdir MyVagrantBox
    $ cd    MyVagrantBox
    $ vagrant init
              Will create a template Vagrantfile

  - Add next lines to "Vagrantfile"
    # Vagrantfile
    Vagrant.configure(2) do |config|
     config.vm.provider :aws do |aws, override|
       aws.access_key_id = ENV['AWS_KEY']
       aws.secret_access_key = ENV['AWS_SECRET']
       aws.keypair_name = ENV['AWS_KEYNAME']
       aws.ami = "ami-87564feb"                         ← See ami list in EC2 web console
       aws.region = "us-west-1"                         ← adapt to your (signed in) region
       aws.instance_type = "t2.micro"
       = "dummy"                        ←  Problem:
                                                           - Most boxes do not support AWS.
                                                           Workaround:
                                                           - load a dummy box with the AWS provider
                                                             and override the image that is
                                                             spun up in the Cloud

       override.ssh.username = "ubuntu"
       override.ssh.private_key_path = ENV['AWS_KEYPATH']  ← EC2 console/Net.Sec/Key Pairs
     end
    end

   - Add an IAM user and apply the appropriate permissions
    - if not already done, create a new user on the AWS IAM Users page.
    - Assign required access rights to user like:
      - go to @[]
                                                               adapt to your setup
      - Click the "Get Started" button, if the list of policies is not visible already:
        you should see the list of policies and a filter field.
      - In the Filter field, search for the term ºAmazonEC2FullAccessº (Policy)
      - Click on this policy, then choose the tab Attached Identities.
      - Click "Attach" button and attach the main user.

  - create the launch script like:
    export AWS_KEY='your-key'            ← Create them on the "users" tab
    export AWS_SECRET='your-secret'        of the IAM console:
    export AWS_KEYNAME='your-keyname'      - click on "create new users"
    export AWS_KEYPATH='your-keypath'        You will be displayed the needed key/secret
    vagrant up --provider=aws

  - ./
    Bringing machine 'default' up with 'aws' provider...
    ==> default: Warning! The AWS provider doesn't support any of the Vagrant
    ==> default: high-level network configurations (``). They
    ==> default: will be silently ignored.
    ==> ...
    ==> default: Waiting for SSH to become available...
    ... Rº(can take up to 20 minutes in free-tier)º
    ==> default: Machine is booted and ready for use!

  - Update the security group manually to allow SSH access to the instance.
    (Appendix B shows how to  automate with a shell script)
    Go to EC2 console/ºNetwork&Securityº/Sec.Groups,
    - we can find the default security group.
    - Edit the inbound rule to allow the current source IP address.

ºDestroy the Instance (save money!!!) º

  $ vagrant destroy

AWS Provider

- interact with many AWS resources.

- Terraform provider credentials must be configured (TODO)
  The following methods are supported:
   - Static/hardcoded credentials. Rº(discouraged)º
     provider "aws" {
       region     = "us-west-2"
       access_key = "my-access-key"
       secret_key = "my-secret-key"
     }

provider "aws" {                 # ← STEP 1: Set provider
  version = "~˃ 2.0"
Gºregion  = "us-east-1"º         # ← alt: $ export AWS_DEFAULT_REGION="us-west-1"
# Credentials
# access_key =Rº"my-access-key"º # ← Hardcoded credentials are discouraged
# secret_key =Rº"my-secret-key"º # BºAlt 1: Use next ENV.VARsº
                                 #     (override use of AWS_SHARED_CREDENTIALS_FILE/AWS_PROFILE)
                                 #     -BºAWS_ACCESS_KEY_ID º
                                 #     -BºAWS_SECRET_ACCESS_KEY º
                                 #     -BºAWS_SESSION_TOKEN º (if applicable)
                                 #   Alt 2: Use Shared credentials file
                                 #     $HOME/.aws/credentials
                                 #     ^^^^^^^^^^^^^^^^^^^^^^
                                 #     Default location can be replaced
                                 #     with the AWS_SHARED_CREDENTIALS_FILE ENV.VAR
                                 #     (a matching profile is also supported via
                                 #      the AWS_PROFILE ENV.VAR)
                                 #     provider "aws" {
                                 #       ...
                                 #       shared_credentials_file = "/Users/tf_user/.aws/creds"
                                 #       profile                 = "customprofile"
                                 #     }
                                 #     AWS_SDK_LOAD_CONFIG=1 for advanced AWS client configs,
                                 #                           (profiles using source_profile or
                                 #                            role_arn configs)

resource "aws_vpc" "example" {   # ← STEP 2: Create VPC
  cidr_block = ""

resource list

AWS Auto Scaling monitors your applications and automatically adjusts
capacity to maintain steady, predictable performance at the lowest
possible cost. It makes it easy to set up application scaling for
multiple resources across multiple services:
- EC2 instances and Spot Fleets
- ECS tasks
- DynamoDB tables and indexes
- Aurora Replicas.
ssh over SSM
SSH over AWS SSM. No bastions or public-facing instances. SSH user management
through IAM. No requirement to store SSH keys locally or on server.
Firecracker Lightweight Virtualization

Firecracker implements a virtual machine monitor (VMM) that uses the Linux
Kernel-based Virtual Machine (KVM) to create and manage microVMs. Firecracker has
a minimalist design. It excludes unnecessary devices and guest functionality to
reduce the memory footprint and attack surface area of each microVM.
This improves security, decreases the startup time, and increases hardware utilization.
Firecracker currently supports Intel CPUs, with planned AMD and Arm support.
Firecracker will also be integrated with popular container runtimes such as containerd.
- ECS: Elastic Container Service
- """ Highly secure, reliable, and scalable way to run containers """
- "because ECS has been a foundational pillar for key Amazon services,
   it can natively integrate with other services such as Amazon Route 53,
   Secrets Manager, AWS Identity and Access Management (IAM),
   and Amazon CloudWatch, providing you a familiar experience to deploy
   and scale your containers."

ECR
- ECR: Elastic Container Registry (private "Dockerhub") @[]

AWS CI flow
- AWS CI flow "==" CodePipeline + CodeBuild.

  Ex. Dev Pipeline:
                                                 Flux CD scans the Registry
                              AWS CodeBuild      every 2 minutes for new images
                                                              v
  │Dev│ → git push → ┌──────────┐ → (run Buildspec) → │AWS│ → │AWS│
                     │ AWS      │    - testº*1º       │ECR│   │EKS│
                     │CodeCommit│    - package          ^
                     │repository│    - build OCI image  On new image detected,
                     └────┬─────┘                       trigger new deployment
                     (or Github: integration with
                      ECR will be similar)

  º*1:º Run unit tests, Sonarqube(QA), ...
        Deploy sources to the S3 repository of artifacts.

  Buildspec:
  - build (stages) specification YAML file
  - collection of build commands and related settings
  - it can be placed in the source code or in S3.
  - note: if 'install' or 'pre_build' fails, the build stops.
          if 'build' fails, 'post_build' is still executed.

  Example Buildspec:

  version: 0.2                ← Recommended version
  phases:
  ºinstall:º                  ← Setup build packages and variables
  · runtime-versions:
  ·   java: corretto11
  · commands:
  ·   # Install Maven
  ·   - wget
  ·   - tar xzvf apache-maven-3.6.3-bin.tar.gz -C /opt/
  ·   # Add Maven to classpath
  ·   - export PATH=/opt/apache-maven-3.6.3/bin:$PATH
  ·   # Extract Artifact ID
  ·   - ARTIFACT_ID=$(mvn help:evaluate -Dexpression=project.artifactId -q -DforceStdout)
  ·   # Extract Group ID and get its domain
  ·   - GROUP_ID=$(mvn help:evaluate -Dexpression=project.groupId -q -DforceStdout)
  ·   - DOMAIN=${GROUP_ID##*.}
  ·   # Extract Version
  ·   - VERSION=$(mvn help:evaluate -Dexpression=project.version -q -DforceStdout)
  ·   # Login to AWS ECR
  ·   - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
  ·   # Assign ECR Repository URI
  ·   - REPOSITORY_URI=${AMI_ID}${DOMAIN}-${ARTIFACT_ID}
  ºbuild:º
  · commands:
  ·   # Validate, compile and test
  ·   - mvn test -T 2C
  ºpost_build:º
  · commands:
  ·   - |
  ·     if [[ $CODEBUILD_BUILD_SUCCEEDING == 1 ]] ; then
  ·       mvn deploy \
  ·         -Dmaven.test.skip \
  ·         -DaltDeploymentRepository=snapshots::default::s3://s3mavenrepo01/snapshots
  ·       aws s3 cp s3://automation-yaml-files-location/obp-microservice-deploy/Dockerfile Dockerfile
  ·       docker build -t $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION \
  ·         --build-arg ARTIFACT_ID=$ARTIFACT_ID --build-arg VERSION=$VERSION .
  ·       docker push $REPOSITORY_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
  ·     fi
ECS vs EKS @[]
Setup CodeCommit Access
REF: @[]
(Cli REF @[])
- Setup for HTTPS users using Git credentials
  configure Git credentials:
  - simplest way: Setup credentials in IAM console
    then use those credentials for HTTPS connections.

NOTE: if local computer already configured to use 
      credential helper for CodeCommit, then remove 
      such info in '.gitconfig'.

Step 1: Initial configuration for CodeCommit

Follow these steps to set up an AWS account, create an IAM user, and configure access to CodeCommit.

To create and configure an IAM user for accessing CodeCommit → Sign Up
      → Create or reuse IAM user 
        - Check  access-key-ID and a secret-access-key associated
          to IAM user are in place.
        - Check  no policies attached to the user expressly deny
          AWS KMS actions required by CodeCommit.
          (CodeCommit requires 'KMS')
          (see AWS KMS and encryption)
        → Sign into
          → Open IAM console
            → choose Users (@navigation pane)
              → choose IAM user for CodeCommit:
                → choose Add Permissions (@Permissions tab)
                  → "Attach existing policies directly"
                    (@Grant permissions)
                    → List of Policies: 
                      select AWSCodeCommitPowerUser 
                      (or another managed policy for CodeCommit access).
                      → Next → Review (review list of policies)
                        →  click "Add permissions".

      → Create Git credentials for HTTPS connections to CodeCommit
        RºWARNº: Sign in as IAM user who will create and use Git credentials
          → Users → choose IAM user from list.
            → Security Credentials tab@User Details
              → HTTPS Git credentials for AWS CodeCommit
                → Generate (You cannot choose your own user name
                  or password for Git credentials).
                → Copy user name + password  and Close
                  RºWARNº: Password can not be recovered later on;
                           it must be reset.

      → Connect to the CodeCommit console and clone the repository
        (Can be skipped if an admin has already sent the name and
         connection details for the CodeCommit repository)
        → Open
          → Choose correct AWS Region.
            → Find and choose the appropriate repository
              → click Clone URL  → choose protocol → copy URL
              → At local PC console: $ git clone $URL 
                $ git clone\
                            /v1/repos/MyDemoRepo my-demo-repo

- See also:
Serverless Lambdas
Infra. As Code
Home   @[]
Doc    @[]
GitHub @[]
Gitter @[]
StackO @[]

-ºJAVA API Ref:º

- TypeScript and Python:
RºNote: AWS CDK is preferred:º
  - 10 lines of AWS-CDK can produce a 500-line CloudFormation config file.

AWS CloudFormation:
- common language to describe and provision all the infrastructure
  resources in a cloud environment, using a programming language or a
  simple text file to model and provision, in an automated and secure
  manner, all the resources needed for your applications across all
  regions and accounts.
ºThis gives you a single source of truth for your AWS resources.º
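That "simple text file" can be very small; a minimal sketch (stack name is illustrative, the deploy command is commented because it needs AWS credentials):

```shell
# Smallest useful CloudFormation template: one S3 bucket,
# with the bucket name left for AWS to generate
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal demo stack
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
EOF

# aws cloudformation deploy --template-file template.yaml --stack-name demo-stack
grep 'Type:' template.yaml
```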

- See also @[]

- Sceptre (CloudFormation wrapper):
  """ I've used a tool called Sceptre with a lot of success. I've found
  using vanilla CloudFormation via the aws-cli very frustrating in
  comparison.
  1. It defines some nice conventions for using CloudFormation. For
     example, stack configuration (parameters, region, etc.) is stored
     in YAML files in a configuration directory, while CloudFormation
     templates are stored in the templates/ directory. Stacks are named
     by convention from the file name and path.
  2. The CLI is a lot easier to use. Rather than switching between
     create-stack, delete-stack, and update-stack, you can simply run
     'sceptre launch'. Sceptre will figure out if the stack needs to be
     created or is in an UPDATE_ROLLBACK_FAILED state, and either create
     the stack, update it, or delete it and re-create it. Additionally,
     it shows the output of the CloudFormation events pane right in the
     CLI, so you don't have to navigate windows to see logs.
  3. You can extend Sceptre to add functionality. For example, we store
     some secrets in SecretsManager and it was trivially easy to
     configure Sceptre to pull a secret out of SecretsManager and pass
     the encrypted string as a parameter to a CloudFormation template.
  4. It allows you to use Jinja templates, which greatly simplifies
     CloudFormation templates with a lot of repetition (e.g. VPC/Subnet
     stacks across multiple AZs).
  I was personally drawn to it because I could use native CloudFormation
  and all of my other tooling and resources; it just adds a very nice
  convention on top. We've used it for the last 10 months or so and it
  has been very, very nice! """
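The directory/naming conventions described above can be sketched like this (all names are illustrative, and the 'template_path' key reflects Sceptre's documented config layout; 'sceptre launch' is commented since it needs AWS access):

```shell
# Conventional Sceptre project: config/ for stack settings, templates/ for CFN
mkdir -p sceptre-demo/config/dev sceptre-demo/templates

cat > sceptre-demo/config/dev/vpc.yaml <<'EOF'
template_path: vpc.yaml     # resolved against templates/
parameters:
  CidrBlock: 10.0.0.0/16
EOF

cat > sceptre-demo/templates/vpc.yaml <<'EOF'
Parameters:
  CidrBlock:
    Type: String
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref CidrBlock
EOF

# Stack name follows the config path ("dev/vpc"); create or update with:
# (cd sceptre-demo && sceptre launch dev/vpc)
ls sceptre-demo/config/dev sceptre-demo/templates
```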

Is there a common wrapper around AWS/Azure/GCloud/...?
"""...For serverless infrastructure you could use Serverless Framework
  (, tag:serverless-framework).
- With this framework you can deploy serverless infra to all these clouds
  with minimal changes to the actual source code."""
Hybrid Cloud
AWS Outposts
Amazon Releases AWS Outposts, Enabling Hybrid Data Center Architectures

In a recent blog post, Amazon announced the release of AWS Outposts,
which allows AWS customers to take advantage of a single-vendor
compute and storage solution. The Outposts architecture is based upon
Amazon public cloud compute architecture but is hosted in a customer
data center. This solution allows customers to take advantage of AWS
technology, but addresses local processing and low latency
requirements. Customers place infrastructure orders online, Amazon
will then ship the modular compute rack and a trained AWS technician
will connect, set up and validate the installation.
Systems Manager
- Remote Management agent based platform for configuring, controlling,
  and governing on premise servers from within the EC2 console.
- Install the Systems Manager agent on an on-premises server, then
  execute commands remotely, ensure servers remain in a specific state,
  and enforce configuration-management requirements.
- An elastic network interface is a logical networking component in a
  VPC that represents a virtual network card. It can include the
  following attributes:
  - A primary private IPv4 address from the IPv4 address range of your VPC
  - One or more secondary private IPv4 addresses from the IPv4 address range of your VPC
  - One Elastic IP address (IPv4) per private IPv4 address
  - One public IPv4 address
  - One or more IPv6 addresses
  - One or more security groups
  - A MAC address
  - A source/destination check flag
  - A description
Network:VPC
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically
isolated section of the AWS Cloud where you can launch AWS resources in a
virtual network that you define. You have complete control over your virtual
networking environment, including selection of your own IP address range,
creation of subnets, and configuration of route tables and network gateways.
You can use both IPv4 and IPv6 in your VPC for secure and easy access to
resources and applications.
Network:Elastic Load Balancing
Elastic Load Balancing automatically distributes incoming application traffic
across multiple targets, such as Amazon EC2 instances, containers, IP
addresses, and Lambda functions. It can handle the varying load of your
application traffic in a single Availability Zone or across multiple
Availability Zones. Elastic Load Balancing offers three types of load balancers
that all feature the high availability, automatic scaling, and robust security
necessary to make your applications fault tolerant.
Network:Direct Connect
AWS Direct Connect is a cloud service solution that makes it easy to establish
a dedicated network connection from your premises to AWS. Using AWS Direct
Connect, you can establish private connectivity between AWS and your
datacenter, office, or colocation environment, which in many cases can reduce
your network costs, increase bandwidth throughput, and provide a more
consistent network experience than Internet-based connections.
DNS:Route 53
Amazon Route 53 is a highly available and scalable cloud Domain Name System
(DNS) web service. It is designed to give developers and businesses an
extremely reliable and cost effective way to route end users to Internet
applications by translating names like into the numeric IP
addresses like that computers use to connect to each other. Amazon
Route 53 is fully compliant with IPv6 as well.
Storage Matrix
ºElastic Block Store(EBS)º            ºElastic File System(EFS)º
 - Persistent local storage for:      - simple, scalable, elastic FS for
   - EC2                                BºLinux-based workloadsº for use
   - databases                          with Cloud/on-premises resources.
   - data warehousing                   -BºScales on demand to petabytesº
   - enterprise applications              without disrupting apps,shrinking
   - Big Data processing                  when removing files.
   - Backup/recovery

ºFSx for Lustreº                     GºSimple Storage Service ("S3")º
- Fully managed FS optimized for      - scalable, durable platform to make data
  compute-intensive workloads(IA,       accessible from Internet for
  media data processing, ...)           user-generated content, active archive,
  seamlessly integrated with S3         serverless computing, Big Data storage
                                        backup and recovery

ºS3 Glacier/Glacier Deep-Archiveº     ºAWS Storage Gatewayº
- Highly affordable long-term storage - hybrid storage cloud augmenting
  classes that can replace tape for     Bºon-premisesº environment
  archive and regulatory compliance     Bºfor bursting, tiering or migrationº

ºCloud Data Migration Servicesº        ºAWS Backupº
- services portfolio to help simplify  - fully managed backup service that
  moving data of all types and sizes     makes it easy to centralize and automate
  into and out of the AWS cloud          the back up of data across AWS services
                                         in cloud/Storage-gateway
- EFS: Elastic File System Service
- simple, scalable, fully managed elastic ºNFS file systemº (Cloud and on-premises)
Lake Formation
- fully managed service that makes it much easier for customers
   to build, secure, and manage data lakes.
Decoupled system integration. Events and Message queues
Recently Amazon announced the general availability of the Schema
Registry capability in the Amazon EventBridge service. With Amazon
EventBridge Schema Registry, developers can store the event
structure (or schema) in a shared central location and map those
schemas to code for Java, Python, and TypeScript, meaning that they
can use events as objects in their code.

With this new feature, Amazon's EventBridge is now a competitive
service in comparison with other cloud vendors that provide similar
services. Microsoft offers EventGrid, which has been GA since the
beginning of 2018 and received several updates including advanced
filtering, retry policies, and support for CloudEvents. However, the
service lacks a schema registry capability. Moreover, the same
applies to Triggermesh's EveryBridge. This event bus can consume
events from various sources, which developers can use to start
serverless functions that are running on any of the major cloud
providers as well as on-premises.
Full Text Search
- highly accurate and easy to use enterprise search service
- powered by machine learning.
- natural language search capabilities to websites and applications
- content added from file systems, SharePoint, intranet sites,
  file sharing services, ... into a centralized location.
CloudWatch: monitor, troubleshoot, and optimize
- Check aws.region!!!!
- monitoring and observability service built for:
  - DevOps engineers
  - developers
  - site reliability engineers (SREs)
  - IT managers.
- Cloud and on-premises
- CloudWatch provides data and actionable insights to:
  - monitor applications/resources/services
  - respond to system-wide performance changes
  - optimize resource utilization
  - get a unified view of operational health.
- collects data from logs, metrics, and events
- detect anomalous behavior and set alarms.

CWL @[]
- CloudWatch Logs CLI, helping to monitor CloudWatch logs on the command line.
- The AWS CLI displays logs in JSON format; while that can be processed with
  another tool like jq, it's a bit of a pain.
- cwl simplifies parameters, choosing sane defaults.
ElastiCache
- seamlessly set up, run, and scale popular open-source-compatible
  in-memory data stores in the cloud. Build data-intensive apps or boost
  the performance of your existing databases by retrieving data from
  high-throughput, low-latency in-memory data stores. Amazon ElastiCache
  is a popular choice for real-time use cases like caching, session
  stores, gaming, geospatial services, real-time analytics, and queuing.

Amazon ElastiCache offers fully managed Redis and Memcached for your most
demanding applications that require sub-millisecond response times.
RDS proxy
- fully managed, highly available database proxy for MySQL and PostgreSQL
  running on Amazon RDS and Aurora.
- Tailored to ºarchitectures opening/closing database connections at a high rateº

- RDS Proxy allows apps to pool and share connections established with the database.
- Avoid exhausting database memory and compute resources.
- Corey Quinn, cloud economist and author of the Last Week in AWS newsletter,
  summarized: "...This solves the '10,000 Lambda functions just hugged your
  database to death' ..."
App Development
Easily manage software development activities in one place.

- Applications on AWS.
- Set up your entire continuous delivery toolchain 
  (develop, build, deploy/delivery) in minutes.
- Unified user interface.
- Easily manage access with project built-in role-based
  policies that follow IAM best practices:
  - owners
  - contributors
  - viewers
  (No need to manually configure custom policies for each service)

- Integration with AWS CodeDeploy and AWS CloudFormation 
  to deploy in EC2 and AWS Lambda.

- project templates for EC2, AWS Lambda, and AWS Elastic Beanstalk
  (Java, JS, Python, Ruby, PHP)
  -  Visual Studio, Eclipse or AWS cli.  
- project management dashboard.
  - issue tracking (powered by JIRA)
    from backlog of work items to teams' recent code deployments.

  - Charged only for AWS resources provisioned for devel/run

- Source Control alternatives:
  - AWS CodeCommit
  - GitHub

- Centralize monitoring for commits, builds, tests, deployments.

BºAWS CodeCommitº:
  - "GitHub"-like fully-managed source-control service hosting
    secure and scalable private Git repositories.
  - High Availability and Durability
    (S3 and DynamoDB as storage backend).
  - Encrypted data redundantly stored across multiple facilities.
  - up to 1,000 repositories by default and no limits upon request.
  - AWS SNS Notifications and Custom Scripts.
    notifications include status message + link to resource
  - Amazon SNS HTTP webhooks or Lambda functions reactive

BºAWS CodeBuildº:
  - fully-managed build service (build, test, integrate)

BºAWS CodePipelineº:
  - Continuous integration and continuous delivery (CI/CD) service.
  - Each project comes pre-configured with an automated pipeline
    that continuously builds, tests, and deploys your code with 
    each commit.
- PlantUML sprites, macros, stereotypes, and other goodies for
  creating PlantUML diagrams with AWS components.
API Gateway
- Create, maintain, and secure APIs at any scale
- fully managed service that makes it easy for
  developers to create, publish, maintain, monitor, and secure APIs.
- APIs act as the "front door" for applications .
- Using API Gateway, you can create RESTful APIs and WebSocket APIs that enable
  real-time two-way communication applications.
- supports containerized and serverless workloads, as well as web applications.
- Shields developers from traffic management, CORS support, AAA, throttling,
  monitoring, and API version management.
- No minimum fees or startup costs!!!.
  pay for the API calls you receive and the amount of data transferred out

Lambda Authorizer @[]
- As the name suggests, a Lambda Authorizer placed on an Amazon API Gateway
  can provide authorization on requests, but did you know it can also enrich
  the request with additional session information?
AWS: Non Classified
Chaos Engineering

Varun Jewalikar, Software Engineer at Prime Video, and Adrian
Hornsby, Principal Developer Advocate (Architecture) at AWS, write
that typical chaos engineering experiments include simulating
resource exhaustion and a failed or slow network. There are
countermeasures for such scenarios but "they are rarely adequately
tested, as unit or integration tests generally can't validate them
with high confidence".

AWS Systems Manager is a tool that can perform various operational
tasks across AWS resources with an agent component called SSM Agent.
The agent - pre-installed by default on certain Windows and Linux
AMIs - has the concept of "Documents" which are similar to runbooks
that can be executed. It can run simple shell scripts too, a feature
leveraged by the AWSSSMChaosRunner. The SendCommand API in SSM
enables running commands across multiple instances, which can be
filtered by AWS tags. CloudWatch can be used to view logs from all
the instances in a single place.

Scaleway (v0.1)
- Scaleway console :
- API documentation:
C14 Cold Storage
Block Storage:
- powered by SSDs offering 5,000 IOPS ºmonth price:º€0.08/GB
- public beta (2019).
- 99.99% SLA
- full replication of data.
Scaleway 2020-02-28:
  74 GB Free, then €0.01/GB
- the world's first alternative to S3 Glacier,
  fully integrated into Object Storage.
- C14 Cold Storage is an S3-compatible cold storage. It lets you archive
  your data in a fallout shelter located 25 meters underground in Paris.
  Both archiving and restoring are totally free. Using C14 Cold Storage,
  you simply pay for the data stored, at only €0.002/GB/month, a price
  five times lower than Object Storage's standard class.
PostGIS extension on Mng DBs
Deploy Infra with Terraform
Tutorial showing how to:
- Install Terraform
- Connect Terraform to Scaleway cloud by creating an API Token
- Create a first Instance using Terraform
- Modify an Instance using Terraform
- Add resources to an infrastructure managed by Terraform
- Delete infrastructures using Terraform
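A minimal sketch of the first steps, assuming the 'scaleway/scaleway' provider from the public Terraform registry (directory and resource names are illustrative; the terraform commands are commented since they need the API token from the tutorial):

```shell
mkdir -p scw-demo
cat > scw-demo/main.tf <<'EOF'
terraform {
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
}

# One public IP as a small first managed resource
resource "scaleway_instance_ip" "public_ip" {}
EOF

# export SCW_ACCESS_KEY=... SCW_SECRET_KEY=...   # the API token
# terraform -chdir=scw-demo init && terraform -chdir=scw-demo plan
cat scw-demo/main.tf
```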
Setup Moodle Learning Platform

Moodle is an open source Learning Management System (LMS) that
provides educators and students with a platform and various tools to
create and participate in collaborative online learning environments.
encrypt Object Storage (rclone)

Other Clouds
- provider of scalable, enterprise file storage in the cloud
Serverless OWASP TOP 10

Microsoft Ships Preview of Cluster-Friendly Cloud Disks
Azure Pipelines
- Automate builds and deployments.

- Build, test, and deploy Node.js, Python, Java, PHP, Ruby, C/C++,
  .NET, Android, and iOS apps. Run in parallel on Linux, macOS, and
  Windows.

- Deploy to Azure, AWS, and GCP.
Azure Government Governance

Microsoft recently expanded its Microsoft Learn platform with an
introductory class on Azure Government. Azure Government is
Microsoft's solution for hosting United States government solutions
on its cloud.

Government services have unique requirements in terms of security and
compliance, but public cloud solutions can provide scale, elasticity
and resilience that are not easily achieved with on-premises
solutions. Available in eight U.S. regions, namely Chicago, Dallas,
New York, Phoenix, San Antonio, Seattle, Silicon Valley, and
Washington DC, it is specifically built for the U.S. government's
needs. As such, it follows numerous compliance standards, from both
the U.S. and abroad (E.U., India, Australia, China, Singapore and
others). Some of the compliance standards are Level 5 Department of
Defence approval, FedRAMP High and DISA L4 (42 services) and L5 (45
services in scope). Microsoft Azure Government is operated using
completely separate physical infrastructure within Azure.
AD Node.js Lib

Windows Azure Active Directory Authentication Library (ADAL) for Node.js

The ADAL for Node.js library makes it easy for Node.js applications
to authenticate to AAD in order to access AAD-protected web
resources. It supports 3 authentication modes.
Azure Samples: AD
SVFS is a Virtual File System over Openstack Swift built upon fuse.
It is compatible with hubiC, OVH Public Cloud Storage and basically
every endpoint using a standard Openstack Swift setup. It brings a
layer of abstraction over object storage, making it as accessible and
convenient as a filesystem, without being intrusive on the way your
data is stored.
AWS availability zone
- In AWS, "local network" basically means "availability zone". If two instances
  are in the same AWS availability zone, they can just put the MAC
  address of the target computer on the packet, and the packet will get
  to the right place. It doesn't matter what IP address is on the packet!
Openfit: ChatOps
Openfit: ChatOps with Slack and AWS Lambda
Pricing: S3 vs Az. vs B2
ADFS as Identity Provider for A.AD B2C
Google Service Directory
Google Introduces Service Directory to Manage All Your Services in One Place at Scale
In a recent blog post, Google introduced a new managed service on its
Cloud Platform (GCP) called Service Directory. With this service,
Google allows customers to publish, discover, and connect services
consistently and reliably, regardless of the environment and platform
where they reside.

Service Directory, currently available as beta, is a service designed
by Google for looking up services. For its users, the service provides
real-time information about all their services in a single place,
allowing them to perform service inventory management at scale,
regardless of the number of endpoints.

Google Cloud software engineer Matt DeLoria and product manager
Karthik Balakrishnan stated in the Service Directory announcement
blog post:

    Service Directory reduces the complexity of management and
operations by providing unified visibility for all your services
across cloud and on-premises environments. And because Service
Directory is fully managed, you get enhanced service inventory
management at scale with no operational overhead, increasing the
productivity of your DevOps teams.


With Service Directory users can define services with metadata,
allowing them to group services together while quickly making the
endpoints understood by their consumers and applications.
Furthermore, users can use the service to register different types of
services and resolve them securely over HTTP and gRPC. And finally,
for DNS clients they can leverage Service Directory's private DNS
zones, a feature that automatically updates DNS records as services
change.
Datadog
- founded in 2010[6] by Olivier Pomel and Alexis Lê-Quôc.
- offers a cloud infrastructure monitoring service, with a dashboard,
  alerting, and visualizations of metrics. As cloud adoption increased,
  Datadog grew rapidly and expanded its product offering to cover
  service providers including Amazon Web Services (AWS), Microsoft
  Azure, Google Cloud Platform and Red Hat OpenShift.
- Datadog uses a Go-based agent, rewritten from scratch since its major
  version 6.0.0, released on February 28, 2018.[2]
- In 2015 Datadog announced the acquisition of Mortar Data,[8] bringing
  on its team and adding its data and analytics capabilities to
  Datadog's platform. That year Datadog also opened a research and
  development office in Paris.[9]
Terraform Cloud Dev Kit
AWS, HashiCorp, and Terrastack collaborated to release a preview of
the Cloud Development Kit (CDK) for Terraform, or cdktf. Developers
can use programming languages like Python or Typescript to manage
infrastructure as code. cdktf generates a Terraform configuration in
JSON that can deploy resources with a "terraform apply" command.
Also, cdktf supports any existing modules and providers from the
Terraform registry to deploy resources to AWS, Azure, or Google Cloud.
TOP serverless Vendors
What typical 100% Serverless Architecture looks like in AWS!
Graviton EC2

AWS provides various Amazon Elastic Compute Cloud (EC2) instances,
including a broad choice of Graviton2 processor-based, which allow
customers to optimize their workloads on performance and costs. The
latest addition to the Graviton2-based instances is the low cost
burstable general-purpose T4g instances.

In the past, AWS released EC2 T3 instances for customers to run
general-purpose workloads in a very cost-effective manner. Yet,
customers asked for instances that could run at increased peak
performance at a low cost. Hence, AWS announced the release of T4g
instances, a new generation of low-cost burstable instance type
powered by AWS Graviton2 - a processor custom-built by AWS using
64-bit Arm Neoverse cores.
Is the AWS Free Tier Really Free?

""... The AWS Free Tier is free in the same way that a table saw is
childproof. If you blindly rush in to use an AWS service with the
expectation that you won’t be charged, you’re likely to lose a
hand in the process...""

- three different offers depending on the product used:
- "always free":
  - 1 million requests per month on AWS Lambda
  - 25GB of storage on DynamoDB.
- "12 months free":
  - Amazon EC2
  - RDS
-  short-term "trials":
  - Amazon Inspector
  - GuardDuty.

- long term risk associated with this complexity:

"""..It seems pretty sensible to spin up your free EC2 instance in a
  private subnet—and then Rºyou're very reasonably surprised when you º
 Rº get charged almost $80 a month for the Managed NAT Gateway attached º
 Rº to that subnet. This has an unfortunate side effect of teaching º
 Rº beginners to use AWS services in ways that won't serve them well in º
 Rº corporate environments.º

 BºOracle, Azure, and GCP have all mastered this problem in a far moreº
 Bºcomprehensive, less user-hostile way.º

   Azure free account includes 12 months of popular services and $200
   credit, the Google Cloud free program offers 20+ products and $300
   credit. BºAn important difference with AWS is the ability to not be º
 Bºcharged until the user manually switches to a paid account.º

   Corey Quinn closes with advice for users who receive unexpected bills:

    Open a ticket with AWS Support. ... Bº If it’s your first time with anº
  Bºoverage, they will almost universally waive the fee.º
- Build form-like apps in "minutes".
AzureRM Terraform Provider 2.0

 This release includes an overhaul of how two core resources are
described, an introduction of custom timeouts

- The WYSIWYG of the Multi-Cloud. 
- Design, Deploy and Depict all resources across all cloud providers around the world. 

- Terraform file generated & versioned
  """ You can define variables to use in your products configuration, 
     visualise the resulting code and download it.  Or simply commit your 
     architecture using the integrated Git."""

- Worldwide overview of all your resources
  Visualize all cloud resources around the world for all cloud providers in ONE place.
s5cmd
- very fast S3 and local filesystem execution tool.
- s5cmd supports a wide range of object-management tasks, both for
  cloud storage services and local filesystems:
- List buckets and objects
- Upload, download or delete objects
- Move, copy or rename objects
- Set Server Side Encryption using AWS Key Management Service (KMS)
- Set Access Control List (ACL) for objects/files on the upload, copy, move.
- Print object contents to stdout
- Create buckets
- Summarize objects sizes, grouping by storage class
- Wildcard support for all operations
- Multiple arguments support for delete operation
- Command file support to run commands in batches at very high execution speeds
- Dry run support
- S3 Transfer Acceleration support
- Google Cloud Storage (and any other S3 API compatible service) support
- Structured logging for querying command outputs
- Shell auto-completion
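A few representative invocations (bucket names and paths are placeholders; the S3-touching commands are commented because they need credentials, while the command file illustrates the batch 'run' feature):

```shell
# Single commands:
# s5cmd ls                                    # list buckets
# s5cmd cp 's3://my-bucket/logs/*' ./logs/    # wildcard download
# s5cmd --dry-run rm 's3://my-bucket/tmp/*'   # preview deletions only

# Batch mode: one operation per line, executed at high speed by 's5cmd run'
cat > commands.txt <<'EOF'
cp local/file1.txt s3://my-bucket/backup/
rm s3://my-bucket/tmp/old.log
EOF
# s5cmd run commands.txt
cat commands.txt
```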

AWS Cloud Shell
- Targeting administrators and developers .
- Includes aws CLI v2 and Amazon Linux 2 OS
   installed and configured (Bash, zsh, PowerShell, editors,
   Git source control, and package management – npm/JS, ...).
- When accessing AWS CloudShell, the user is automatically
  authenticated with the console credentials already in use.
- file uploads up to 1GB (to $HOME),ºpersisted between sessionsº
  (1 GB of persistent storage per AWS region)
- root access provided.
- outbound connections allowed, inbound connections rejected.

"""...  The point of CloudShell is to easily use AWS CLI without setting 
  it up and setting the credentials; however, to use this from your own 
  terminal, it means you have to install software and then configure 
  credentials- well then that would be exactly the same as installing 
  AWS CLI and configuring it...
Bº    The main value is if you're on a machine that isn't your normal  º
Bºwork machine and you want quick access to the CLI without installing º
Bºthe CLI itself and adding your credentials.                      º

- Note: Microsoft and GCP offer Cloud Shell since 2017 (5GB persistence storage)

OºUp to 10 concurrent shells in each region at no chargeº:
 - unofficial AWS CloudShell plugin for VS Code: