Blog

  • docker-tcce

    Docker-TCCE

    Dockerized image with an exporter for the tcce library.

    What is it good for?

    With tcce we can query traefik’s Consul ACME storage and generate easily
    accessible entities from it.
    The idea is to export your valid Let’s Encrypt certificates into a specific
    directory so they can be used by other applications, such as

    • Mailserver
    • FTP-Server
    • LDAP
    • and all other services not covered by traefik

    Installation

    You will need a Docker runtime to run containers from this image.

    Usage

    $ docker-compose up
    

    In docker-compose.yml you can control the export via environment variables.

    CRON_PATTERN: 28 3 * * *
    

    Runs the certificate export each day at 03:28 AM. Cron patterns are accepted. For more detail, have a look at jmettraux/rufus-scheduler

    FIRST_IN: 10s
    

    In development you likely do not want to wait a day for an export. Define a time period to wait before a single execution. For more detail, have a look at jmettraux/rufus-scheduler

    CONSUL_URL: http://dc1.consul:8300
    

    The URL of your running Consul server(s)

    CONSUL_ACL_TOKEN: xxxxxxxx-yyyy-zzzz-1111-222222222222
    

    We need a Consul ACL Token to query the ACME object (see CONSUL_KV_PATH)

    CONSUL_KV_PATH: traefik/acme/account/object

    The Consul path to the ACME account object written by traefik

    CA_FILE: /usr/src/app/ca.crt
    

    You can provide a CA certificate for communication with your Consul server (see CONSUL_URL)

    EXPORT_DIRECTORY: /export
    

    Define a directory to export the certificates to. The directory should be mounted into the container so that you can access your certificates externally

    EXPORT_OVERWRITE: true
    

    If traefik renews a certificate, you may want to overwrite the old one. true is the default

    BUNDLE_CERTIFICATES: true
    

    Usually you need certificate files with the intermediate certificate included. If not, you can set the generation to false

    LOG_LEVEL: INFO
    

    Controls the log level of the output written to the container console

    TZ: Europe/Berlin
    

    To schedule at the correct time, you have to specify the timezone you reside in. For more detail, have a look at jmettraux/rufus-scheduler
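
    Putting it together, a minimal environment block for docker-compose.yml could look like the sketch below. This is only an illustration: the image name is taken from the Development section, and the host path for the export volume is an arbitrary example.

    services:
      tcce:
        image: ralfherzog/tcce
        environment:
          CRON_PATTERN: "28 3 * * *"
          CONSUL_URL: http://dc1.consul:8300
          CONSUL_ACL_TOKEN: xxxxxxxx-yyyy-zzzz-1111-222222222222
          CONSUL_KV_PATH: traefik/acme/account/object
          EXPORT_DIRECTORY: /export
          EXPORT_OVERWRITE: "true"
          BUNDLE_CERTIFICATES: "true"
          LOG_LEVEL: INFO
          TZ: Europe/Berlin
        volumes:
          - ./certs:/export   # host directory that receives the exported certificates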

    Development

    After checking out the repo, you can run ruby main.rb or build a new Docker image via

    $ docker build -t ralfherzog/tcce .
    

    and run it with

    $ docker run -it ralfherzog/tcce
    

    or with docker-compose

    $ docker-compose build
    $ docker-compose up
    

    Contributing

    Bug reports and pull requests are welcome on GitHub at https://github.com/RalfHerzog/docker-tcce. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.

    License

    The gem is available as open source under the terms of the MIT License.

    Code of Conduct

    Everyone interacting in the docker-tcce project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.

    Visit original content creator repository
    https://github.com/RalfHerzog/docker-tcce

  • homeassistant-lightwave-smart

    Visit original content creator repository
    https://github.com/LightwaveSmartHome/homeassistant-lightwave-smart

  • NYC-Restaurant-Inspection

    dbt docs

    NYC Food Inspection

    Problem Statement

    Explore NYC food inspection results over a period of 10 years to derive insights related to the following:

    • Inspection over the years
    • Inspection results
    • Which restaurants were most inspected
    • Which restaurants were involved in the most violations
    • Other inferences as observed during the course of visualization

    Dataset

    DOHMH New York City Restaurant Inspection Results by NYC OpenData

    Architecture Diagram

    Data Modeling

    List of dimension and fact tables

    Dimension                 Fact
    dim_addresses             fct_food_inspections
    dim_borough               fct_foodinspection_violations
    dim_critical_flag
    dim_cuisine
    dim_food_places
    dim_inspection_actions
    dim_inspection_grades
    dim_inspection_type
    dim_violation_codes

    Data Loading

    A dbt workflow loads data into the dimension and fact tables; see the dbt docs.
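
    As a rough illustration (not the project’s actual model code), a dimension model such as dim_borough.sql typically selects distinct values out of the staging model; the column names below are assumptions, while the staging model name comes from the source tree shown later.

    -- hypothetical sketch of dim_borough.sql; column names are illustrative
    with boroughs as (
        select distinct boro as borough_name
        from {{ ref('load_stg_data') }}
    )

    select
        row_number() over (order by borough_name) as borough_id,
        borough_name
    from boroughs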

    Steps for Recreation

    Keeping ease of reproducibility as the foremost priority, I have avoided dbt Cloud and kept all applications containerized. In case one does not have enough resources locally (a minimum of 8 GB of RAM), they can follow the cloud deployment by provisioning a virtual machine on GCP.

    There is no publicly accessible instance running; follow one of the two approaches below.

    1. Cloud deployment
    2. Local deployment

    Visualization


    Inference

    With no date filter applied, the conclusions can be summarized as follows:

    • A total of 64k inspections were carried out
    • The average inspection score was 20
    • 4k places have never been inspected
    • 208k violations are recorded
    • Manhattan has the most places, 24k
    • 2% of the inspections have a passed result
    • 8k places have been closed down due to severe violations
    • The year 2022 had the most inspections, about 26k
    • January is the month when most inspections happen
    • Dunkin is the most-inspected place and also has the most violations
    • A majority of the places serve American cuisine, followed by Chinese, Coffee and Pizza

    Source Code Reference

    .
    ├── Makefile
    ├── airflow
    │   └── dags
    │       └── load_all_data.py
    ├── dbt_nyc
    │   ├── dbt_project.yml
    │   ├── models
    │   │   ├── core
    │   │   │   ├── dim_addresses.sql
    │   │   │   ├── dim_borough.sql
    │   │   │   ├── dim_critical_flag.sql
    │   │   │   ├── dim_cuisine.sql
    │   │   │   ├── dim_food_places.sql
    │   │   │   ├── dim_inspection_actions.sql
    │   │   │   ├── dim_inspection_grades.sql
    │   │   │   ├── dim_inspection_type.sql
    │   │   │   ├── dim_violation_codes.sql
    │   │   │   ├── fact_food_inspections.sql
    │   │   │   ├── fact_foodinspection_violations.sql
    │   │   │   └── schema.yml
    │   │   └── staging
    │   │       ├── load_stg_data.sql
    │   │       └── schema.yml
    │   ├── packages.yml
    │   └── profiles.yml
    ├── docker-compose.yaml
    ├── great_expectations
    │   ├── checkpoints
    │   │   └── nyc_food_inspection_v05.yml
    │   ├── expectations
    │   │   └── nyc_food_inspection_suite_v2.json
    │   └── great_expectations.yml
    ├── requirements.txt
    ├── terraform
    │   ├── install.sh
    │   ├── main.tf
    │   ├── output.tf
    │   ├── terraform.tfvars
    │   └── variables.tf
    └── user_data
        ├── init.sql
        └── metabase-2023-04-25.sql
    Visit original content creator repository https://github.com/piyush-an/NYC-Restaurant-Inspection
  • explorer-java

    explorer-java

    Tests Maven Central

    Java library for https://github.com/teco-kit/explorer. Can be used to upload datasets as a whole or incrementally. Written in Java. Can be used in Android projects.

    How to install

    The library can be found in Maven Central.

    Gradle

    1. Add mavenCentral to your repositories if it is not already there
    repositories {
      mavenCentral()
    }
    2. Import the library
    dependencies {
      implementation 'edu.teco.explorer:ExplorerJava:${VERSION}'
    }

    Maven

    Include the library as a dependency in pom.xml

    <dependencies>
      <dependency>
        <groupId>edu.teco.explorer</groupId>
        <artifactId>ExplorerJava</artifactId>
        <version>${VERSION}</version>
      </dependency>
    </dependencies>

    How to use

    Upload datasets as a whole

    Recorder recorder = new Recorder("explorerBackendUrl", "deviceApiKey");
    boolean res = recorder.sendDataset(dataset); // dataset: the dataset as a JSONObject

    Upload datasets in increments with custom timestamps

    Recorder recorder = new Recorder("explorerBackendUrl", "deviceApiKey");
    try {
      IncrementalRecorder incRecorder = recorder.getIncrementalDataset("datasetName", false); // false to use custom timestamps
    
      // time should be a unix timestamp
      incRecorder.addDataPoint(1595506316000L, "accX", 123);
    
      // This will throw an UnsupportedOperationException because no timestamp was provided
      incRecorder.addDataPoint("accX", 124);
    
      // Tells the library that all data has been recorded
      // Uploads all remaining datapoints to the server
      incRecorder.onComplete();
    } catch (Exception e) {
        e.printStackTrace();
    }

    Upload datasets in increments with timestamps from the device

    Recorder recorder = new Recorder("explorerBackendUrl", "deviceApiKey");
    try {
      IncrementalRecorder incRecorder = recorder.getIncrementalDataset("datasetName", true); // true to use deviceTime
    
    
      incRecorder.addDataPoint("accX", 123);
    
      // This will throw an UnsupportedOperationException because a timestamp was provided
      incRecorder.addDataPoint(1595506316000L, "accX", 123);
    
      // Wait until all values have been sent
      incRecorder.onComplete();
    } catch (Exception e) {
        e.printStackTrace();
    }
    Visit original content creator repository https://github.com/teco-kit/explorer-java
  • stats-base-mskmin

    About stdlib…

    We believe in a future in which the web is a preferred environment for numerical computation. To help realize this future, we’ve built stdlib. stdlib is a standard library, with an emphasis on numerical and scientific computation, written in JavaScript (and C) for execution in browsers and in Node.js.

    The library is fully decomposable, being architected in such a way that you can swap out and mix and match APIs and functionality to cater to your exact preferences and use cases.

    When you use stdlib, you can be absolutely certain that you are using the most thorough, rigorous, well-written, studied, documented, tested, measured, and high-quality code out there.

    To join us in bringing numerical computing to the web, get started by checking us out on GitHub, and please consider financially supporting stdlib. We greatly appreciate your continued support!

    mskmin

    NPM version Build Status Coverage Status

    Calculate the minimum value of a strided array according to a mask.

    Installation

    npm install @stdlib/stats-base-mskmin

    Alternatively,

    • To load the package in a website via a script tag without installation and bundlers, use the ES Module available on the esm branch (see README).
    • If you are using Deno, visit the deno branch (see README for usage instructions).
    • For use in Observable, or in browser/node environments, use the Universal Module Definition (UMD) build available on the umd branch (see README).

    The branches.md file summarizes the available branches and displays a diagram illustrating their relationships.

    To view installation and usage instructions specific to each branch build, be sure to explicitly navigate to the respective README files on each branch, as linked to above.

    Usage

    var mskmin = require( '@stdlib/stats-base-mskmin' );

    mskmin( N, x, strideX, mask, strideMask )

    Computes the minimum value of a strided array x according to a mask.

    var x = [ 1.0, -2.0, -4.0, 2.0 ];
    var mask = [ 0, 0, 1, 0 ];
    
    var v = mskmin( x.length, x, 1, mask, 1 );
    // returns -2.0

    The function has the following parameters:

    • N: number of indexed elements.
    • x: input Array or typed array.
    • strideX: stride length for x.
    • mask: mask Array or typed array. If a mask array element is 0, the corresponding element in x is considered valid and included in computation. If a mask array element is 1, the corresponding element in x is considered invalid/missing and excluded from computation.
    • strideMask: stride length for mask.

    The N and stride parameters determine which elements in the strided arrays are accessed at runtime. For example, to compute the minimum value of every other element in x,

    var x = [ 1.0, 2.0, -7.0, -2.0, 4.0, 3.0, -5.0, -6.0 ];
    var mask = [ 0, 0, 0, 0, 0, 0, 1, 1 ];
    
    var v = mskmin( 4, x, 2, mask, 2 );
    // returns -7.0

    Note that indexing is relative to the first index. To introduce offsets, use typed array views.

    var Float64Array = require( '@stdlib/array-float64' );
    var Uint8Array = require( '@stdlib/array-uint8' );
    
    var x0 = new Float64Array( [ 2.0, 1.0, -2.0, -2.0, 3.0, 4.0, 5.0, 6.0 ] );
    var x1 = new Float64Array( x0.buffer, x0.BYTES_PER_ELEMENT*1 ); // start at 2nd element
    
    var mask0 = new Uint8Array( [ 0, 0, 0, 0, 0, 0, 1, 1 ] );
    var mask1 = new Uint8Array( mask0.buffer, mask0.BYTES_PER_ELEMENT*1 ); // start at 2nd element
    
    var v = mskmin( 4, x1, 2, mask1, 2 );
    // returns -2.0

    mskmin.ndarray( N, x, strideX, offsetX, mask, strideMask, offsetMask )

    Computes the minimum value of a strided array according to a mask and using alternative indexing semantics.

    var x = [ 1.0, -2.0, -4.0, 2.0 ];
    var mask = [ 0, 0, 1, 0 ];
    
    var v = mskmin.ndarray( x.length, x, 1, 0, mask, 1, 0 );
    // returns -2.0

    The function has the following additional parameters:

    • offsetX: starting index for x.
    • offsetMask: starting index for mask.

    While typed array views mandate a view offset based on the underlying buffer, the offset parameters support indexing semantics based on starting indices. For example, to calculate the minimum value for every other value in x starting from the second value

    var x = [ 2.0, 1.0, -2.0, -2.0, 3.0, 4.0, -5.0, -6.0 ];
    var mask = [ 0, 0, 0, 0, 0, 0, 1, 1 ];
    
    var v = mskmin.ndarray( 4, x, 2, 1, mask, 2, 1 );
    // returns -2.0

    Notes

    • If N <= 0, both functions return NaN.
    • Depending on the environment, the typed versions (dmskmin, smskmin, etc.) are likely to be significantly more performant.
    • Both functions support array-like objects having getter and setter accessors for array element access (e.g., @stdlib/array-base/accessor).

    Examples

    var uniform = require( '@stdlib/random-array-uniform' );
    var bernoulli = require( '@stdlib/random-array-bernoulli' );
    var mskmin = require( '@stdlib/stats-base-mskmin' );
    
    var x = uniform( 10, -50.0, 50.0, {
        'dtype': 'float64'
    });
    console.log( x );
    
    var mask = bernoulli( x.length, 0.2, {
        'dtype': 'uint8'
    });
    console.log( mask );
    
    var v = mskmin( x.length, x, 1, mask, 1 );
    console.log( v );

    See Also


    Notice

    This package is part of stdlib, a standard library for JavaScript and Node.js, with an emphasis on numerical and scientific computing. The library provides a collection of robust, high performance libraries for mathematics, statistics, streams, utilities, and more.

    For more information on the project, filing bug reports and feature requests, and guidance on how to develop stdlib, see the main project repository.

    Community

    Chat


    License

    See LICENSE.

    Copyright

    Copyright Β© 2016-2025. The Stdlib Authors.

    Visit original content creator repository https://github.com/stdlib-js/stats-base-mskmin
  • wildfire-heat-map-generation

    Wildfire Heat Map Generation with Twitter and BERT

    This repository contains the code and resources for generating wildfire heat maps of Portugal using Twitter data and a fine-tuned BERT language model. The project was initially developed for the RECPAD 2022 conference, where it was chosen as one of the top 4 papers. It also serves as the repository for the extended version of the paper, which will be published soon. The system can easily be extended to work with other countries or languages.

    Description

    The goal of this project is to extract pertinent information from social media posts during fire events and create a heat map indicating the most probable fire locations. The pipeline consists of the following steps:

    1. Data Collection: Obtain fire-related tweets from Twitter using the SNScrape API, filtering for Portuguese language and keywords like “fogo” and “incΓͺndio” (“fire” and “wildfire”).

    2. Classification: Use a finetuned BERT instance to classify and filter out tweets that are not fire reports.

    3. Geoparsing: Extract fire locations from tweets through Named Entity Recognition (NER), concatenating recognized location names to form a preliminary geocode, and retrieving the corresponding region geometry using the Nominatim API.

    4. Intersection Detection: Identify intersections between extracted fire-report regions and a predefined area of interest (e.g., Portugal), calculate intersection counts, and generate a heat map to visualize regions with a higher volume of fire occurrences (a brief sketch of this step follows below).

    The resulting heat maps can be useful in allocating firefighting resources effectively. The system is easily adaptable to work with other countries or languages as long as compatible BERT and NER models are available.
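
    As a rough illustration of step 4 (not the project’s actual code), the sketch below counts how many geoparsed report regions intersect each district of the area of interest. It assumes shapely is installed and uses made-up rectangular geometries; in the real pipeline the region geometries come from the Nominatim API.

    from shapely.geometry import box

    # Placeholder district polygons for the area of interest (e.g., Portugal).
    districts = {
        "district_a": box(0.0, 0.0, 1.0, 1.0),
        "district_b": box(1.0, 0.0, 2.0, 1.0),
    }

    # Placeholder regions extracted from geoparsed fire-report tweets.
    report_regions = [box(0.5, 0.5, 1.5, 1.5), box(0.2, 0.2, 0.8, 0.8)]

    # Count how many report regions intersect each district; these counts
    # determine the colour intensity of the heat map.
    counts = {
        name: sum(geom.intersects(region) for region in report_regions)
        for name, geom in districts.items()
    }
    print(counts)  # {'district_a': 2, 'district_b': 1}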

    Usage

    To use this project, follow the instructions below:

    1. Install the required libraries by running the following command in the root directory of the project.

      pip install -r requirements.txt
    2. Generate a heatmap for a specific date using the following command:

      python src/heatmap.py <date>

      Replace <date> with the desired date in the format yyyy-mm-dd.

    Examples

    Here are a few examples of the wildfire heat maps generated using our system. These maps correspond to the dates June 18, 2017; July 3, 2019; and August 7, 2022. The first image specifically highlights the notable fires in PedrΓ³gΓ£o Grande, Leiria, which our method was able to identify and depict on the map.

    (Example heat maps for the three dates above.)

    License

    This project is licensed under the MIT License.

    Citation

    If you use this project or find it helpful for your research, please consider citing the following paper:

    JoΓ£o Cabral Pinto, Hugo GonΓ§alo Oliveira, Catarina Silva, Alberto Cardoso, “Using Twitter Data and Natural Language Processing to Generate Wildfire Heat Maps”, 28th Portuguese Conference on Pattern Recognition (RECPAD 2022), 2022.

    Visit original content creator repository https://github.com/cabralpinto/wildfire-heat-map-generation
  • docker-skifree

    SkiFree in Docker

    What is it?

    The classic 90s game SkiFree running in WINE via X from your host Linux system.

    The Yeti that eats you.

    How to Run

    To run the Dockerhub image, you will need to pass your $DISPLAY env var, match your user ID and mount your X socket, like so:

    docker run -it --rm -e DISPLAY=$DISPLAY --user `id -u` -v="/tmp/.X11-unix:/tmp/.X11-unix" alanf/skifree-wine

    Which version of the game is this?

    It runs the most officialest 32bit build from the original website.

    But… why?!

    I was so preoccupied with whether I could, that I didn’t stop to think if I should…

    Kidding aside, unlike the original from Windows 3.1 (that you can still run in DOSBOX online), this one scales to the biggest your screen can fit.

    Here it is running in my laptop’s high-DPI display at glorious 1792×1696 resolution:

    start screen

    and in action:

    skiing

    Can I escape the Yeti?!

    Yes, it’s possible.


    Escape the Yeti by traveling another 2000 m from the point at which the monster gives chase, creating a loop and starting over from the beginning.

    One way to evade it is to go directly left or right in fast mode with the “F” key. He is right behind you, but cannot catch you unless you hit an obstacle.

    I’m on Windows, how do I run this?

    You’re on Windows?? Dude, you don’t need WINE.. You don’t even need Docker!

    Just run the actual executable.

    But I love WSL (Windows Subsystem for Linux), can I run this there anyway? You know, for science!

    Okay, sure. Just remember WSL1 and WSL2 don’t have GPU acceleration so don’t expect great framerates.

    1. You can install an X server, such as “GWSL” (free from the Windows Store.)

    2. Once it’s running, add this to your ~/.bashrc to expose your DISPLAY:

    # WSL [Windows Subsystem for Linux] customizations:
    if [[ $(grep microsoft /proc/version) ]]; then
        # If we are here, we are under WSL:
        if [ $(grep -oE 'gcc version ([0-9]+)' /proc/version | awk '{print $3}') -gt 5 ]; then
            # WSL2
            [ -z $DISPLAY ] && export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2}'):0.0
        else
            # WSL1
            [ -z $DISPLAY ] && export DISPLAY=127.0.0.1:0.0
        fi
    fi
    3. Log out and back in from a fresh terminal.
    4. X forwarding should work now. Go on and try xeyes or the full skifree command from the top of this README.

    If you succeeded, good job! You now made a 32bit Windows exe run in a Linux container from inside a Linux Windows subsystem. (You crazy maniac, you!)

    But really, why did you make this?

    I was reading about X forwarding in Docker containers and wanted to put it to practice.

    This exercise taught me that --user in docker run with your own user ID lets you tap into your active X session easily without file ownership issues or X security errors.


    Enjoy! (and don’t let the Yeti get ya!!)

    Visit original content creator repository https://github.com/darkvertex/docker-skifree
  • homelab

    My Homelab

    license Lint YAML

    This project was created to automate my personal homelab, following GitOps principles.

    This is not a framework. However, you can customize and extend it in any way you want.

    ⚠️ Since this a personal homelab, all encrypted credentials only applies to my environment. Generate your own secrets.

    💡 What is a homelab?

    “Simply put, a home lab consists of one or more servers (or normal PCs acting as servers), that you have in your home and you use them to experiment and try out stuff.” –techie-show

    Requirements

    Software

    • kubernetes-cli
    • helm
    • ansible
    • kompose
    • kustomize
    • age
    • sops
    • k9s – for cluster management

    Using a MacBook, I installed these with Homebrew.

    Hardware

    • A laptop or desktop, for bootstrapping the cluster.
    • Mini PCs, like an Intel NUC, for the actual cluster.
    • A Linux OS installed on each mini PC. I use Ubuntu Server.

    Stack

    Core

    • K3s: Lightweight Kubernetes (v1.29.0+k3s1)
    • Flannel: Layer 3 network fabric designed for Kubernetes
    • MetalLB: Network load-balancer implementation for Kubernetes
    • Ingress Nginx: Ingress controller for Kubernetes
    • Ansible: Automate bare metal provisioning and configuration
    • Argo CD: Declarative Continuous Deployment for Kubernetes
    • Helm: The Kubernetes Package Manager
    • cert-manager: Automatically provision and manage TLS certificates in Kubernetes
    • NFS CSI driver: Allows Kubernetes to access NFS servers
    • Longhorn: Block storage system for Kubernetes
    • Cilium: eBPF-based networking, observability, and security solution (optional)
    • Hubble: Networking and security observability platform (optional)

    Getting Started

    1. Make sure you have ssh access to your servers.
    2. Change ansible_username in metal/group_vars/all.yml to your username that has server access.
    3. Using a command line, run:
    > make
    
    4. It will ask for your user password and a Cloudflare API Token. The token is needed to perform a DNS challenge with Let’s Encrypt (TLS certificate generation).

    ❄️ You’re done! Yes, that’s the only command you’ll need. πŸ˜„

    Visit original content creator repository https://github.com/mjrealm/homelab
  • nanorv

    NanoRV

    A lightweight C11 RISC-V (RV32/64[I|M|F]) userspace emulator designed for embedded scripting.

    Originally written as the backend for the Nano game engine’s scripting system.

    No external dependencies, and no implicit dependency on any runtime/LibC (unless specified through build options).

    Bases/extensions that are currently supported.

    • RV32I
    • RV64I
    • RV32M/RV64M
    • Zicsr
    • Counter CSRs
    • RV32F/RV64F

    Bases/extensions that are a work in progress.

    • RV32D/RV64D
    • RV32A/AMO

    Building

    NanoRV is intended to be directly embedded in an existing project.

    Build configuration is specified through a set of RV_ prefixed preprocessor macros. For a full listing of supported build options, view nanorv_config.h.

    Use the RV_OPT_INCLUDE_CONFIG preprocessor definition to have NanoRV include a config file (nanorv_config.h) containing your build configuration,
    or specify all your build configuration options project-wide through the compiler’s preprocessor definition options.

    Example

    #include "nanorv.h"
    #include <stdio.h>
    
    VOID
    RvTest(
      VOID
      )
    {
      RV_SIZE_T    i;
      RV_PROCESSOR Vp;
      RV_UINT32    VpMemory[ 1024 ] =
      {
                    // 0000000000000000 <_start>:
        0xb7100000, //    0:  lui ra,0x1
        0x13010010, //    4:  li  sp,256
        0x63441100, //    8:  blt sp,ra,10 
        0x6f00c000, //    c:  j   18 
        
                    // 0000000000000010 :
        0x93012000, //   10:  li gp,2
        0x6f008000, //   14:  j  1c 
        
                    // 0000000000000018:
        0x93011000, //   18:  li gp,1
        
                    // 000000000000001c :
        0x33c23000, //   1c:  xor tp,ra,gp
        0x73001000, //   20:  ebreak
      };
      
      //
      // Set up initial processor context.
      //
      Vp = ( RV_PROCESSOR ) {
        /* Flat span of host memory to pass to the guest. */
        .MmuVaSpanHostBase  = VpMemory,
        .MmuVaSpanSize      = sizeof( VpMemory ),
        /* Begin the flat span of memory at guest effective address 0. */
        .MmuVaSpanGuestBase = 0, 
        /* Begin executing at guest effective address 0. */
        .Pc                 = 0
      };
      
      //
      // Execute code until an exception is triggered, or an EBREAK instruction is hit.
      //
      while( Vp.EBreakPending == 0 ) {
        //
        // Execute a tick; Fetches, decodes, and executes an instruction.
        //
        RvpTickExecute( &Vp );
        
        //
        // Print exception information.
        //
        if( Vp.ExceptionPending ) {
          printf( "RV exception %i @ PC=0x%llx\n", ( RV_INT )Vp.ExceptionIndex, ( RV_UINT64 )Vp.Pc );
          break;
        }
      }
      
      //
      // Dump all general-purpose registers.
      //
      for( i = 0; i < RV_COUNTOF( Vp.Xr ); i++ ) {
        printf( "x%i = 0x%llx\n", ( RV_INT )i, ( RV_UINT64 )Vp.Xr[ i ] );
      }
      
      //
      // Dump program-counter register.
      //
      printf( "pc = 0x%llx\n", ( RV_UINT64 )Vp.Pc );
    }

    Output

    x0 = 0x0
    x1 = 0x1000
    x2 = 0x100
    x3 = 0x2
    x4 = 0x1002
    ...
    pc = 0x24
    

    Visit original content creator repository
    https://github.com/dro/nanorv

  • podman_play

    Podman Play – Deploy Any App

    License

    Ansible Role to deploy apps in root-less containers from a Kubernetes Pod YAML definition. The application pod runs as a systemd service using Podman Quadlet, in your own user namespace.

    🔑 Key Features

    • 🚀 Deploy Any Application: Easily deploy any application using a Kubernetes YAML pod definition.
    • 🛑 Root-less deployment: Ensure secure containerization by running custom applications in a root-less mode within a user namespace. Management of the container is handled through a Quadlet systemd unit.
    • 🔄 Idempotent deployment: The role embraces idempotent deployment, ensuring that the state of your deployment always matches your desired inventory.
    • 🧩 Flexible Configuration: Easily customize deployment configuration to match your specific requirements.

    Explore the simplicity of deploying popular applications such as Dashy, Nextcloud, Jellyfin, and Hashi Vault with this role in the blog post 📒.

    Table of Content

    Requirements

    • Ansible 2.10+
    • Tested on RHEL/RockyLinux 9 and Fedora but should work with compatible distributions.
    • Ensure that the podman and loginctl binaries are present on the target system.
    • If the following Ansible collections are not already available in your environment, please install them: ansible-galaxy collection install ansible.posix and ansible-galaxy collection install containers.podman.

    Role Variables

    Default Variables – defaults/main.yml

    podman_play_root_dir: "/home/{{ podman_play_user | default(ansible_user_id) }}/{{ podman_play_pod_name }}"

    Default application root directory where configuration files, Kubernetes pod YAML definitions, and other directories are stored. If not specified, it uses the home directory of the user who executed the playbook.

    podman_play_template_config_dir: "{{ podman_play_root_dir }}/template_configs"

    Default path where your custom application configs are templated from the podman_play_custom_conf variable.

    podman_play_pod_state: "quadlet"

    Ensure that the pod is in the quadlet state. This ensures that the Quadlet file is generated in the user namespace.

    podman_play_pod_recreate: true

    This ensures that any change in the configuration file or Kubernetes pod YAML definition triggers pod recreation to apply the latest changes, such as an image tag change.

    Required Variables

    The following variables are not set by default, but they are required for deployment. You will need to define these variables. Below are example values.

    podman_play_pod_name: "dashy"

    Specify your application pod name.

    podman_play_pod_quadlet_options:
      - "[Install]"
      - "WantedBy=multi-user.target default.target"

    These default Quadlet options ensure that the service starts on boot.

    podman_play_pod_yaml_definition: |
      ---
      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          app: "{{ podman_play_pod_name }}"
        name: "{{ podman_play_pod_name }}"
      spec:
        containers:
          - name: "{{ podman_play_pod_name }}"
            image: docker.io/lissy93/dashy:latest
            ports:
              - containerPort: 80
                hostPort: 9500
            stdin: true
            tty: true
            volumeMounts:
              - mountPath: /app/public/conf.yml:Z
                name: dashy_config
        volumes:
          - hostPath:
              path: "{{ podman_play_template_config_dir }}/conf.yml"
              type: File
            name: dashy_config

    Define the Kubernetes pod YAML definition to be used by the podman_play module for deployment. For more details, refer to the Kubernetes pod documentation.

    Optional Variables

    These optional variables are not required and are not set by default. You can use these variables to extend your deployment. Below are example values.

    podman_play_user: "dashy"

    OS user that runs your pod app. If not specified, it uses the user who executed the playbook.

    podman_play_group: "dashy"

    OS group for the app user.

    podman_play_custom_conf:
      - filename: "conf.yml"
        raw_content: |
          # Example Raw Config for conf.yml
      - filename: "another_config.conf"
        raw_content: |
          # Example Raw Config for another_config.conf

    This variable allows you to deploy any number of configuration files for your deployment. Content is always templated into the podman_play_template_config_dir directory.

    podman_play_dirs:
      - "{{ podman_play_root_dir }}/var_www_html"
      - "{{ podman_play_root_dir }}/var_lib_mysql"

    Create additional directories for your application. You can then mount these directories into your pod by defining the paths in the volumes section of podman_play_pod_yaml_definition.

    podman_play_firewalld_expose_ports:
      - "9500/tcp"

    List of ports in port/tcp or port/udp format that should be exposed via firewalld.

    podman_play_auto_update: false

    If you’re using image tags without specific versions, such as latest or stable, you can enable the auto-update feature. However, to activate this feature, you need to annotate the pod YAML definition with io.containers.autoupdate: registry. Without this annotation, the auto-update won’t take effect. For more details on how it works, check out the documentation. When set to false, the auto-update feature is disabled. This feature is disabled by default.
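
    For illustration, the metadata block of the pod definition shown earlier could carry that annotation as sketched below (only the annotation key comes from the description above; this applies when podman_play_auto_update is set to true):

      metadata:
        annotations:
          io.containers.autoupdate: registry
        labels:
          app: "{{ podman_play_pod_name }}"
        name: "{{ podman_play_pod_name }}"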

    podman_play_pod_authfile: ""
    podman_play_pod_build: ""
    podman_play_pod_cert_dir: ""
    podman_play_pod_configmap: ""
    podman_play_pod_context_dir: ""
    podman_play_pod_debug: ""
    podman_play_pod_executable: ""
    podman_play_pod_log_driver: ""
    podman_play_pod_log_level: ""
    podman_play_pod_network: ""
    podman_play_pod_password: ""
    podman_play_pod_username: ""
    podman_play_pod_quiet: ""
    podman_play_pod_seccomp_profile_root: ""
    podman_play_pod_tls_verify: ""
    podman_play_pod_userns: ""
    podman_play_pod_quadlet_dir: ""

    Additional variables related to the podman_play module. Check the module documentation for possible values. With these variables, you can modify pod deployment specifications.

    Dependencies

    No Dependencies.

    Playbook

    • Example playbook to deploy your custom container app
    - name: Manage your pod app
      hosts: yourhost
      gather_facts: true
      roles:
        - role: voidquark.podman_play
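
    The required variables can then be supplied at the play level, for example as in the sketch below (it reuses the dashy values shown above; the pod YAML definition would be set as in the Required Variables section):

    - name: Manage your pod app
      hosts: yourhost
      gather_facts: true
      vars:
        podman_play_pod_name: "dashy"
        podman_play_pod_quadlet_options:
          - "[Install]"
          - "WantedBy=multi-user.target default.target"
        podman_play_firewalld_expose_ports:
          - "9500/tcp"
        # podman_play_pod_yaml_definition: <Kubernetes pod YAML, see Required Variables>
      roles:
        - role: voidquark.podman_play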

    License

    MIT

    Contribution

    Feel free to customize and enhance the role according to your needs. Your feedback and contributions are greatly appreciated. Please open an issue or submit a pull request with any improvements.

    Author Information

    Created by VoidQuark

    Visit original content creator repository https://github.com/voidquark/podman_play