Dockerized image with exporter for the tcce-library.
What is it good for?
With tcce we can query and generate easily
accessible entities from traefik’s Consul ACME storage.
The idea is to export your valid Let’s Encrypt certificates into a specific
directory to make it usable for other applications like
Mailserver
FTP-Server
LDAP
and all other services not covered by traefik
Installation
You will need a docker runtime to run containers of this image.
Usage
$ docker-compose up
In docker-compose.yml you can control the export via environment variables.
CRON_PATTERN: 28 3 * * *
Run the certificate export each day at 03:28 AM. Cron patterns are accepted. For more detail, have a look at jmettraux/rufus-scheduler
FIRST_IN: 10s
In development you likely do not want to wait a day for an export. Define a time period to wait before a single execution. For more detail, have a look at jmettraux/rufus-scheduler
We need a Consul ACL Token to query the ACME object (see CONSUL_KV_PATH)
CONSUL_KV_PATH: traefik/acme/account/object
The Consul path to the acme account object written by traefik
CA_FILE: /usr/src/app/ca.crt
You can provide a CA-Certificate to communicate with your Consul server (see CONSUL_URL)
EXPORT_DIRECTORY: /export
Define a directory to export the certificates to. The directory should be mounted inside the container so that you can access your certificates externally
EXPORT_OVERWRITE: true
If traefik renews a certificate, you may want to overwrite the old one. true is the default
BUNDLE_CERTIFICATES: true
Usually you need certificate files with the intermediate certificate included. If not, set this to false
LOG_LEVEL: INFO
Control the log level to the container-console
TZ: Europe/Berlin
To schedule at the correct time, you have to specify the timezone you reside in. For more detail, have a look at jmettraux/rufus-scheduler
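Putting these variables together, a docker-compose.yml could look roughly like this (all values are illustrative; the CONSUL_URL and CONSUL_ACL_TOKEN variable names are assumptions to be checked against the image's documentation):

```yaml
version: "3"
services:
  tcce:
    image: ralfherzog/tcce
    environment:
      CRON_PATTERN: "28 3 * * *"
      CONSUL_URL: "https://consul.example.com:8501"  # assumed variable name
      CONSUL_ACL_TOKEN: "replace-with-your-token"    # assumed variable name
      CONSUL_KV_PATH: "traefik/acme/account/object"
      CA_FILE: "/usr/src/app/ca.crt"
      EXPORT_DIRECTORY: "/export"
      EXPORT_OVERWRITE: "true"
      BUNDLE_CERTIFICATES: "true"
      LOG_LEVEL: "INFO"
      TZ: "Europe/Berlin"
    volumes:
      - ./export:/export
      - ./ca.crt:/usr/src/app/ca.crt:ro
```

Mounting the export directory on the host is what makes the exported certificates visible to the other services mentioned above.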
Development
After checking out the repo, you can run ruby main.rb or build a new docker image via
$ docker build -t ralfherzog/tcce .
and run it with
$ docker run -it ralfherzog/tcce
or with docker-compose
$ docker-compose build
$ docker-compose up
Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/RalfHerzog/docker-tcce. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the Contributor Covenant code of conduct.
License
The gem is available as open source under the terms of the MIT License.
Code of Conduct
Everyone interacting in the docker-tcce project’s codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.
dbt workflow to load data into dim and fact tables, dbt docs
Steps for Recreation
Keeping ease of reproducibility as the foremost priority, I have avoided dbt Cloud and kept the entire application containerized. In case you do not have enough resources locally (a minimum of 8 GB of RAM), you can follow the cloud deployment by provisioning a virtual machine on GCP.
There is no publicly accessible instance running; follow one of the two approaches.
Java library for https://github.com/teco-kit/explorer.
Can be used to upload datasets as whole or incrementally.
Written in Java. Can be used in Android projects.
How to install
The library can be found in Maven Central.
Gradle
Add mavenCentral to your repositories if it is not already there
Upload datasets in increments with custom timestamps
Recorder recorder = new Recorder("explorerBackendUrl", "deviceApiKey");
try {
    // false to use custom timestamps
    IncrementalRecorder incRecorder = recorder.getIncrementalDataset("datasetName", false);

    // time should be a unix timestamp
    incRecorder.addDataPoint(1595506316000L, "accX", 123);

    // This will throw an UnsupportedOperationException because no timestamp was provided
    incRecorder.addDataPoint("accX", 124);

    // Tells the library that all data has been recorded
    // and uploads all remaining data points to the server
    incRecorder.onComplete();
} catch (Exception e) {
    e.printStackTrace();
}
Upload datasets in increments with timestamps from the device
Recorder recorder = new Recorder("explorerBackendUrl", "deviceApiKey");
try {
    // true to use deviceTime
    IncrementalRecorder incRecorder = recorder.getIncrementalDataset("datasetName", true);
    incRecorder.addDataPoint("accX", 123);

    // This will throw an UnsupportedOperationException because a timestamp was provided
    incRecorder.addDataPoint(1595506316000L, "accX", 123);

    // Wait until all values have been sent
    incRecorder.onComplete();
} catch (Exception e) {
    e.printStackTrace();
}
We believe in a future in which the web is a preferred environment for numerical computation. To help realize this future, we’ve built stdlib. stdlib is a standard library, with an emphasis on numerical and scientific computation, written in JavaScript (and C) for execution in browsers and in Node.js.
The library is fully decomposable, being architected in such a way that you can swap out and mix and match APIs and functionality to cater to your exact preferences and use cases.
When you use stdlib, you can be absolutely certain that you are using the most thorough, rigorous, well-written, studied, documented, tested, measured, and high-quality code out there.
To join us in bringing numerical computing to the web, get started by checking us out on GitHub, and please consider financially supporting stdlib. We greatly appreciate your continued support!
mskmin
Calculate the minimum value of a strided array according to a mask.
Installation
npm install @stdlib/stats-base-mskmin
Alternatively,
To load the package in a website via a script tag without installation and bundlers, use the ES Module available on the esm branch (see README).
If you are using Deno, visit the deno branch (see README for usage instructions).
The branches.md file summarizes the available branches and displays a diagram illustrating their relationships.
To view installation and usage instructions specific to each branch build, be sure to explicitly navigate to the respective README files on each branch, as linked to above.
Usage
var mskmin = require( '@stdlib/stats-base-mskmin' );
mskmin( N, x, strideX, mask, strideMask )
Computes the minimum value of a strided array x according to a mask.
mask: mask Array or typed array. If a mask array element is 0, the corresponding element in x is considered valid and included in computation. If a mask array element is 1, the corresponding element in x is considered invalid/missing and excluded from computation.
strideMask: stride length for mask.
The N and stride parameters determine which elements in the strided arrays are accessed at runtime. For example, N and the strides can be chosen to compute the minimum value of every other element in x.
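The every-other-element case can be sketched as follows; to keep the snippet self-contained, the mskmin body below is an inline reference implementation of the semantics described above (NaN handling simplified), not the package's source:

```javascript
// Inline reference implementation of the mskmin semantics (illustrative only;
// in practice, require( '@stdlib/stats-base-mskmin' ) instead):
function mskmin( N, x, strideX, mask, strideMask ) {
    var min = NaN;
    var first = true;
    var ix = 0;
    var im = 0;
    var i;
    for ( i = 0; i < N; i++ ) {
        // A mask element of 0 marks the corresponding x element as valid:
        if ( mask[ im ] === 0 && ( first || x[ ix ] < min ) ) {
            min = x[ ix ];
            first = false;
        }
        ix += strideX;
        im += strideMask;
    }
    return min;
}

var x = [ 1.0, 2.0, -7.0, -2.0, 4.0, 3.0, -5.0, -6.0 ];
var mask = [ 0, 0, 0, 0, 0, 0, 1, 1 ];

// Minimum of every other element in `x` (indices 0, 2, 4, 6);
// x[ 6 ] is excluded because mask[ 6 ] is 1:
var v = mskmin( 4, x, 2, mask, 2 );
console.log( v );
// => -7
```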
The function has the following additional parameters:
offsetX: starting index for x.
offsetMask: starting index for mask.
While typed array views mandate a view offset based on the underlying buffer, the offset parameters support indexing semantics based on starting indices. For example, the offsets can be used to calculate the minimum value for every other value in x starting from the second value.
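The offset case can be sketched the same way; this inline mskminNdarray is a reference implementation written for this snippet to illustrate the offset semantics, not the package's source:

```javascript
// Inline reference implementation of the offset variant (illustrative only):
function mskminNdarray( N, x, strideX, offsetX, mask, strideMask, offsetMask ) {
    var min = NaN;
    var first = true;
    var ix = offsetX;
    var im = offsetMask;
    var i;
    for ( i = 0; i < N; i++ ) {
        if ( mask[ im ] === 0 && ( first || x[ ix ] < min ) ) {
            min = x[ ix ];
            first = false;
        }
        ix += strideX;
        im += strideMask;
    }
    return min;
}

var x = [ 2.0, 1.0, 2.0, -2.0, -2.0, 2.0, 3.0, 4.0 ];
var mask = [ 0, 0, 0, 0, 0, 0, 1, 1 ];

// Minimum of every other value in `x`, starting from the second value
// (indices 1, 3, 5, 7); x[ 7 ] is excluded because mask[ 7 ] is 1:
var v = mskminNdarray( 4, x, 2, 1, mask, 2, 1 );
console.log( v );
// => -2
```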
@stdlib/stats-strided/smskmin: calculate the minimum value of a single-precision floating-point strided array according to a mask.
Notice
This package is part of stdlib, a standard library for JavaScript and Node.js, with an emphasis on numerical and scientific computing. The library provides a collection of robust, high performance libraries for mathematics, statistics, streams, utilities, and more.
For more information on the project, filing bug reports and feature requests, and guidance on how to develop stdlib, see the main project repository.
Wildfire Heat Map Generation with Twitter and BERT
This repository contains the code and resources for generating wildfire heat maps of Portugal using Twitter data and a fine-tuned BERT language model. The project was initially developed for the RECPAD 2022 conference, where it was chosen as one of the top 4 papers. It also serves as the repository for the extended version of the paper, which will be published soon. The system can easily be extended to work with other countries or languages.
Description
The goal of this project is to extract pertinent information from social media posts during fire events and create a heat map indicating the most probable fire locations. The pipeline consists of the following steps:
Data Collection: Obtain fire-related tweets from Twitter using the SNScrape API, filtering for Portuguese language and keywords like “fogo” and “incêndio” (“fire” and “wildfire”).
Classification: Use a fine-tuned BERT instance to classify and filter out tweets that are not fire reports.
Geoparsing: Extract fire locations from tweets through Named Entity Recognition (NER), concatenating recognized location names to form a preliminary geocode, and retrieving the corresponding region geometry using the Nominatim API.
Intersection Detection: Identify intersections between extracted fire report regions and a predefined area of interest (e.g., Portugal), calculate intersection counts, and generate a heat map to visualize regions with a higher volume of fire occurrences.
The resulting heat maps can be useful in allocating firefighting resources effectively. The system is easily adaptable to work with other countries or languages as long as compatible BERT and NER models are available.
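The intersection-detection step above can be sketched in plain Python; the axis-aligned boxes and the intersection test below are illustrative stand-ins for the region geometries retrieved from Nominatim, not the repository's code:

```python
# Boxes are (minx, miny, maxx, maxy) tuples; purely illustrative geometry.
def intersects(a, b):
    """Return True when two axis-aligned boxes overlap."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

# Stand-in bounding box for the area of interest (e.g., Portugal):
area_of_interest = (0, 0, 10, 10)

# Stand-in regions geocoded from fire-report tweets:
report_regions = [(1, 1, 3, 3), (2, 2, 4, 4), (20, 20, 22, 22)]

# Overlap counts per region drive the heat-map intensity:
counts = sum(1 for r in report_regions if intersects(r, area_of_interest))
print(counts)
# => 2
```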
Usage
To use this project, follow the instructions below:
Install the required libraries by running the following command in the root directory of the project.
pip install -r requirements.txt
Generate a heatmap for a specific date using the following command:
python src/heatmap.py <date>
Replace <date> with the desired date in the format yyyy-mm-dd.
Examples
Here are a few examples of the wildfire heat maps generated using our system. These maps correspond to the dates June 18, 2017; July 3, 2019; and August 7, 2022. The first image specifically highlights the notable fires in Pedrógão Grande, Leiria, which our method was able to identify and depict on the map.
If you use this project or find it helpful for your research, please consider citing the following paper:
João Cabral Pinto, Hugo Gonçalo Oliveira, Catarina Silva, Alberto Cardoso, “Using Twitter Data and Natural Language Processing to Generate Wildfire Heat Maps”, 28th Portuguese Conference on Pattern Recognition (RECPAD 2022), 2022.
Escape the Yeti by traveling another 2000 m from the point at which the monster gives chase, creating a loop and starting over from the beginning.
One way to evade it is to go directly left or right in fast mode with the “F” key. He is right behind you, but cannot catch you unless you hit an obstacle.
I’m on Windows, how do I run this?
You’re on Windows?? Dude, you don’t need WINE. You don’t even need Docker!
But I love WSL (Windows Subsystem for Linux), can I run this there anyway? You know, for science!
Okay, sure. Just remember WSL1 and WSL2 don’t have GPU acceleration so don’t expect great framerates.
You can install an X server, such as “GWSL” (free from the Windows Store.)
Once it’s running, add this to your ~/.bashrc to expose your DISPLAY:
# WSL [Windows Subsystem for Linux] customizations:
if [[ $(grep microsoft /proc/version) ]]; then
    # If we are here, we are under WSL:
    if [ $(grep -oE 'gcc version ([0-9]+)' /proc/version | awk '{print $3}') -gt 5 ]; then
        # WSL2
        [ -z "$DISPLAY" ] && export DISPLAY=$(cat /etc/resolv.conf | grep nameserver | awk '{print $2}'):0.0
    else
        # WSL1
        [ -z "$DISPLAY" ] && export DISPLAY=127.0.0.1:0.0
    fi
fi
Log out and back in from a fresh terminal.
X forwarding should work now. Go on and try xeyes or the full skifree command from the top of this README.
If you succeeded, good job! You now made a 32-bit Windows exe run in a Linux container from inside a Linux Windows subsystem. (You crazy maniac, you!)
But really, why did you make this?
I was reading about X forwarding in Docker containers and wanted to put it to practice.
This exercise taught me that --user in docker run with your own user ID lets you tap into your active X session easily without file ownership issues or X security errors.
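The pattern can be sketched like this (the image name is a placeholder; /tmp/.X11-unix is the standard X socket path):

```shell
# Run an X11 client inside a container as your own UID/GID so it can use
# your active X session without file-ownership or X security errors.
# "some-x11-image" is a placeholder for an image with an X client installed.
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -e DISPLAY="$DISPLAY" \
  -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
  some-x11-image xeyes
```

Matching the host user ID means files the container writes (and the X socket access checks) line up with your session instead of root's.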
This project was created to automate my personal homelab, following GitOps principles.
This is not a framework. However, you can customize and extend it in any way you want.
⚠️ Since this is a personal homelab, all encrypted credentials apply only to my environment. Generate your own secrets.
💡 What is a homelab?
“Simply put, a home lab consists of one or more servers (or normal PCs acting as servers), that you have in your home and you use them to experiment and try out stuff.” –techie-show
A lightweight C11 RISC-V (RV32/64[I|M|F]) userspace emulator designed for embedded scripting.
Originally written as the backend for the Nano game engine’s scripting system.
No external dependencies, and no implicit dependency on any runtime/LibC (unless specified through build options).
Bases/extensions that are currently supported.
RV32I
RV64I
RV32M/RV64M
Zicsr
Counter CSRs
RV32F/RV64F
Bases/extensions that are a work in progress.
RV32D/RV64D
RV32A/AMO
Building
NanoRV is intended to be directly embedded in an existing project.
Build configuration is specified through a set of RV_ prefixed preprocessor macros. For a full listing of supported build options, view nanorv_config.h.
Use the RV_OPT_INCLUDE_CONFIG preprocessor definition to have NanoRV include a config file (nanorv_config.h) containing your build configuration,
or specify all your build configuration options project-wide through the compiler’s preprocessor definition options.
Example
#include"nanorv.h"#include<stdio.h>VOIDRvTest(
VOID
)
{
RV_SIZE_Ti;
RV_PROCESSORVp;
RV_UINT32VpMemory[ 1024 ] =
{
// 0000000000000000 <_start>:0xb7100000, // 0: lui ra,0x10x13010010, // 4: li sp,2560x63441100, // 8: blt sp,ra,10 0x6f00c000, // c: j 18 // 0000000000000010 :0x93012000, // 10: li gp,20x6f008000, // 14: j 1c // 0000000000000018:0x93011000, // 18: li gp,1// 000000000000001c :0x33c23000, // 1c: xor tp,ra,gp0x73001000, // 20: ebreak
};
//// Set up initial processor context.//Vp= ( RV_PROCESSOR ) {
/* Flat span of host memory to pass to the guest. */
.MmuVaSpanHostBase=VpMemory,
.MmuVaSpanSize=sizeof( VpMemory ),
/* Begin the flat span of memory at guest effective address 0. */
.MmuVaSpanGuestBase=0,
/* Begin executing at guest effective address 0. */
.Pc=0
};
//// Execute code until an exception is triggered, or an EBREAK instruction is hit.//while( Vp.EBreakPending==0 ) {
//// Execute a tick; Fetches, decodes, and executes an instruction.//RvpTickExecute( &Vp );
//// Print exception information.//if( Vp.ExceptionPending ) {
printf( "RV exception %i @ PC=0x%llx\n", ( RV_INT )Vp.ExceptionIndex, ( RV_UINT64 )Vp.Pc );
break;
}
}
//// Dump all general-purpose registers.//for( i=0; i<RV_COUNTOF( Vp.Xr ); i++ ) {
printf( "x%i = 0x%llx\n", ( RV_INT )i, ( RV_UINT64 )Vp.Xr[ i ] );
}
//// Dump program-counter register.//printf( "pc = 0x%llx\n", ( RV_UINT64 )Vp.Pc );
}
Ansible Role to deploy apps in root-less containers from a Kubernetes Pod YAML definition. The application pod runs as a systemd service using Podman Quadlet, in your own user namespace.
Key Features
Deploy Any Application: Easily deploy any application using a Kubernetes YAML pod definition.
Root-less deployment: Ensure secure containerization by running custom applications in root-less mode within a user namespace. Management of the container is handled through a Quadlet systemd unit.
Idempotent deployment: The role embraces idempotent deployment, ensuring that the state of your deployment always matches your desired inventory.
Tested on RHEL/RockyLinux 9 and Fedora but should work with compatible distributions.
Ensure that the podman and loginctl binaries are present on the target system.
If the following Ansible collections are not already available in your environment, please install them: ansible-galaxy collection install ansible.posix and ansible-galaxy collection install containers.podman.
Default application root directory where configuration files, Kubernetes pod YAML definitions, and other directories are stored. If not specified, it defaults to the home directory of the user who executed the playbook.
Default path where your custom application configs are templated from the podman_play_custom_conf variable.
podman_play_pod_state: "quadlet"
Ensure that the pod is in the quadlet state. This ensures that the Quadlet file is generated in the user namespace.
podman_play_pod_recreate: true
This ensures that any change in the configuration file or Kubernetes pod YAML definition triggers pod recreation to apply the latest changes, such as an image tag change.
Required Variables
The following variables are not set by default, but they are required for deployment. You will need to define these variables. Below are example values.
Define the Kubernetes pod YAML definition to be used by the podman_play module for deployment. For more details, refer to the Kubernetes pod documentation.
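A minimal value could look like this (the image, names, and ports are illustrative; only the variable name podman_play_pod_yaml_definition comes from this role):

```yaml
podman_play_pod_yaml_definition: |
  apiVersion: v1
  kind: Pod
  metadata:
    name: dashy
  spec:
    containers:
      - name: dashy
        image: docker.io/lissy93/dashy:latest  # illustrative image
        ports:
          - containerPort: 8080
            hostPort: 8080
```

The podman_play module feeds this definition to podman kube play, so anything valid there (volumes, env, annotations) can be used here as well.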
Optional Variables
These optional variables are not required and are not set by default. You can use these variables to extend your deployment. Below are example values.
podman_play_user: "dashy"
OS user that runs your pod app. If not specified, it uses the user who executed the playbook.
podman_play_group: "dashy"
OS group for the app user.
podman_play_custom_conf:
  - filename: "conf.yml"
    raw_content: |
      # Example Raw Config for conf.yml
  - filename: "another_config.conf"
    raw_content: |
      # Example Raw Config for another_config.conf
This variable allows you to deploy any number of configuration files for your deployment. Content is always templated into the podman_play_template_config_dir directory.
Create additional directories for your application. You can then mount these directories into your pod by defining the paths in the volumes section of podman_play_pod_yaml_definition.
podman_play_firewalld_expose_ports:
- "9500/tcp"
List of ports in port/tcp or port/udp format that should be exposed via firewalld.
podman_play_auto_update: false
If you’re using image tags without specific versions, such as latest or stable, you can enable the auto-update feature. However, to activate this feature, you need to annotate the pod YAML definition with io.containers.autoupdate: registry. Without this annotation, the auto-update won’t take effect. For more details on how it works, check out the documentation.
When set to false, the auto-update feature is disabled. This feature is disabled by default.
Additional variables related to the podman_play_module. Check the module documentation for possible values.
With these variables, you can modify pod deployment specifications.
Dependencies
No Dependencies.
Playbook
Example playbook to deploy your custom container app
- name: Manage your pod app
  hosts: yourhost
  gather_facts: true
  roles:
    - role: voidquark.podman_play
License
MIT
Contribution
Feel free to customize and enhance the role according to your needs. Your feedback and contributions are greatly appreciated. Please open an issue or submit a pull request with any improvements.