MooMoo.js is a powerful, open-source API for modding the popular web-based game MooMoo.io. It allows developers to easily create and implement mods without the need for manually intercepting WebSocket messages.
Benefits of using MooMoo.js
Ease of Use: MooMoo.js takes care of the complexities of intercepting WebSocket messages, allowing developers to focus on creating their mods.
Powerful Functionality: The API provides a wide range of features, including packet intercepting, player data manipulation, and more.
Flexibility: MooMoo.js allows developers to create both client-side and server-side mods, providing a high level of customization.
Open-source: The API is open-source, allowing developers to freely use, modify, and distribute the code.
Features
Some of the key features of MooMoo.js include:
Packet Intercepting: The API allows developers to intercept both incoming and outgoing packets, providing the ability to modify or block them as needed.
Player Data Manipulation: The API allows developers to easily access and manipulate player data, such as coordinates, inventory, and more.
Built-in msgpack support: The API includes built-in support for the msgpack data format, making it easy to encode and decode packets.
Event system: The API allows developers to listen to events, such as player death, item pickup, and more.
Installation
MooMoo.js can be easily used in a Tampermonkey script. You can find the most recent version at Greasyfork.
Documentation
The MooMoo.js API is fully documented on the official website. It provides a detailed explanation of all the available features, as well as code examples to help developers get started.
In your wp-content/themes folder you will now have a folder with the name of your theme, set up with the basics to get a theme off the ground quickly. In the root of your newly created theme you’ll have the following grunt tasks you can run:
grunt # runs the default task that builds the assets
grunt server # initiates Browsersync and watches files for changes
Sass
Global variables are located in /assets/sass/abstracts/_foundation-vars.scss
JavaScript
All files in /assets/js/src/ are concatenated into the /assets/js/ directory.
Production
When you’re done and ready to go live you’ll need to minify your js and whatnot. You can do this by using:
grunt build
This will minify all your assets, copy the theme to a dist/ directory, and then compress it into a .zip.
Squire has the option to load child pages into the parent page along with an on page navigation. Simply select the “Multi Page” template and add child pages to the parent.
Project summary:
Applying data modeling with Apache Cassandra and building an ETL pipeline using Python.
Additionally, modeling the data by creating tables in Apache Cassandra to run queries.
Data Modeling with Apache Cassandra
For this project, we’ll be working with one dataset: event_data.
We will process the dataset to create denormalized tables. During modeling we keep in mind the queries we want to run, so that the new tables are ready to serve the needed information. We will create tables in Apache Cassandra, then load the data into the newly created tables. After loading the data, we will run our queries to test them.
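The denormalization step of the ETL can be sketched in Python with only the standard library. This is an illustrative sketch, not the project notebook: the two sample rows and the chosen key columns are hypothetical, though the column names come from the dataset description below.

```python
import csv
import io

# Hypothetical sample of event_data; the real project reads many CSV files.
raw = io.StringIO(
    "artist,song,length,sessionId,itemInSession\n"
    "Muse,Uprising,304.84,338,4\n"
    "Muse,Starlight,243.57,338,5\n"
)

# Query-first modeling: if the query filters on (sessionId, itemInSession),
# the Cassandra table's primary key should be exactly those columns.
rows = []
for rec in csv.DictReader(raw):
    rows.append((int(rec["sessionId"]), int(rec["itemInSession"]),
                 rec["artist"], rec["song"], float(rec["length"])))

# Each tuple maps one-to-one onto an INSERT into a table keyed by
# (sessionId, itemInSession).
```

The point of the sketch is the ordering of the tuple: the partition and clustering columns the query needs come first, and everything else is payload.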
Project Dataset
The dataset is provided by Udacity. It has 11 columns:
artist : Artist name [object]
firstName: First name of user [object]
gender: Gender of user (male or female) [object]
itemInSession: Item number in session [int64]
lastName: Last name of user [object]
length: Length of the song [float64]
level: User level (paid or free) [object]
location: Location of the user [object]
sessionId: The unique ID of the session [int64]
song: Song title [object]
userId: User unique ID [int64]
(The data types come from the pandas .dtypes attribute. Pandas actually stores pointers to strings in DataFrames and Series, which is why object instead of str appears as the datatype. Understanding this is not essential; just know that strings will appear as objects in pandas.)
There are no complicated steps in this project; just run Project_1B_ Project_Template.ipynb.
If there’s an issue with the first cells, it means Udacity has changed their cursor and session configurations. But don’t worry: the code is carefully commented and the logic still works fine.
You can pass several options to react-rainbow-ascii as props:
interface ASCIIProps {
  text?: string        // The text you want to render to ASCII. Default: 'Hello!'
  rainbow?: boolean    // Whether you want the ASCII to be a rainbow. Default: true
  fallback?: string    // Fallback HTML element to use for SEO. Default: 'pre'
  font?: figlet.Fonts  // ASCII Figlet Font to use. Default: 'Slant'
  id?: string          // A unique id prevents multiple instances from conflicting. Default: null
}
App will fail miserably, crash and burn… It didn’t? What do you mean, it didn’t?! It’s supposed to fail. Well in that case, see step #3.i
Video will be downloaded in one of two ways, depending on which one is possible.
Note: Using the favoured approach, the video and audio streams will be downloaded as separate stream files. If you want them combined, specify an FFmpeg bin path in the app.config prior to doing the above.
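Combining the two stream files is a single FFmpeg remux. As a hedged sketch (the file names and FFmpeg path here are hypothetical, not taken from the project), the command the app effectively runs can be built like this:

```python
# Build the FFmpeg command that merges a video and an audio stream file.
# All paths are placeholders; substitute the real bin path from app.config.

def build_merge_cmd(ffmpeg, video, audio, output):
    # -c copy remuxes the existing streams without re-encoding them.
    return [ffmpeg, "-i", video, "-i", audio, "-c", "copy", output]

cmd = build_merge_cmd("ffmpeg", "video.m4v", "audio.m4a", "merged.mp4")
# Pass `cmd` to subprocess.run(cmd, check=True) once the paths are real.
```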
Disclaimer
This software is provided as-is, as a proof-of-concept of the ability to find, parse, download and join video (and audio) segments from Vimeo. Note that this is provided for educational purposes through the analysis of code – rather than for actual use.
Note that this is a quick and dirty approach. As such look at the code for an idea as to how this is achieved rather than for the merit of the code itself. That being said, feel free to contribute and fix as you see fit.
The code assumes that the best stream quality should be downloaded. If you do not prefer this approach, it’s time to get your hands dirty: simply change the stream ordering to something you would prefer instead.
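"Change the stream ordering" amounts to re-sorting the stream list before the first entry is picked. A minimal sketch, with made-up stream metadata (the project's real manifest objects carry more fields):

```python
# Hypothetical stream descriptors; only the ordering logic matters here.
streams = [
    {"id": "240p", "height": 240},
    {"id": "1080p", "height": 1080},
    {"id": "720p", "height": 720},
]

# "Best quality first": sort by height, descending, and take the head.
# Re-key or reverse this sort to prefer something else (e.g. smallest file).
ordered = sorted(streams, key=lambda s: s["height"], reverse=True)
best = ordered[0]
```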
It is very probable that the code is dysfunctional by the time you see this. Due to the nature of the project – a lot of dependencies and assumptions are made on third-party content / services outside of my control.
KubeDagger will act as a rootkit that leverages multiple eBPF features to implement offensive security techniques. We implemented most of the features you would expect from a rootkit: obfuscation techniques, container breakouts, persistent access, command and control, pivoting, network scanning, Runtime Application Self-Protection (RASP) bypass, etc.
The application herein is provided for educational purposes only and for those who are willing and curious to learn about ethical hacking, security and penetration testing with eBPF.
Do not attempt to use these tools to violate the law. The author is not responsible for any illegal action. Misuse of the provided information can result in criminal charges.
System requirements
golang 1.13+
This project was developed on an Ubuntu Focal machine (Linux Kernel 5.4)
Kernel headers are expected to be installed in lib/modules/$(uname -r) (see Makefile)
go-bindata (go get -u github.com/shuLhan/go-bindata/...)
Build
To build the entire project, run:
# ~ make
To install kubedagger-client (copies kubedagger-client to /usr/bin/), run:
# ~ make install_client
Getting started
KubeDagger contains the entire rootkit. It needs to run as root. Run sudo ./bin/kubedagger -h to get help. You can simply run sudo ./bin/kubedagger to start the rootkit with default parameters.
# ~ sudo ./bin/kubedagger -h
Usage:
kubedagger [flags]
Flags:
--append (file override feature only) when set, the content of the source file will be appended to the content of the target file
--comm string (file override feature only) comm of the process for which the file override should apply
--disable-bpf-obfuscation when set, kubedagger will not hide itself from the bpf syscall
--disable-network-probes when set, kubedagger will not try to load its network related probes
--docker string path to the Docker daemon executable (default "/usr/bin/dockerd")
-e, --egress string egress interface name (default "enp0s3")
-h, --help help for kubedagger
-i, --ingress string ingress interface name (default "enp0s3")
-l, --log-level string log level, options: panic, fatal, error, warn, info, debug or trace (default "info")
--postgres string path to the Postgres daemon executable (default "/usr/lib/postgresql/12/bin/postgres")
--src string (file override feature only) source file which content will be used to override the content of the target file
--target string (file override feature only) target file to override
-p, --target-http-server-port int Target HTTP server port used for Command and Control (default 8000)
--webapp-rasp string path to the webapp on which the RASP is installed
# ~ sudo ./bin/kubedagger
In order to use the client, you’ll need to have an HTTP server to enable the Command and Control feature of the rootkit. We provide a simple webapp that you can start by running ./bin/webapp. Run ./bin/webapp -h to get help.
# ~ ./bin/webapp -h
Usage of ./bin/webapp:
-ip string
ip on which to bind (default "0.0.0.0")
-port int
port to use for the HTTP server (default 8000)
# ~ ./bin/webapp
Once both kubedagger and the webapp are running, you can start using kubedagger-client. Run kubedagger-client -h to get help.
# ~ kubedagger-client -h
Usage:
kubedagger-client [command]
Available Commands:
docker Docker image override configuration
fs_watch file system watches
help Help about any command
network_discovery network discovery configuration
pipe_prog piped programs configuration
postgres postgresql authentication control
Flags:
-h, --help help for kubedagger-client
-l, --log-level string log level, options: panic, fatal, error, warn, info, debug or trace (default "info")
-t, --target string target application URL (default "http://localhost:8000")
Use "kubedagger-client [command] --help" for more information about a command.
Examples
This section contains only 3 examples. We invite you to watch our BlackHat USA 2021 and Defcon 29 talks to see a demo of all the features of the rootkit. For example, you’ll see how you can use Command and Control to change the passwords of a Postgresql database at runtime, or how we successfully hid the rootkit on the host.
We also demonstrate 2 container breakouts during our BlackHat talk, and a RASP bypass during our Defcon talk.
Exfiltrate passive network sniffing data
On startup, by default, the rootkit will start listening passively for all the network connections made to and from the infected host. You can periodically poll that data using the network_discovery command of kubedagger-client. It may take a while to extract everything, so be patient …
The final step is to generate the svg file. We used the fdp layout of Graphviz.
# ~ fdp -Tsvg /tmp/network-discovery-graph-453667534 > ./graphs/passive_network_discovery.svg
Run a port scan on 10.0.2.3, from port 7990 to 8010
Note: for this feature to work, you cannot run kubedagger-client locally. If you’re running the rootkit in a guest VM, expose the webapp port (default 8000) of the guest VM to the host and make the kubedagger-client request from the host.
To request a port scan, use the network_discovery command. You can specify the target IP, start port and port range.
On the infected host, you should see debug logs in /sys/kernel/debug/tracing/trace_pipe. For example, you should see the initial ARP request to resolve the MAC address of the target IP, and then a list of SYN requests to probe the ports from the requested range.
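KubeDagger emits its probes as raw SYN packets from eBPF; the same idea can be illustrated from userspace with an ordinary TCP connect scan. This sketch is not the rootkit's implementation, just a self-contained demonstration of port probing against a local listener:

```python
import socket

def scan(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Demo: open one local listener, then scan a small range around its port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
found = scan("127.0.0.1", range(port - 2, port + 3))
listener.close()
```

A SYN scan differs only in that it never completes the handshake, which is why the eBPF version is stealthier than this connect-based equivalent.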
Once the scan is finished, you can exfiltrate the scan result using the network_discovery command. You need to add the active flag to request the network traffic generated by the network scan. It may take a while to extract everything so be patient …
The final step is to generate the svg file. We used the fdp layout of Graphviz.
# ~ fdp -Tsvg /tmp/network-discovery-graph-3064189396 > ./graphs/active_network_discovery.svg
Dump the content of /etc/passwd
This is a three-step process. First, you need to ask the rootkit to start looking for /etc/passwd. You can use the fs_watch command of kubedagger-client to do that.
Then, you need to wait until a process on the infected host opens and reads /etc/passwd (run sudo su to simulate this step). The rootkit will copy the content of the file as it is sent back to the process by the kernel.
Finally, you can exfiltrate the content of the file using the fs_watch command again.
The idea of the library is to save dagger components and return them when they are needed.
Every component is saved in the static store and removed when the owner is going to be destroyed.
What’s new
2.1.0
*InjectionManagers have two new methods to find a component. The methods return null if a component is not found, and no exceptions are thrown.
// finds a component by type
XInjectionManager
    .findComponentOrNull<SomeComponent>()
    ?.someMethod()

// finds a component by predicate
XInjectionManager
    .findComponentOrNull { /* predicate */ }
    ?.someMethod()
The ComponentNotFoundException class inside the me.vponomarenko.injectionmanager.exeptions package is deprecated because exeptions was misspelled; use the ComponentNotFoundException inside the me.vponomarenko.injectionmanager.exceptions package instead. The new ComponentNotFoundException class inherits from the old one.
2.0.1
If you use the *InjectionManager.findComponent() method and the component was not found, the ComponentNotFoundException will be more informative, because the type of the component will be printed.
//before
Caused by: me.vponomarenko.injectionmanager.exeptions.ComponentNotFoundException:
Component for the Function1<java.lang.Object, java.lang.Boolean> was not found
...
//after
Caused by: me.vponomarenko.injectionmanager.exeptions.ComponentNotFoundException:
Component of the FragmentChildB type was not found
...
But if you use the *InjectionManager.findComponent(predicate) method, the exception’s message will be the same as it was in 2.0.0.
2.0.0
The main difference between the 2.0.0 version and the 1.1.0 version is that the IHasComponent interface is now generic. Therefore, you must specify the class of the component.
The following example will be for the AndroidX. If you want to use this library for the AppCompat packages, just change XInjectionManager to CompatInjectionManager.
First things first, add the lifecycle callback listeners. At this step the library registers lifecycle listeners for future activities and fragments, so components bound to an activity or fragment will be destroyed right after the destruction of their owner.
For example, the FirstFragment (this also works for activities) has a component, so you must implement the IHasComponent interface and call the bindComponent method of the XInjectionManager class. Once the component is bound, it is available to other classes, but make sure that those classes will not live longer than the owner of the component.
If the fragment doesn’t have its own component and uses the AppComponent to inject the dependencies, just call the findComponent method and specify the class of the component and that is all.
You can clone the repository and use setuptools for the most up-to-date version:
git clone https://github.com/ApeWorX/ape-vyper.git
cd ape-vyper
python3 setup.py install
Quick Usage
First, place Vyper contract source files (files with extension .vy) in your Ape project’s contracts folder.
An example Vyper contract can be found here.
Then, from your root Ape project folder, run the command:
ape compile
The .vy files in your project will compile into ContractTypes that you can deploy and interact with in Ape.
Contract Flattening
For ease of publishing, validation, and some other cases it’s sometimes useful to “flatten” your contract into a single file.
This combines your contract and any imported interfaces together in a way the compiler can understand.
You can do so with a command like this:
ape vyper flatten contracts/MyContract.vy build/MyContractFlattened.vy
Warning
This feature is experimental. Please report any bugs you find when trying it out.
Compiler Version
By default, the ape-vyper plugin uses the contract’s version pragma to determine the compiler version.
However, you can also configure the version directly in your pyproject.toml file:
[tool.ape.vyper]
version = "0.3.7"
EVM Versioning
By default, ape-vyper will use whatever version of EVM rules are set as default in the compiler version that gets used,
or based on what the #pragma evm-version ... pragma comment specifies (available post-v0.3.10).
Sometimes, you might want to use a different version, such as deploying on Arbitrum or Optimism where new opcodes are not supported yet.
If you want to require a different version of EVM rules to use in the configuration of the compiler, set it in your pyproject.toml like this:
[tool.ape.vyper]
evm_version = "paris"
NOTE: The config value chosen will not override if a pragma is set in a contract.
Interfaces
You cannot compile interface source files directly.
Thus, you must place interface files in a directory named interfaces inside your contracts_folder, e.g. contracts/interfaces/IFace.vy.
Then, these files can be imported in other .vy source files via:
import interfaces.IFace as IFace
Alternatively, use JSON interfaces from dependency contract types by listing them under the import_remapping key.
You can install versions of Vyper using the ape vyper vvm CLI tools.
List installed versions using:
ape vyper vvm list
To list the available Vyper versions, do:
ape vyper vvm list --available
Install more versions using the command:
ape vyper vvm install 0.3.7 0.3.10
Custom Output Format
To customize Vyper’s output format (like the native -f flag), you can configure the output format:
For example, to only get the ABI, do:
[tool.ape.vyper]
output_format = ["abi"]
To do this using the CLI only (adhoc), use the following command:
ape compile --config-override '{"vyper": {"output_format": ["abi"]}}'
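Since --config-override takes a JSON document, it can be less error-prone to build the string programmatically than to hand-quote it in the shell. A small sketch:

```python
import json

# Build the override document, then embed it in the CLI invocation.
override = json.dumps({"vyper": {"output_format": ["abi"]}})
command = f"ape compile --config-override '{override}'"
```

json.dumps guarantees valid JSON, so the only remaining quoting concern is the outer single quotes for the shell.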
Solc JSON Format
ape-vyper supports the solc_json format.
To use this format, configure ape-vyper like:
[tool.ape.vyper]
output_format = ["solc_json"]
Note: Normally, in Vyper, you cannot use solc_json with other formats.
However, ape-vyper handles this by running separately for the solc_json request.
Be sure to use the --force flag when compiling to ensure you get the solc JSON output.
ape compile file_needing_solc_json_format.vy -f
To get a dependency source file in this format, configure and compile the dependency.