---
title: "Automating email aliases using Mailinabox and curl"
date: 2021-04-30
draft: false
---
With every piece of furniture, website and bowl of instant ramen asking you to register a user account, it's getting hard to remember anything. Furthermore, the companies that so gladly hoard our contact information are having a hard time protecting it, and it seems the age of password managers and email aliases is upon us.

One cool thing about email aliases is that they let you know which company is sending most spam your way. As a big fan of self-hosting, I have a Mailinabox, and I recently found out about its [REST API](https://mailinabox.email/api-docs.html). In my pursuit of comfiness, I tried to automate the creation of aliases.

# The credentials

First of all, all of Mailinabox's API needs admin authentication, either an `api_key` or a `user:password` tuple. Since the former needs the latter, I'll just use user and password. Even though `curl` tries its best to hide its command line arguments from terminal history, it's just not good practice to pass `-u admin@mydomain:mypassword` directly. Reading a bit through its manual, I found that the option `-K -` lets you pass config parameters from stdin, like so:

```bash
$ gpg -qd credentials.gpg | curl -K - -X GET "https://{host}/admin/mail/users?format=<string>"
```

Where `credentials.gpg` is an encrypted file with the text `-u mail:password`. This way, you can safely authenticate without fear of your credentials leaking into logs or `ps` output. If everything worked just right, you should see a list of all of your mail users.

Note that for GPG to be able to ask for the secret key's password interactively, you may need to define `GPG_TTY` in your `.bashrc` or equivalent.

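For reference, the usual incantation (straight out of the `gpg-agent` manual) is:

```shell
# ~/.bashrc (or your shell's equivalent): point gpg's pinentry at the current terminal
export GPG_TTY=$(tty)
```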
# Creating the alias

The command for alias creation would be something like this:

```bash
$ gpg -qd credentials.gpg | curl -K - -X POST "https://{host}/admin/mail/aliases/add" \
    -d "address=<new-address>" \
    -d "forwards_to=<true-address>"
```

This is fine and usable, if a bit tedious. I like to set my aliases to random strings, so malicious actors cannot deduce the true email address. A simple shell script that creates a random email alias would look something like this:

```bash
#!/bin/sh

DOMAIN="@<your domain>"
HOST="<your host>"
NEW_ADDRESS=$(tr -dc a-z0-9 </dev/urandom | head -c 13 ; echo '')
TRUE_ADDRESS="<your mail@your domain>"

gpg -qd credentials.gpg | curl -X POST "https://${HOST}/admin/mail/aliases/add" \
    -d "update_if_exists=0" \
    -d "address=${NEW_ADDRESS}${DOMAIN}" \
    -d "forwards_to=${TRUE_ADDRESS}" \
    -K - | grep "alias added" && echo "${NEW_ADDRESS}${DOMAIN}"
```

We're using 13 alphanumeric characters (all lowercase) to maximize compatibility. We could pipe the new email address into something like `xclip -sel clip` to have it in our clipboard right away. Note that we need to grep curl's stdout to know whether the alias was actually created, since curl still returns 0 on a failed creation.

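The generation step can be sanity-checked on its own, without touching Mailinabox; this snippet just confirms that the local part has exactly 13 characters from the allowed set:

```shell
# Candidate alias local part: 13 lowercase alphanumeric characters
NEW_ADDRESS=$(tr -dc a-z0-9 </dev/urandom | head -c 13 ; echo '')

echo "${#NEW_ADDRESS}"                       # length: 13
echo "$NEW_ADDRESS" | grep -c '^[a-z0-9]*$'  # 1, i.e. the charset matches
```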
# Making it comfier

Making this script into a keyboard hotkey that does not need a terminal is a pain, since `gpg`'s pinentry depends on `gpg-agent`. Rather than dealing with that, I'd recommend either using a new passwordless private key for the file or manually creating an input dialog box with something like `zenity`.

This is a script that works as a window-manager-launched keyboard shortcut:

```bash
#!/bin/sh

DOMAIN="@<your domain>"
HOST="<your host>"
NEW_ADDRESS=$(tr -dc a-z0-9 </dev/urandom | head -c 13 ; echo '')
TRUE_ADDRESS="<your mail@your domain>"

zenity --password | \
    gpg --batch --passphrase-fd 0 --pinentry-mode loopback -q -d /full/path/to/credentials.gpg | \
    curl -s -X POST "https://${HOST}/admin/mail/aliases/add" \
        -d "update_if_exists=0" \
        -d "address=${NEW_ADDRESS}${DOMAIN}" \
        -d "forwards_to=${TRUE_ADDRESS}" \
        -K - | grep "alias added" && echo "${NEW_ADDRESS}${DOMAIN}" | \
    xclip -sel clip && notify-send "Email created" || notify-send "Email creation failed"
```

Of course, you may dispense with the `notify-send`. I haven't delved into it, since you don't always need a password along with the email and most browsers have very decent password generators, but automating [KeePassXC's great CLI tools](https://www.mankier.com/1/keepassxc-cli) (or whatever alternative floats your boat) to generate and add passwords for these accounts at the same time shouldn't be terribly hard.

---
title: "Automating tags using Make and GCC"
date: 2021-04-29
draft: false
---
I've been messing around with LSP and tag systems lately, trying to make my Emacs setup feel a little more _comfy_. However, try as I might, making LSP servers find all project headers with [ccls](https://github.com/MaskRay/ccls) has proven harder than it has any right to be.

You see, all LSP solutions that I am aware of have a bitch of a time dealing with non-CMake projects. I shy away from complicated build tools, favoring makefiles and simple shell scripts in my projects, and I really don't feel like switching to CMake just for source indexing. I tried using [Bear](https://github.com/rizsotto/Bear) to generate a `compile_commands.json` file that `ccls` could parse, but it still didn't like my project.

Then I went back to [GNU Global](https://www.gnu.org/software/global/), the superior tagging solution as far as I've researched, which I had [already used](https://github.com/Phireh/ast3roiDS/wiki/Syntax-Highlighting) (or tried to: back in the day I didn't really get the difference between syntax highlighting and source indexing). However, even if `global` is the superior choice for tags, I find the `gtags` generation annoying and difficult. It *really* doesn't want you to generate tags from files outside the root source directory. Doing so requires its own tag generation *for each* include path, and that you declare an environment variable `GTAGSLIBPATH` with a list of tagged directories... not ideal for what I'd like to be an automated step.

The funny thing is, `gtags` _can_ receive a list of files to be tagged as input, in case you have the path of every file needed by your project. It just _refuses_ to work with it. It will spit something like this at you:

```
Warning: '/usr/include/FLAC/all.h' is out of source tree. ignored.
```

... why?

# Back to ctags

Good ol' ctags does not have this strange limitation. The command `ctags -L` accepts a list of filenames to be tagged as input, and it does not discriminate against files outside the root source tree.

The only question left is whether there is a way to automate such a process. There is!

Some time ago I delved into [GCC's preprocessor flags](https://gcc.gnu.org/onlinedocs/gcc/Preprocessor-Options.html), and the options `-M` and `-MM` caught my eye. They ask the compiler for a list of all included files after preprocessing, without actually compiling the code. The only problem is that they're designed with GNU Make in mind, so we need to edit the output of `gcc -M` before using it.

Just as I was finishing my `awk` one-liner, I found out that [I am not the first person to have thought of this](https://www.topbug.net/blog/2012/03/17/generate-ctags-files-for-c-slash-c-plus-plus-source-files-and-all-of-their-included-header-files/). Props to you, Hong Xu. Even though I'm not first to this, I discovered a cool programmer blog.

Still, I find my solution (using a makefile rule) comfier than his script. Here it is, in all its glory:

```make
tags:
	@$(CC) $(CFLAGS) $(LIBS) $(INCLUDES) -M $(SOURCES) | awk '{ for (i=1; i<=NF; ++i) if ($$i != "\\" && $$i !~ /:$$/) print $$i }' | ctags -L -
```

Some things to note:

1. `$$` tells `make` to escape the `$` character, so it can be passed to `awk`.
1. Replace `ctags` with `etags` depending on your editor.
1. If you don't have `awk` on hand you could just use Xu's `sed` commands.
1. You can replace `-M` with `-MM` if you don't feel you need to tag the system's headers.

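To see what the `awk` step has to accomplish, here it is run over a hand-written sample of `gcc -M`-style output (the filenames are made up for illustration). The filter drops the `main.o:` target and the backslash continuations, leaving one filename per line, ready for `ctags -L -`:

```shell
# Two lines in Make dependency format, as gcc -M might emit them
printf 'main.o: main.c common.h \\\n /usr/include/stdio.h\n' |
    awk '{ for (i = 1; i <= NF; ++i) if ($i != "\\" && $i !~ /:$/) print $i }'
# main.c
# common.h
# /usr/include/stdio.h
```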
---
title: "Contributing patches to source-based distributions, and the importance of doing so"
date: 2021-09-01T21:58:17+02:00
draft: true
---
# Motivation

I've recently been trying to look into GDB's power-user features, in a weak attempt to stop spraying panicked logging all over my programs each time I run into a problem. As it turns out, GDB has plenty of hidden gems and tricks for the brave few who venture into its docs. For example, it includes [a full Python interpreter](https://sourceware.org/gdb/current/onlinedocs/gdb/Python.html#Python) that you can invoke from its CLI. It is often used to write custom pretty-printers, among other things.

Technically, Python is not a strict dependency of GDB, but rather [an optional one](https://sourceware.org/git/?p=binutils-gdb.git;a=blob;f=gdb/configure;h=f0b1af4a6ea5e303525db8dcea98c45a4ef5b28d;hb=HEAD#l1653) that you get if you choose to build it with integrated Python support. But since Python is ubiquitous among the majority of GNU/Linux (or GNU+Linux, wink wink) distributions, there is no harm in including it by default. Debian, Arch, Gentoo, Void Linux, etc. all include it by default.

So far, so good. But I also found out that GDB [has Guile support](https://sourceware.org/gdb/current/onlinedocs/gdb/Guile.html#Guile), too. If you're not familiar with it, GNU Guile is GNU's implementation of Scheme, which they have been steadily adding to many of their programs for things like scripting. I am quite fond of Lisp languages, so I decided to try it myself. However, Guile does not enjoy Python's ubiquity: you will most likely encounter this error if you attempt to execute Guile code from GDB:

```
$ gdb -batch -n -ex "gu (+ 1 2)"
Guile scripting is not supported in this copy of GDB.
```

However, a typical installation will execute Python just fine:

```
$ gdb -batch -n -ex "py print(1 + 2)"
3
```

Void, Debian, Ubuntu, Gentoo — I'd guess almost all distros would fail here. The only one that includes it by default is Arch Linux. This means, however, that Arch has included guile-2.2 as a dependency of GDB for a quite obscure piece of functionality that most people won't care about. This is the tradeoff that distro maintainers make for us: Arch has opted for the batteries-included approach. Most of its packages pleasantly work for anything you'd need out of the box, at the cost of increased disk bloat and dependencies.
# Reflecting on source distributions

If I were using a binary distribution, I'd be out of luck. However, I am a Gentoo citizen, so, in exchange for my time, I get to make whichever tradeoffs I want myself. Gentoo packages usually have USE flags, an abstraction over whichever method the program in question uses to configure its compile-time parameters. But if I check GDB's list of USE flags, I see no such flag:

```
$ equery -N u gdb | grep guile
$
```

...

I already know from peeking into [GDB's source tree](https://sourceware.org/git/?p=binutils-gdb.git;a=blob;f=gdb/configure;h=f0b1af4a6ea5e303525db8dcea98c45a4ef5b28d;hb=HEAD#l11568) that its configure phase accepts the `--with-guile=<yes/no/auto/version>` argument; I only have to expose it through the ebuild.

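What that exposure might look like: a hypothetical excerpt of a local ebuild (not the actual upstream file) adding a `guile` USE flag. `use_with` is Portage's helper, expanding to `--with-guile` or `--without-guile` depending on the flag:

```shell
# gdb-<version>.ebuild (hypothetical excerpt)
IUSE="guile"

src_configure() {
	# expands to --with-guile or --without-guile depending on USE
	econf $(use_with guile)
}
```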
Looking into [its commit history](https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=7a79de33ce600f92b0c1affc000eb0fd5b65e65f), I can see that Guile support was dropped circa 2015, which is a somewhat long time ago. I check Guile's package status:

```
$ eix -l0 guile
[I] dev-scheme/guile
     Available versions:
     (12)
     [M] 1.8.8-r3 (12/8)^t [debug debug-freelist debug-malloc +deprecated discouraged emacs networking nls readline +regex +threads] Matt Turner <mattst88@gentoo.org> (2019-09-01) TeXmacs is the only remaining package in tree that requires guile-1.8, which is unsupported upstream. A TeXmacs port to Guile-2 has been in progress for a few years. Bug #436400
     2.0.14-r4 (12/22) [debug debug-malloc +deprecated +networking +nls +regex +threads] ["regex"]
     2.2.6 (12/2.2-1)^s [debug debug-malloc +deprecated +networking +nls +regex +threads] ["regex"]
     2.2.7-r1 (12/2.2-1)^s [debug debug-malloc +deprecated +networking +nls +regex +threads] ["regex"]
     [M]~ 3.0.7 (12/3.0-1)^s [debug debug-malloc +deprecated +jit +networking +nls +regex +threads] ["regex"] Sam James <sam@gentoo.org> (2020-10-05) Masked for testing. New major versions of Guile often break reverse dependencies. Guile in Gentoo is not slotted, so let's be cautious. bug #705554, bug #689408.
     Installed versions:  2.2.7-r1(12/2.2-1)^s(17:37:48 15/08/21)(deprecated networking nls regex threads -debug -debug-malloc)
     Homepage:            https://www.gnu.org/software/guile/
```

There are three non-masked versions, which in Gentoo parlance means something like "stable". Furthermore, GDB is compatible with Guile versions 2.0, 2.2 and 3.0. It should then be perfectly OK to introduce the flag again.

# Preparing the grounds

Creating a custom, local ebuild for GDB that lets me compile it differently from upstream Gentoo, and never letting anyone know, is perfectly acceptable and good. However, it's good practice to share such improvements with the community. The method for contributing package ebuilds to the project has changed over the years, moving from its Bugzilla to [GitHub pull requests](https://github.com/gentoo/gentoo/pulls) on the Gentoo mirror repository. They don't get merged using GitHub; rather, the devs manually apply accepted commit patches to the underlying Gentoo git, much like projects based around mailing lists. Even if it is just a mirror of their own architecture, I can't say I feel too happy about this ad-hoc maneuver. Oh well. [Submitting git patches via Bugzilla](https://bugs.gentoo.org/enter_bug.cgi?product=Gentoo%20Linux) also seems like it's still an option.

For licensing reasons, these commits need to be signed, both in the private-key and the real-name sense. Gentoo makes it sound needlessly ominous, calling it GLEP 76's [Certificate of Origin](https://www.gentoo.org/glep/glep-0076.html#certificate-of-origin). In practical terms, it just means that commit messages we intend to merge into upstream should include this footer:

`Signed-off-by: Full name <e-mail>`

Where you are actually supposed to write the `<>` brackets. For working programmers who can't hand over the copyright of their code, or folks who don't want to share their identities, it would be better to simply poke other people into writing these patches themselves via mailing list or IRC.

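You don't have to type the footer by hand: `git commit -s` (alias `--signoff`) appends exactly that trailer, built from your configured `user.name` and `user.email`. The commit message below is made up for illustration:

```shell
# -s / --signoff appends "Signed-off-by: Name <email>" to the commit message
git commit -s -m "sys-devel/gdb: re-add optional guile support"
```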
I, however, am a magnanimous G~~od~~entoo user. If I am to submit a pull request, the first step is cloning Gentoo's GitHub repository, containing the build instructions for all standard packages. Following the "Repository Settings" section [in this user guide](https://wiki.gentoo.org/wiki/Gentoo_git_workflow#Repository_settings), you get a repo ready for experimentation.

---
title: "Git-powered runtime code injection"
date: 2021-09-16T10:09:35+02:00
draft: false
---
# Motivation

I've been following the work on [Handmade Hero](https://handmadehero.org/) and the [Handmade Network](https://handmade.network/) for a while. It's hard to keep up with the hundreds of episodes of the original show, but the first 30 or so were an instant classic on dealing with the irks and quirks of platform layer code in videogames, and on why you would even want to do so.

One of the great features that came out of the platform layer work was runtime code injection, around [episode 021](https://guide.handmadehero.org/code/day021/). It is surprisingly easy to replicate, provided that you architect your program just right. The program basically checks whether you have re-compiled it and hot-loads the new code at runtime without any delays.

HMH only checks for _new_ code, though. I figured it would be a cool proof of concept to write a program that could go _backwards_, too. Using git and [its C API](https://libgit2.org/), it could load any commit, older or newer than the current runtime, and patch the code in using a similar method to HMH. Instead of being something entirely on the backend, this would actually be a fully-fledged menu inside the program showing all commits.

So, how did I do it?
# Platform-dependent vs platform-independent code

There are three basic ways to separate our platform-dependent code, although only one gives us the cool property of being able to hot-reload code. They are the following:

1. `#ifdef`s:

A lot of older cross-platform code is chock-full of these. It's the most ad-hoc, "just works" type of cross-platform code, and reads something like this:

```c
// In the middle of normal-looking code
#ifdef __WIN32
    LoadLibraryA(libname);
#elif __linux__
    dlopen(libname, RTLD_NOW);
#endif
// Back to the platform-independent code ...
```

It doesn't take a genius to see that this gets hard to manage if you use these `#ifdef` statements too much. In particular, debugging can become a pain, since the preprocessor is cutting away a lot of lines of unneeded source.

---
2. Abstract away the platform layer:

A saner, more popular way to deal with platform boundaries is to abstract the services the platform provides and call our own platform-independent encapsulation. It would look something like this:

```c
// main.c
int main()
{
    InitWindow(width, height, "Title");
    SetTargetFPS(60);

    while (!WindowShouldClose())
    {
        // More platform-independent code here
    }
    CloseWindow();
}
```

```c
// common.h
// These are only the signatures. Implementations go in windows_code.c / linux_code.c
void InitWindow(int width, int height, char *title);
void SetTargetFPS(int target);
bool WindowShouldClose(void);
void CloseWindow(void);
```

```Makefile
# Makefile
SRC=main.c common.h
ifeq ($(OS),Windows_NT)
SRC+=windows_code.c
else # presuppose Linux
SRC+=linux_code.c
endif

main: $(SRC)
	gcc $(SRC) -o main
```

Each would-be platform-dependent function is abstracted away in common.h, and its implementation is conditionally compiled with either Linux- or Windows-specific code. This is a perfectly fine way to architect your code. It has, however, one small problem: what happens if we have to do things in a different order on another operating system? Like, for example, a platform where we need to set the framerate _before_ we create the window.

---
3. Abstract away the game:

This is the method used in Handmade Hero. Instead of our game calling the operating system when it needs a service, it is the platform-dependent code that serves as the main entrypoint of the program. Only after it has prepared everything needed for our program to run does it call our game code. It looks like this:

```c
// linux.c
int main()
{
    XOpenDisplay(NULL);
    // some more linux-specific code...
    XCreateWindow(/* params */);

    game_state_t game_state;
    bool running;
    while (running)
    {
        waitforvsync();
        running = record_input(&game_state);
        game_update_and_render(&game_state); // <- the only platform-independent function we call
    }
}
```

```c
// common.h
// both sides need to know about this structure to work together
typedef struct {
    // struct definition
} game_state_t;

void game_update_and_render(game_state_t *game_state);
```

```c
// game.c
void game_update_and_render(game_state_t *game_state)
{
    // the actual game code goes here
}
```

```Makefile
# Makefile
SRC=game.c common.h
ifeq ($(OS),Windows_NT)
SRC+=windows.c
else # presuppose Linux
SRC+=linux.c
endif

main: $(SRC)
	gcc $(SRC) -o main
```

Notice that the only point of interaction between the two layers is a single function. The platform-side code has total control over things like input, window creation and timing, and the game itself becomes much simpler, concerning itself only with the details inside the game state it can see. Its only job is advancing the game to its next state and returning control to the platform layer.

Not only that: this architecture allows some more tricks, like dynamically linking the game code.
# Game as a library

Since we have abstracted our game code, compiling and linking it as a dynamic library is straightforward; we just need two separate [compilation units](https://www.cs.auckland.ac.nz/references/unix/digital/AQTLTBTE/DOCU_015.HTM):

```Makefile
# ...
CFLAGS=-fPIC # position-independent code, needed for loading as a library

libgame.so: game.c
	gcc $(CFLAGS) -shared game.c -o libgame.so

main: linux.c libgame.so
	gcc linux.c -o main -ldl
```

If we wanted, we could add `-lgame` to `main`'s target and have the linker automagically resolve `game_update_and_render` for us. That would be what is called _early binding_: the program knows the location of a function before it actually executes. But since we want the ability to change the code at runtime, we're interested in _late binding_ here.

The only real problem here is naming: we have to resort to some preprocessor trickery, but it works:

```c
// common.h
#define GAME_UPDATE(funcname) void funcname(game_state_t *g)
typedef GAME_UPDATE(game_update_f);

// We save the game code as function pointers so we can transparently change it
typedef struct {
    game_update_f *game_update;
} game_code_t;
```

```c
// game.c
// This macro just expands to "void game_update(game_state_t *g)"
GAME_UPDATE(game_update)
{
    // The actual game code
}
```

You can have as many or as few entrypoints into the game code as you desire. For example, having render and update in the same or in different functions are both totally valid approaches. You just have to make sure to correctly link all symbols when loading:

```c
// linux.c
// ...
void *library_handle = dlopen("libgame.so", RTLD_NOW);
game_code_t game_code;
game_code.game_update = dlsym(library_handle, "game_update");
// ...
while (running)
{
    // ...
    game_code.game_update(game_state);
}
```

Linux and Windows have different extensions and methods for linking, but the essence remains the same. With this setup, runtime reloading of code is pretty straightforward. The way they do it in Handmade Hero is by checking the timestamp of the game library and reloading the symbols when it changes. Essentially, the game code is reloaded if you recompile the game while it runs.

This little trick has its limitations:

1. Reloading platform code is not possible, for obvious reasons, so it won't help while developing the platform layer (which isn't _that much_ of a problem, since it's a small percentage of the actual work).
2. You shouldn't save intermediate game state in the game layer: things like `static` variables inside game functions are a no-no while recompiling and relinking code. Keeping everything in its designated memory takes some discipline.
3. There are no safety rails. For example, changing the structure of `game_code_t` is a change incompatible with previous versions of the program, and will likely crash it if done at runtime. On the flip side, it is also an excellent way to test for backwards and forwards compatibility.
# A git-aware game

This trick is pretty powerful by itself, but I figured combining it with version control would make it even better. We'll be using `libgit2` to retrieve commit information from the folder the game lives in, and presenting it inside an ncurses menu (granted, ncurses is not a very platform-independent way of rendering, but it's simple enough for a proof of concept).

libgit2's development is actually somewhat independent of the `git` CLI, so it's not exactly a 100% replacement, and it requires a bit of work to get things going. Since this is something that both the platform and game layers need to see (for loading and rendering the menu, respectively), I added it to the `game_state`:

```c
struct commit_node_t;
typedef struct commit_node_t commit_node_t;

struct commit_node_t {
    commit_node_t *next;
    char summary[32];
    char author[32];
    char email[32];
    char date_as_string[32];
    git_oid oid;            // SHA-1 hash of GIT_OID_RAWSZ (20) bytes
    git_oid tree_oid;       // the hash of the tree referenced by this particular commit
    char oid_as_string[41]; // 20 oid bytes * 2 chars to represent each byte + terminating \0
};

typedef struct {
    // ...
    commit_node_t *commit_list;
    git_oid platform_oid;   // commit the platform layer is based on
    git_oid game_oid;       // commit of the currently loaded game layer
    git_oid selected_oid;   // commit we just selected on the menu
    // ...
} game_state_t;
```

I decided to represent the commits as an intrusive list, but that's an arbitrary decision.

Initially populating this list is done with a `revwalker`:

```c
int read_git_repo(git_repository *repo, commit_node_t **list_head)
{
    git_revwalk *walker = NULL;
    git_commit *commit = NULL;
    git_oid oid;

    // We use a revwalker starting from HEAD to retrieve commits one by one
    git_revwalk_new(&walker, repo);
    git_revwalk_sorting(walker, GIT_SORT_TOPOLOGICAL);
    git_revwalk_push_head(walker);

    while (git_revwalk_next(&oid, walker) != GIT_ITEROVER)
    {
        // populate list with commit data
    }
    return 0;
}
```

Once a commit is selected via the menu, we just need to extract its files for compilation. A quick and dirty way to do so is creating a temporary folder with `mkdtemp`:
```c
char *dump_git_tree(char *dirname, git_oid oid, git_repository *repo)
{
    git_tree *tree;
    git_tree_lookup(&tree, repo, &oid);

    // dirname is a template like "tempXXXXXX"; mkdtemp fills it in place
    mkdtemp(dirname);
    int n = git_tree_entrycount(tree);
    for (int i = 0; i < n; ++i)
    {
        const git_tree_entry *entry;
        git_object *object;

        entry = git_tree_entry_byindex(tree, i);
        git_tree_entry_to_object(&object, repo, entry);
        if (git_object_type(object) == GIT_OBJECT_BLOB)
        {
            git_blob *blob = (git_blob *) object;
            // Construct the filename of the new file: tempdir/filename
            char filepath[PATH_MAX];
            strcpy(filepath, dirname);
            strcat(filepath, "/");
            strcat(filepath, git_tree_entry_name(entry));

            // Actually write the file contents to the temp folder
            FILE *fp = fopen(filepath, "w");
            fwrite(git_blob_rawcontent(blob), (size_t)git_blob_rawsize(blob), 1, fp);
            fclose(fp);
        }
    }
    return dirname;
}
```

Then, we compile and load the extracted code. One way to do it is forking our process and waiting for the compilation to finish:
```c
char *tempdir = dump_git_tree("tempXXXXXX", game_state.selected_oid, game_state.repo);
int pid = fork();

if (!pid) // we are the child process
{
    char command[PATH_MAX];
    strcpy(command, "--directory=");
    strcat(command, "./");
    strcat(command, tempdir);

    // Redirect output to /dev/null to avoid messing up the screen
    int fd = open("/dev/null", O_WRONLY);
    dup2(fd, 1);
    dup2(fd, 2);

    execl("/usr/bin/make", "/usr/bin/make", "-s", command, "libgame.so", (char*) NULL);
}
// Wait for compilation to finish
wait(NULL);

// ...

// Actually do the code injection
char libpath[PATH_MAX] = "./";
strcat(libpath, tempdir);
strcat(libpath, "/libgame.so");

load_functions(&game_code, libpath);
```

|
|
||||||
And just like that, our `game_code` struct has been updated with new code. The finished product looks something like this:
|
|
||||||
|
|
||||||

|
|
||||||
|
|
||||||
I pushed a [GitHub repo](https://github.com/Phireh/runtime-git) with the complete proof of concept of this simple idea. You can edit anything in `game.so`, commit the changes, and play a bit changing the game code in real time. Right now it has a simple implementation of [Conway's game of life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) rendered in ncurses.
|
|
||||||
|
|
||||||
# Future work

Libgit2 is a bit verbose and there is quite a bit of error handling code involved, so I posted this as soon as I had a minimal runnable example. There's a ton of sugar you can add to this cake, though:

- ~~Add C++ support. C++ implements overloading via name mangling, so we'd need to ask for the mangled name in our `dlsym` calls.~~ Edit: C++'s name mangling is platform-specific, so it's better to simply use `extern "C"` and avoid dealing with it. We're only loading a few non-overloaded symbols, anyhow.
- Auto-detect new commits _while the game is running_.
- Support for git submodules and multiple compilation units for different subsystems could be a boon for projects with long compile times.
- Deal with git delta representations of tree objects. Maybe test it with Git LFS too.
- Parse git tags and other metadata.
- Check compatibility via version info, either of symbols (using GCC extensions) or of commits, using git tags.
- Git fetch/pull from within the game itself for hot-patching.
- It may be possible to do automated testing of compatible versions, giving the program the ability to recover from version-change-induced crashes. A signal handler that catches things like `SIGSEGV` and restores the `game_state` and `game_code` structures could work.
- Obviously, this is not limited to games. Any simulation software, or really any state-machine-like software, would be a good testbed for this.

# Disclaimer

Please remember that the snippets I post are a bit simplified: there's error handling and resource freeing that I left out for the sake of clarity.

@ -1,149 +0,0 @@
---
title: "Mapping Fail2ban's list of malicious IPv4 scanners"
date: 2021-08-30T23:47:11+02:00
draft: false
---

# Some context

Back when I built [my gitea server](https://git.roboces.dev/) for the first time, I noticed something strange: it would work nicely, but only for so many hours at a time. Sooner or later it would crash or stop responding for no apparent reason, leaving me scratching my head.

I had opened sshd's well-known port to the Internet with the naive impression that a server with no valid ssh login would be more than enough protection. What could possibly happen? Someone stealing my laptop and using my ssh key to push into my random dark-web repo to assert dominance?

Oh boy, was I wrong. Not even days after first exposing the ssh port to the Internet, the sheer amount of malicious traffic would make my server crash. The Chinese botnets didn't know nor care about sshd not accepting logins: they just kept trying to brute-force their way in. It is well known that there is a gigantic amount of IPv4 scanning going on, in the order of thousands of packets per day, but that is a mere fraction of what you get by exposing a well-known port. Before installing [fail2ban](https://www.fail2ban.org), my server was receiving hundreds of login attempts per second.

# Meeting the scanners

Fail2ban's method for keeping law and order is fairly straightforward: you give it a number of failed tries and an amount of time to be banned for, and it adds temporary `iptables` rules when someone has tried and failed to connect one too many times. I will be using its daily log to get a better grasp of where all the botting is coming from. Fail2ban's logfiles look something like this:

```
$ head /var/log/fail2ban.log
2021-08-29 00:00:34,222 fail2ban.server [94]: INFO rollover performed on /var/log/fail2ban.log
2021-08-29 00:01:21,092 fail2ban.actions [94]: NOTICE [sshd] Unban 186.3.164.76
2021-08-29 00:03:05,205 fail2ban.actions [94]: NOTICE [sshd] Unban 222.186.30.112
2021-08-29 00:09:41,049 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:40
2021-08-29 00:09:42,651 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:42
2021-08-29 00:09:45,665 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:45
2021-08-29 00:09:48,369 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:48
2021-08-29 00:09:51,574 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:51
2021-08-29 00:09:51,638 fail2ban.actions [94]: NOTICE [sshd] Ban 221.181.185.159
2021-08-29 00:09:53,229 fail2ban.filter [94]: INFO [sshd] Found 221.181.185.159 - 2021-08-29 00:09:53
```

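As an aside, the knobs described above (number of failed tries, ban duration) live in fail2ban's `jail.local`. A minimal sketch, with illustrative values rather than my actual settings:

```
[sshd]
enabled  = true
# failed tries before a ban kicks in
maxretry = 5
# window in which those tries must happen
findtime = 10m
# how long the temporary iptables rule lives
bantime  = 10m
```
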
The only data I am interested in is the IP addresses (and how many of them there are), so we trim the file accordingly, taking care to remove duplicates:

```
$ grep -E "\WBan" /var/log/fail2ban.log | awk '{ print $8 }' | sort --unique | tee banlog
1.116.211.170
1.117.214.250
1.15.106.44
1.15.151.58
1.15.183.51
1.15.21.246
1.179.137.10
1.226.12.132
1.53.89.181
1.85.216.176
[...]
```

Take care to use `sort --unique` instead of something like `uniq`, which only detects adjacent duplicates.

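To see the difference in isolation, here's a quick Python illustration (standalone, not part of the pipeline):

```python
import itertools

ips = ["1.2.3.4", "5.6.7.8", "1.2.3.4"]

# What plain `uniq` does: collapse *adjacent* repeats only,
# so the duplicate at the end survives
adjacent_only = [ip for ip, _ in itertools.groupby(ips)]
print(adjacent_only)  # ['1.2.3.4', '5.6.7.8', '1.2.3.4']

# What `sort --unique` does: order first, so all duplicates become adjacent
fully_deduped = sorted(set(ips))
print(fully_deduped)  # ['1.2.3.4', '5.6.7.8']
```
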
# Scanning the scanners

Now that we have their IPs, we can get a rough estimation of where the traffic is coming from. There are many online services you can use to get this data, but they won't let you do queries in bulk without charging you for some kind of database subscription. If someone knows a program that _just works_, batteries included, please tell me.

Anyhow, I ended up using [IP2Location's](https://www.ip2location.com) BIN-format database along with its [Python API](https://www.ip2location.com/development-libraries/ip2location/python). They require a free account to download their database files, but a burner email or [an alias]( {{< ref "/automating-aliases.md" >}}) will do just fine.

IP2Location's module can be installed in the usual fashion:

```
$ pip install IP2Location --user
```

With that out of the way, we can get to work. I'm not much of a pythoner myself, so I decided to make a simple .py that outputs formatted lines so I can keep using my shiny UNIX tools:

```python
#!/usr/bin/env python
import sys, IP2Location

def main():
    # Argument checking
    if len(sys.argv) < 3:
        print("Usage: ip_query.py <ips_file> <database_file>")
        return

    # Get a list of ips as trimmed strings
    with open(sys.argv[1], "r") as ips_file:
        ip_list = [line.rstrip() for line in ips_file]

    # Open connection to binary database
    database = IP2Location.IP2Location(sys.argv[2], "SHARED_MEMORY")

    # Field delimiter
    d = "~"

    for ip in ip_list:
        record = database.get_all(ip)
        # Some fields (latitude/longitude) are floats, so cast everything to str
        print(d.join(str(f) for f in (record.ip,
                                      record.country_short,
                                      record.country_long,
                                      record.region,
                                      record.city,
                                      record.latitude,
                                      record.longitude,
                                      record.zipcode,
                                      record.timezone)))

if __name__ == '__main__':
    main()
```

Depending on which database you chose, it may have more or fewer fields available. Later I will cut out what I don't need, but for now I'm dumping everything. You can use whichever delimiter you want, but I don't recommend `,` or anything else that could appear inside a country's name or timezone info.

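For what it's worth, consuming one of these `~`-delimited lines later is a single `split` away (the sample line below mirrors the output format; the values are illustrative):

```python
# One line in the format emitted by ip_query.py above
line = "1.116.211.170~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00"

fields = line.split("~")
ip, country = fields[0], fields[2]
lat, lon = float(fields[5]), float(fields[6])
print(ip, country, lat, lon)
```
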
```
$ chmod +x ip_query.py
$ ./ip_query.py banlog IP2LOCATION-LITE-DB11.BIN | tee ipstats
1.116.211.170~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.117.214.250~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.15.106.44~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.15.151.58~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.15.183.51~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.15.21.246~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00
1.179.137.10~TH~Thailand~Krung Thep Maha Nakhon~Bangkok~13.750000~100.516670~10200~+07:00
[...]
```

Now this is looking much better. I was curious about which countries were the biggest culprits, although the answer isn't much of a surprise:

```
$ cut -d'~' -f2,3 ipstats | sort | uniq -c | sort -r | head
    239 CN~China
    118 US~United States of America
     45 IN~India
     33 ID~Indonesia
     26 VN~Viet Nam
     25 NL~Netherlands
     23 SG~Singapore
     22 KR~Korea (Republic of)
     22 DE~Germany
     21 RU~Russian Federation
```

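If you'd rather stay in Python than pipe through `cut` and `uniq`, the same tally can be sketched with `collections.Counter` (the sample lines below stand in for the real `ipstats` file):

```python
from collections import Counter

# A few records in the ipstats format from above (sample data)
ipstats = [
    "1.116.211.170~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00",
    "1.15.106.44~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00",
    "1.179.137.10~TH~Thailand~Krung Thep Maha Nakhon~Bangkok~13.750000~100.516670~10200~+07:00",
]

# Field 2 (0-indexed) holds the full country name
countries = Counter(line.split("~")[2] for line in ipstats)
for country, hits in countries.most_common():
    print(hits, country)
```
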
We just made Our Very Own Top 10 Of Shame! And remember, this is _just one day's worth of logs_, from a server that barely half a dozen people use, and without even taking into account repeated offenses from the same IP. Goes to show you how crazy IPv4 scanning has gotten.

# Mapping the scanners

To top it off, I would like some sort of graphical visualization of these heinous crimes. There are some great libraries out there for plotting coordinate data onto a world map. I would consider [something like folium](https://georgetsilva.github.io/posts/mapping-points-with-folium/) if I were to do more with the Python side of this blogpost. But that's not what we're here for today. Today we're using crappy sites and copy-pasting.

```
$ cut -d'~' -f6,7 --output-delimiter=',' ipstats | xclip -selection clipboard
```

This will do just what we want. `--output-delimiter` is a handy flag that substitutes whatever your input delimiter is with a different one. Most places that let you paste coordinates in bulk require comma-separated lines, and that is exactly what we just copied to our clipboard.

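The same lat,lon extraction can also be done without `cut`, should you ever want it inside a script (the sample records below stand in for `ipstats`):

```python
# Stdlib-only version of the cut --output-delimiter step: turn '~' records
# into the comma-separated "lat,lon" lines that bulk-plotting sites expect
records = [
    "1.116.211.170~CN~China~Beijing~Beijing~39.907501~116.397232~100006~+08:00",
    "1.179.137.10~TH~Thailand~Krung Thep Maha Nakhon~Bangkok~13.750000~100.516670~10200~+07:00",
]

coords = [",".join(r.split("~")[5:7]) for r in records]
print("\n".join(coords))
```
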
We can use a place like [mapcustomizer](https://www.mapcustomizer.com) for our very first, admittedly crappy, data visualization:

![World map with markers for each banned IP](/images/scanner_map.png)

6 content/posts/test.es.md Normal file
@ -0,0 +1,6 @@
---
title: "Random post"
date: 2021-03-16
draft: false
---
This is a random test

6 content/posts/test.md Normal file
@ -0,0 +1,6 @@
---
title: "Random post"
date: 2021-03-16
draft: false
---
# This is a random test

@ -1,5 +0,0 @@
{{- .Scratch.Set "path" (.Get 0) -}}
{{- if hasPrefix (.Scratch.Get "path") "/" -}}
{{- .Scratch.Set "path" (slicestr (.Scratch.Get "path") 1) -}}
{{- end -}}
{{- .Scratch.Get "path" | absURL -}}

Binary file not shown.
Before Width: | Height: | Size: 99 KiB
Binary file not shown.
Before Width: | Height: | Size: 287 KiB