Compare commits

...

171 commits

Author SHA1 Message Date
8dbbdcb80a git checkout --theirs relay/views.py 2025-04-24 08:34:32 +08:00
Izalia Mae
b8e0641733 Merge branch 'cache' into 'master'
caching

See merge request pleroma/relay!54
2024-02-05 18:34:43 +00:00
Izalia Mae
bec5d5f207 use gunicorn to start the server 2024-02-05 13:15:08 -05:00
Izalia Mae
02ac1fa53b make sure db connection for request is open 2024-02-04 05:17:51 -05:00
Izalia Mae
2fcaea85ae create a new database connection for each request 2024-02-04 04:53:39 -05:00
Izalia Mae
e6f30ddf64 update tinysql to 0.2.4 2024-02-04 04:41:04 -05:00
Izalia Mae
64690a5c05 create new Database object for SqlCache 2024-02-04 04:40:51 -05:00
Izalia Mae
46413be2af make sure Item.updated is a datetime object if it isn't one already 2024-02-03 05:40:57 -05:00
Izalia Mae
3d81e5ef68 pass instance row to HttpClient.post 2024-02-01 21:40:27 -05:00
Izalia Mae
1668d96485 add setup questions for redis 2024-02-01 21:37:46 -05:00
Izalia Mae
4c4dd3566b cache fixes
* make sure Item.updated is a datetime object
* remove id column when creating Item objects in SqlCache
2024-02-01 11:43:17 -05:00
Izalia Mae
2d641ea183 add database and redis caching 2024-01-31 21:23:45 -05:00
Izalia Mae
f2baf7f9f9 update tinysql to 0.2.3 (fixes postgres support) 2024-01-30 07:34:49 -05:00
Izalia Mae
116a04ce4d allow empty password for database setup 2024-01-29 19:55:02 -05:00
Izalia Mae
6ab6343ae7 fix ImportError on python 3.8 2024-01-29 04:26:54 -05:00
Izalia Mae
81215a83a4 Merge branch 'sql' into 'master'
switch database backend to sql

See merge request pleroma/relay!53
2024-01-27 06:08:59 +00:00
Izalia Mae
ed9d423ca3 update tinysql and set min/max db connections per thread 2024-01-27 00:59:08 -05:00
Izalia Mae
b59ead5d05 fix handle_follow and handle_undo 2024-01-24 19:55:11 -05:00
Izalia Mae
b8aae4c1bb use correct url when fetching inbox data 2024-01-24 19:28:15 -05:00
Izalia Mae
85a4797e68 make sure bool(Response) returns True 2024-01-24 19:24:27 -05:00
Izalia Mae
c2aa8c48bb ignore thread warnings in the sqlite backend for now 2024-01-24 19:20:27 -05:00
Izalia Mae
815053c06f fix the convert command 2024-01-24 01:20:23 -05:00
Izalia Mae
e66be009a6 use the right name for the domain_bans table 2024-01-24 01:20:00 -05:00
Izalia Mae
09e7a8f404 update docs for new commands and config file 2024-01-24 00:48:15 -05:00
Izalia Mae
fc8738afab update relay.service file to use run command 2024-01-23 22:04:07 -05:00
Izalia Mae
cdb10547ec remove extra whitespace in relay.nginx 2024-01-23 22:03:44 -05:00
Izalia Mae
7a9d346642 fix linter warnings 2024-01-23 21:54:58 -05:00
Izalia Mae
485d1cd23e add plugins to pylint 2024-01-23 21:54:05 -05:00
Izalia Mae
35b3fae185 move dev requirements to dev-requirements.txt and only use flake8 for checking unused imports 2024-01-23 21:51:17 -05:00
Izalia Mae
57d7d25743 set sqlite file path relative to config path if possible 2024-01-22 06:50:31 -05:00
Izalia Mae
9cc79aa79a actually fix python packaging this time 2024-01-22 06:39:12 -05:00
Izalia Mae
8806348f95 fix python packaging 2024-01-22 06:30:28 -05:00
Izalia Mae
5f6aef1871 use postgresql/sqlite for database backend 2024-01-22 05:32:16 -05:00
Izalia Mae
9808674b98 logging: use LogLevel enum and add functions to set/get the current level 2024-01-21 06:39:49 -05:00
Izalia Mae
965ac73c6d Merge branch 'annotations' into 'master'
Add annotations and linters

See merge request pleroma/relay!52
2024-01-20 07:49:35 +00:00
Izalia Mae
3d9ba68bd1 use Message.object_id instead of Message.objectid 2024-01-16 00:33:41 -05:00
Izalia Mae
2ebb295be1 handle TypeError in Message.object_id 2024-01-16 00:33:05 -05:00
Izalia Mae
b9eb67b32d version bump 2024-01-15 23:47:23 -05:00
Izalia Mae
90a1a1e0e9 remove hidden imports 2024-01-15 23:44:54 -05:00
Izalia Mae
e6f2174ad4 don't use lazy import for aputils 2024-01-15 23:39:50 -05:00
Izalia Mae
9bf45a54d1 add annotations and fix linter warnings 2024-01-14 14:13:06 -05:00
Izalia Mae
fdef2f708c add settings for pylint and flake8 2024-01-10 10:55:37 -05:00
Izalia Mae
3005e9b370 use format strings for logging 2024-01-10 10:49:43 -05:00
Izalia Mae
dcbde6d532 rework logger to not monkey-patch the logging module 2024-01-10 10:49:05 -05:00
Izalia Mae
2c620a0d84 set minimum python version to 3.8 2024-01-09 23:43:55 -05:00
Izalia Mae
8c6ee7d57a add pylint and flake8 to dev deps 2024-01-09 23:30:43 -05:00
Izalia Mae
4feaccaa53 use View class and make Message a subclass of aputils.message.Message 2024-01-09 23:15:04 -05:00
Izalia Mae
9f3e84f9e5 ignore database file 2024-01-09 23:06:52 -05:00
Izalia Mae
d6ba242d3b update aputils to 0.1.6a 2024-01-09 23:06:23 -05:00
Izalia Mae
eea7dc81ea Merge branch 'upgrade-aputils' into 'master'
Bump aputils version

Closes #38

See merge request pleroma/relay!51
2023-07-15 14:12:56 +00:00
Dmytro Poltavchenko
df231e3b51 Bump aputils version 2023-07-10 21:52:39 +03:00
8d5b097ac4 Add donation link 2023-02-26 05:27:32 +00:00
8c49d92aea added version info 2023-02-10 01:15:20 +00:00
d06f51fca6 Upload files to "relay"
added version info
2023-02-10 01:14:32 +00:00
818a3573ae Merge pull request 'localization' (#1) from pch_xyz-patch-1 into master
Reviewed-on: #1
2023-02-10 00:54:07 +00:00
3dea5c030b localization 2023-02-10 00:51:40 +00:00
Izalia Mae
15b1324df2 Merge branch 'zen-master-patch-50595' into 'master'
Do not check instance's actor.type in case of Pleroma/Akkoma

See merge request pleroma/relay!50
2023-01-11 03:43:59 +00:00
Dmytro Poltavchenko
006efc1ba4 Do not check instance's actor.type in case of Pleroma/Akkoma 2023-01-08 00:23:36 +00:00
Izalia Mae
f4698aa4dc fix RuntimeError when running commands involving http client 2022-12-29 07:27:35 -05:00
Izalia Mae
0940921383 handle more client connection errors 2022-12-26 02:02:57 -05:00
Izalia Mae
af7fcc66fd fix missing modules when building via pyinstaller 2022-12-11 09:15:03 -05:00
Izalia Mae
bbdc151ed3 Merge branch 'dev' into 'master'
Version 0.2.4

See merge request pleroma/relay!46
2022-12-11 00:01:17 +00:00
Izalia Mae
04368c782d replace aputils git url with tar.gz 2022-12-10 02:44:07 -05:00
Izalia Mae
a742e7fb30 update setup.cfg and requirements.txt
* move deps to requirements.txt
* reference deps from requirements.txt in setup.cfg
* bump minimum python version to 3.7
* set version in setup.cfg from attribute
2022-12-10 02:10:56 -05:00
Izalia Mae
17f3e6be55 version 0.2.4 2022-12-08 04:17:17 -05:00
Izalia Mae
0e45763eff remove unnecessary config update 2022-12-08 03:53:13 -05:00
Izalia Mae
3968799d6f make sure exceptions don't bring down workers 2022-12-08 03:51:10 -05:00
Izalia Mae
aa8090eebb don't prompt for ignored settings in docker instances 2022-12-08 03:31:47 -05:00
Izalia Mae
f287b84ea3 update aputils 2022-12-07 23:23:13 -05:00
Izalia Mae
dc74bfb588 force certain config values in docker installs 2022-12-07 23:16:48 -05:00
Izalia Mae
e281a06e7f correctly call aputils.Signer.new 2022-12-07 23:15:54 -05:00
Izalia Mae
8f16cab048 prevent errors in post and fetch_nodeinfo 2022-12-07 23:15:31 -05:00
Izalia Mae
7d37ec8145 remove await from push_message calls and reject non-system actors 2022-12-04 04:40:40 -05:00
Izalia Mae
9f58c88e9f Fix NameError when getting nodeinfo software name in processors 2022-12-04 04:16:50 -05:00
Izalia Mae
6b86bb7d98 remove leftover semaphore property 2022-12-04 02:13:13 -05:00
Izalia Mae
90234a9724 move apkeys out of RelayConfig and rename relay_software_names 2022-12-04 01:20:17 -05:00
Izalia Mae
b0851c0652 remove http_debug 2022-12-04 01:15:28 -05:00
Izalia Mae
eab8a31001 document new commands 2022-12-04 01:12:58 -05:00
Izalia Mae
3b89aa5e84 sort out cli
added `whitelist import` command which adds all current inboxes to the whitelist
added `config list`
fixed a few errors
2022-12-04 01:09:45 -05:00
Izalia Mae
f7e1c6b0b8 make sure db config is a string when saving 2022-12-02 11:43:39 -05:00
Izalia Mae
dcca1eb0dc fix HttpClient fetch_nodeinfo and get 2022-12-02 00:52:15 -05:00
Izalia Mae
d5b9053f71 replace various classes with aputils classes 2022-12-02 00:50:57 -05:00
Izalia Mae
d172439fac update aputils 2022-12-02 00:11:22 -05:00
Izalia Mae
1a7abb4ecb fix distill_inboxes 2022-11-29 17:41:04 -05:00
Izalia Mae
5397bb4653 only use hs2019 for mastodon 2022-11-27 17:25:54 -05:00
Izalia Mae
a640db8f06 update list of active relay software 2022-11-26 23:41:57 -05:00
Izalia Mae
ce9e0c4d00 remove unnecessary print 2022-11-26 23:11:51 -05:00
Izalia Mae
335146a970 fix NameError in cli_setup 2022-11-26 23:01:18 -05:00
Izalia Mae
27914a7d27 Merge branch 'master' into dev 2022-11-26 22:53:43 -05:00
Izalia Mae
5d01211a34 add aputils module for hs2019 support 2022-11-26 22:16:14 -05:00
Izalia Mae
130111c847 update documentation 2022-11-26 20:53:06 -05:00
Izalia Mae
10301ecbde update example config file 2022-11-26 20:25:20 -05:00
Izalia Mae
15b314922c fix running via docker 2022-11-26 19:59:20 -05:00
Izalia Mae
b85b4ab80b create HttpClient class to avoid creating a new session every request 2022-11-26 18:56:34 -05:00
Izalia Mae
32764a1f93 make sure domain key exists for inboxes 2022-11-25 13:39:52 -05:00
Izalia Mae
fbe5746a18 fix NameError in cli_whitelist_remove 2022-11-25 13:29:45 -05:00
Izalia Mae
017363ecd5 fix nodeinfo fetching in run_processor 2022-11-25 13:19:29 -05:00
Izalia Mae
8541f63762 add timeout option to misc.request 2022-11-24 16:01:23 -05:00
Izalia Mae
ca36a765ea Merge branch 'fish-master-patch-76139' into 'master'
fix host check in setup

See merge request pleroma/relay!43
2022-11-24 06:24:01 +00:00
GQ Qin
6a3a35182e fix host check in setup 2022-11-23 03:46:34 +00:00
Izalia Mae
da56d4bb61 add extra logging in misc.request 2022-11-22 18:11:41 -05:00
Izalia Mae
a838e4324b fix NameError in inbox 2022-11-22 18:09:25 -05:00
Izalia Mae
242052386e use correct actor variable for cli_inbox_follow 2022-11-20 22:24:36 -05:00
Izalia Mae
395971914b organize manage.py 2022-11-20 06:24:33 -05:00
Izalia Mae
c96640bfd7 add config cli commands 2022-11-20 06:14:37 -05:00
Izalia Mae
9839da906c add optional push worker threads 2022-11-20 05:50:14 -05:00
Izalia Mae
c049657765 fetch nodeinfo software name on inbox request instead of startup 2022-11-20 05:22:57 -05:00
Izalia Mae
ffe14bead3 ignore account Deletes 2022-11-20 05:12:11 -05:00
Izalia Mae
85c4df7d8c remove unecessary method 2022-11-18 16:57:34 -05:00
Izalia Mae
ba9f2718aa use new request properties and only fetch nodeinfo on follow 2022-11-18 16:41:14 -05:00
Izalia Mae
4a8a8da740 add software kwarg to RelayDatabase.add_inbox 2022-11-18 16:39:53 -05:00
Izalia Mae
306b526808 add properties to aiohttp.web.Request 2022-11-18 16:38:39 -05:00
Izalia Mae
4ea6a040fb optimize RelayDatabase.get_inbox 2022-11-18 14:36:30 -05:00
Izalia Mae
9369b598fa add software name for inboxes 2022-11-18 14:10:39 -05:00
Izalia Mae
d4955828d4 return Nodeinfo object from fetch_nodeinfo 2022-11-18 13:45:26 -05:00
Izalia Mae
6e494ee671 Merge branch 'dev' into 'master'
v0.2.3

See merge request pleroma/relay!42
2022-11-18 17:39:31 +00:00
Izalia Mae
22b6e6b406 cleanup 2022-11-18 11:58:27 -05:00
Izalia Mae
6960c8d6c0 views.webfinger: return 400 error on missing resource 2022-11-18 11:50:12 -05:00
Izalia Mae
2b2e311be4 update example config 2022-11-18 06:26:58 -05:00
Izalia Mae
d08bd6625a use signature keyid instead of object actor to fetch actor 2022-11-17 16:30:56 -05:00
Izalia Mae
d2b243d88a await misc.request in handle_follow 2022-11-16 14:22:50 -05:00
Izalia Mae
e3b06d29ab ignore signals that don't exist 2022-11-16 13:26:47 -05:00
Izalia Mae
b87e52347b add spec file for building with pyinstaller 2022-11-16 11:18:31 -05:00
Izalia Mae
ef5d4bc579 only fetch commit hash if in running from git repo 2022-11-16 10:41:00 -05:00
Izalia Mae
8fd712c849 always fetch nodeinfo software name 2022-11-16 10:31:21 -05:00
Izalia Mae
c88e4e748a version bump 2022-11-16 10:26:08 -05:00
Izalia Mae
d615380610 Merge branch 'dev' of ssh://pleroma/pleroma/relay into dev 2022-11-16 10:24:35 -05:00
Izalia Mae
689fa1f8b4 Merge branch 'fix-newerror' into 'dev'
Fix Response.new_error

See merge request pleroma/relay!40
2022-11-16 15:23:58 +00:00
Izalia Mae
ec325f9f08 skip raising a KeyError on missing actor 2022-11-16 09:12:23 -05:00
Izalia Mae
4bdd2b031b prevent error in inbox 2022-11-16 09:10:52 -05:00
Jeong Arm
e6d7c60a5a Fix Response.new_error 2022-11-13 15:00:53 +09:00
Izalia Mae
7732a860e9 use right variable for inbox 2022-11-10 13:08:25 -05:00
Izalia Mae
3305a25da4 create View class and fix Response.new_error 2022-11-10 12:40:48 -05:00
Izalia Mae
c1c4b24b0a add ability to change cache size 2022-11-10 12:39:37 -05:00
Izalia Mae
f397e10b04 reset config on load 2022-11-10 12:38:08 -05:00
Izalia Mae
78ce1763e0 fix a couple nodeinfo values 2022-11-09 06:11:16 -05:00
Izalia Mae
ff95a3033d create Response class 2022-11-09 05:58:35 -05:00
Izalia Mae
6af9c8e6fe add follow request management methods to database 2022-11-09 04:54:46 -05:00
Izalia Mae
0b9281bec1 make sure sub-dicts in DotDict are DotDict objects 2022-11-09 04:35:57 -05:00
Izalia Mae
76476d1d03 add missing import 2022-11-07 16:14:00 -05:00
Izalia Mae
b275b7cd0b remove (un)follow_remote_actor 2022-11-07 09:53:04 -05:00
Izalia Mae
58ebefa3bd fix WKNodeinfo.get_url 2022-11-07 08:24:03 -05:00
Izalia Mae
e3bf4258aa create WKNodeinfo class and add nodeinfo 2.1 path 2022-11-07 08:18:25 -05:00
Izalia Mae
8d17749a50 create Application class 2022-11-07 07:54:32 -05:00
Izalia Mae
70e4870ba9 remove run_in_loop function 2022-11-07 05:40:08 -05:00
Izalia Mae
c66f9d34b3 create Message class 2022-11-07 05:30:13 -05:00
Izalia Mae
3b85e2c2f2 move DotDict to misc 2022-11-06 01:11:54 -05:00
Izalia Mae
f713f54306 announce forwarded messages 2022-11-06 01:11:36 -05:00
Izalia Mae
dcb7980c50 prevent old unfollows from booting instances 2022-11-05 22:15:37 -04:00
Izalia Mae
4d121adaa2 forward all non-Follow undos 2022-11-05 20:15:40 -04:00
Izalia Mae
c0d55cebb0 cache activity id for forwards 2022-11-05 20:10:01 -04:00
Izalia Mae
8ca198b611 simplify misc.request 2022-11-05 20:07:44 -04:00
Izalia Mae
729477820f Merge branch 'dev' into 'master'
v0.2.2

See merge request pleroma/relay!37
2022-08-26 15:21:56 +00:00
Izalia Mae
b6f311c42d version bump 2022-08-12 16:38:33 -04:00
Izalia Mae
6fcaf47f39 fix debug logging for distill_object_id 2022-08-12 15:43:24 -04:00
Izalia Mae
59a05224ff add LOG_FILE env var 2022-08-12 03:22:30 -04:00
Izalia Mae
fc7de1a3bc use proper accept header for nodeinfo fetching 2022-08-12 02:36:08 -04:00
Izalia Mae
a2b0b2f548 Merge branch 'master' into 'master'
version bump

See merge request pleroma/relay!36
2022-06-06 12:40:03 +00:00
Izalia Mae
d93880d83f Merge branch 'master' of https://git.pleroma.social/pleroma/relay 2022-06-06 08:37:49 -04:00
Izalia Mae
4fc5d692bf exclude git hash if unavailable 2022-06-06 08:37:08 -04:00
Izalia Mae
4847c2bbc0 version bump 2022-06-06 08:29:06 -04:00
Izalia Mae
454f46f04b Merge branch 'master' into 'master'
Bugfixes and documentation clarification

See merge request pleroma/relay!35
2022-05-07 23:12:16 +00:00
Izalia Mae
d005ff8f48 add/remove inbox on cli inbox (un)follow 2022-05-06 18:08:55 -04:00
Izalia Mae
1c3b1b39e6 docs: don't assume activityrelay is in PATH 2022-05-06 17:23:44 -04:00
Izalia Mae
c24a0ce6d5 fix actor unfollowing and simplify (un)following 2022-05-06 16:10:34 -04:00
Izalia Mae
169c7af822 Merge branch 'fix' into 'master'
fixes

See merge request pleroma/relay!34
2022-05-06 08:27:31 +00:00
Izalia Mae
30a3b92f26 fixes 2022-05-06 04:12:12 -04:00
Izalia Mae
7eac3609a6 Merge branch 'rework' into 'master'
Reorganize codebase

See merge request pleroma/relay!33
2022-05-06 07:04:51 +00:00
Izalia Mae
b6494849b5 Reorganize codebase 2022-05-06 07:04:51 +00:00
Izalia Mae
5c5f212d70 Merge branch 'asyncio-3.10' into 'master'
fix DeprecationWarnings on 3.10

See merge request pleroma/relay!32
2021-11-05 19:18:55 +00:00
Joel Beckmeyer
738e0a999f fix DeprecationWarnings on 3.10 2021-10-16 08:58:49 -04:00
44 changed files with 4236 additions and 1103 deletions

9
.gitignore

@@ -94,8 +94,7 @@ ENV/
# Rope project settings
.ropeproject
viera.yaml
viera.jsonld
# config file
relay.yaml
# config and database
*.yaml
*.jsonld
*.sqlite3

Dockerfile

@@ -1,11 +1,28 @@
FROM python:3-alpine
WORKDIR /workdir
# install build deps for pycryptodome and other c-based python modules
RUN apk add alpine-sdk autoconf automake libtool gcc
ADD requirements.txt /workdir/
RUN pip3 install -r requirements.txt
# add env var to let the relay know it's in a container
ENV DOCKER_RUNNING=true
ADD . /workdir/
# setup various container properties
VOLUME ["/data"]
CMD ["python", "-m", "relay"]
EXPOSE 8080/tcp
WORKDIR /opt/activityrelay
VOLUME ["/workdir/data"]
# install and update important python modules
RUN pip3 install -U setuptools wheel pip
# only copy necessary files
COPY relay ./relay
COPY LICENSE .
COPY README.md .
COPY requirements.txt .
COPY setup.cfg .
COPY setup.py .
COPY .git ./.git
# install relay deps
RUN pip3 install -r requirements.txt

1
MANIFEST.in Normal file

@@ -0,0 +1 @@
include data/statements.sql

README.md

@@ -10,72 +10,14 @@ Affero General Public License version 3 (AGPLv3) license. You can find a copy o
in this package as the `LICENSE` file.
## Setup
You need at least Python 3.6 (latest version of 3.x recommended) to make use of this software.
It simply will not run on older Python versions.
Download the project and install with pip (`pip3 install .`).
Copy `relay.yaml.example` to `relay.yaml` and edit it as appropriate:
$ cp relay.yaml.example relay.yaml
$ $EDITOR relay.yaml
Finally, you can launch the relay:
$ python3 -m relay
It is suggested to run this under some sort of supervisor, such as runit, daemontools,
s6 or systemd. Configuration of the supervisor is not covered here, as it is different
depending on which system you have available.
The bot runs a webserver, internally, on localhost at port 8080. This needs to be
forwarded by nginx or similar. The webserver is used to receive ActivityPub messages,
and needs to be secured with an SSL certificate inside nginx or similar. Configuration
of your webserver is not discussed here, but any guide explaining how to configure a
modern non-PHP web application should cover it.
## Getting Started
Normally, you would direct your LitePub instance software to follow the LitePub actor
found on the relay. In Pleroma this would be something like:
$ MIX_ENV=prod mix relay_follow https://your.relay.hostname/actor
Mastodon uses an entirely different relay protocol but supports LitePub relay protocol
as well when the Mastodon relay handshake is used. In these cases, Mastodon relay
clients should follow `http://your.relay.hostname/inbox` as they would with Mastodon's
own relay software.
$ MIX_ENV=prod mix relay_follow https://your.relay.hostname/actor
## Performance
## Documentation
Performance is very good, with all data being stored in memory and serialized to a
JSON-LD object graph. Worker coroutines are spawned in the background to distribute
the messages in a scatter-gather pattern. Performance is comparable to, if not
superior to, the Mastodon relay software, with improved memory efficiency.
## Management
You can perform a few management tasks such as peering or depeering other relays by
invoking the `relay.manage` module.
This will show the available management tasks:
$ python3 -m relay.manage
When following remote relays, you should use the `/actor` endpoint as you would in
Pleroma and other LitePub-compliant software.
## Docker
You can run ActivityRelay with docker. Edit `relay.yaml` so that the database
location is set to `./data/relay.jsonld` and then build and run the docker
image:
$ docker volume create activityrelay-data
$ docker build -t activityrelay .
$ docker run -d -p 8080:8080 -v activityrelay-data:/workdir/data activityrelay
To install or manage your relay, check the [documentation](docs/index.md)

3
dev-requirements.txt Normal file

@@ -0,0 +1,3 @@
flake8 == 7.0.0
pyinstaller == 6.3.0
pylint == 3.0

66
docker.sh Executable file

@@ -0,0 +1,66 @@
#!/usr/bin/env bash
case $1 in
install)
docker build -f Dockerfile -t activityrelay . && \
docker volume create activityrelay-data && \
docker run -it -p 8080:8080 -v activityrelay-data:/data --name activityrelay activityrelay
;;
uninstall)
docker stop activityrelay && \
docker container rm activityrelay && \
docker volume rm activityrelay-data && \
docker image rm activityrelay
;;
start)
docker start activityrelay
;;
stop)
docker stop activityrelay
;;
manage)
shift
docker exec -it activityrelay python3 -m relay "$@"
;;
shell)
docker exec -it activityrelay bash
;;
rescue)
docker run -it --rm --entrypoint bash -v activityrelay-data:/data activityrelay
;;
edit)
if [ -z "${EDITOR}" ]; then
echo "EDITOR environment variable not set"
exit
fi
CONFIG="/tmp/relay-$(date +"%T").yaml"
docker cp activityrelay:/data/relay.yaml $CONFIG && \
$EDITOR $CONFIG && \
docker cp $CONFIG activityrelay:/data/relay.yaml && \
rm $CONFIG
;;
*)
COLS="%-22s %s\n"
echo "Valid commands:"
printf "$COLS" "- start" "Run the relay in the background"
printf "$COLS" "- stop" "Stop the relay"
printf "$COLS" "- manage <cmd> [args]" "Run a relay management command"
printf "$COLS" "- edit" "Edit the relay's config in \$EDITOR"
printf "$COLS" "- shell" "Drop into a bash shell on the running container"
printf "$COLS" "- rescue" "Drop into a bash shell on a temp container with the data volume mounted"
printf "$COLS" "- install" "Build the image, create a new container and volume, and run relay setup"
printf "$COLS" "- uninstall" "Delete the relay image, container, and volume"
;;
esac

217
docs/commands.md Normal file

@@ -0,0 +1,217 @@
# Commands
There are a number of commands to manage your relay's database and config. You can add `--help` to
any category or command to get help on that specific option (ex. `activityrelay inbox --help`).
Note: `activityrelay` is only available via pip or pipx if `~/.local/bin` is in `$PATH`. If not,
use `python3 -m relay` if installed via pip or `~/.local/bin/activityrelay` if installed via pipx.
## Run
Run the relay.
activityrelay run
## Setup
Run the setup wizard to configure your relay.
activityrelay setup
## Convert
Convert the old config and jsonld to the new config and SQL backend. If the old config filename is
not specified, the config will get backed up as `relay.backup.yaml` before converting.
activityrelay convert --old-config relaycfg.yaml
## Edit Config
Open the config file in a text editor. If an editor is not specified with `--editor`, the default
editor will be used.
activityrelay edit-config --editor micro
## Config
Manage the relay config
activityrelay config
### List
List the current config key/value pairs
activityrelay config list
### Set
Set a value for a config option
activityrelay config set <key> <value>
## Inbox
Manage the list of subscribed instances.
### List
List the currently subscribed instances or relays.
activityrelay inbox list
### Add
Add an inbox to the database. If a domain is specified, it will default to `https://{domain}/inbox`.
If the added instance is not following the relay, expect errors when pushing messages.
activityrelay inbox add <inbox or domain>
### Remove
Remove an inbox from the database. An inbox or domain can be specified.
activityrelay inbox remove <inbox or domain>
### Follow
Follow an instance or relay actor and add it to the database. If a domain is specified, it will
default to `https://{domain}/actor`.
activityrelay inbox follow <actor or domain>
Note: The relay must be running for this command to work.
### Unfollow
Unfollow an instance or relay actor and remove it from the database. If the instance or relay does
not exist anymore, use the `inbox remove` command instead.
activityrelay inbox unfollow <domain, actor, or inbox>
Note: The relay must be running for this command to work.
## Whitelist
Manage the whitelisted domains.
### List
List the current whitelist.
activityrelay whitelist list
### Add
Add a domain to the whitelist.
activityrelay whitelist add <domain>
### Remove
Remove a domain from the whitelist.
activityrelay whitelist remove <domain>
### Import
Add all current inboxes to the whitelist.
activityrelay whitelist import
## Instance
Manage the instance ban list.
### List
List the currently banned instances.
activityrelay instance list
### Ban
Add an instance to the ban list. If the instance is currently subscribed, it will be removed from
the inbox list.
activityrelay instance ban <domain>
### Unban
Remove an instance from the ban list.
activityrelay instance unban <domain>
### Update
Update the ban reason or note for an instance ban.
activityrelay instance update bad.example.com --reason "the baddest reason"
## Software
Manage the software ban list. To get the correct name, check the software's nodeinfo endpoint.
You can find it at `nodeinfo['software']['name']`.
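A minimal sketch of that lookup, assuming `aiohttp` is available (the relay itself does the same well-known walk in its `fetch_nodeinfo` helper):

```python
from __future__ import annotations

import aiohttp

async def software_name(domain: str) -> str | None:
	# walk .well-known/nodeinfo to the nodeinfo 2.0 schema document
	async with aiohttp.ClientSession() as session:
		async with session.get(f'https://{domain}/.well-known/nodeinfo') as resp:
			wellknown = await resp.json(content_type = None)

		for link in wellknown.get('links', []):
			if link['rel'] == 'http://nodeinfo.diaspora.software/ns/schema/2.0':
				async with session.get(link['href']) as resp:
					nodeinfo = await resp.json(content_type = None)

				return nodeinfo['software']['name']

	return None
```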
### List
List the currently banned software.
activityrelay software list
### Ban
Add a software name to the ban list.
If `-f` or `--fetch-nodeinfo` is set, treat the name as a domain and try to fetch the software
name via nodeinfo.
If the name is `RELAYS` (case-sensitive), add all known relay software names to the list.
activityrelay software ban [-f/--fetch-nodeinfo] <name, domain, or RELAYS>
### Unban
Remove a software name from the ban list.
If `-f` or `--fetch-nodeinfo` is set, treat the name as a domain and try to fetch the software
name via nodeinfo.
If the name is `RELAYS` (case-sensitive), remove all known relay software names from the list.
activityrelay software unban [-f/--fetch-nodeinfo] <name, domain, or RELAYS>
### Update
Update the ban reason or note for a software ban. At least one of `--reason` or `--note` must be
specified.
activityrelay software update relay.example.com --reason "begone relay"

137
docs/configuration.md Normal file

@@ -0,0 +1,137 @@
# Configuration
## General
### Domain
Hostname the relay will be hosted on.
domain: relay.example.com
### Listener
The address and port the relay will listen on. If the reverse proxy (nginx, apache, caddy, etc)
is running on the same host, it is recommended to change `listen` to `localhost`.
listen: 0.0.0.0
port: 8080
### Push Workers
The relay can be configured to use threads to push messages out. For smaller relays, this isn't
necessary, but bigger ones (>100 instances) will want to set this to the number of available cpu
threads.
workers: 0
### Database type
SQL database backend to use. Valid values are `sqlite` or `postgres`.
database_type: sqlite
### Cache type
Cache backend to use. Valid values are `database` or `redis`.
cache_type: database
### Sqlite File Path
Path to the sqlite database file. If the path is not absolute, it is relative to the config file
directory.
sqlite_path: relay.sqlite3
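A minimal sketch of that resolution rule (a hypothetical helper, not the relay's actual code):

```python
from pathlib import Path

def resolve_sqlite_path(config_path: str, sqlite_path: str) -> Path:
	# hypothetical helper: absolute paths are kept as-is, relative paths
	# resolve against the directory containing the config file
	path = Path(sqlite_path).expanduser()

	if path.is_absolute():
		return path

	return (Path(config_path).parent / path).resolve()
```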
## Postgresql
In order to use the Postgresql backend, the user and database need to be created first.
sudo -u postgres psql -c "CREATE USER activityrelay WITH PASSWORD SomeSecurePassword"
sudo -u postgres psql -c "CREATE DATABASE activityrelay OWNER activityrelay"
### Database Name
Name of the database to use.
name: activityrelay
### Host
Hostname, IP address, or unix socket the server is hosted on.
host: /var/run/postgresql
### Port
Port number the server is listening on.
port: 5432
### Username
User to use when logging into the server.
user: null
### Password
Password for the specified user.
pass: null
## Redis
### Host
Hostname, IP address, or unix socket the server is hosted on.
host: localhost
### Port
Port number the server is listening on.
port: 6379
### Username
User to use when logging into the server.
user: null
### Password
Password for the specified user.
pass: null
### Database Number
Number of the database to use.
database: 0
### Prefix
Text to prefix every key with. It cannot contain a `:` character.
prefix: activityrelay
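The prefix, namespace, and key are joined with `:` separators (mirroring `RedisCache.get_key_name` in `relay/cache.py` further down this diff), which is why the prefix itself cannot contain one:

```python
# mirrors RedisCache.get_key_name: a ':' inside the prefix would break
# splitting the stored key name back into its parts
def key_name(prefix: str, namespace: str, key: str) -> str:
	return f'{prefix}:{namespace}:{key}'

assert key_name('activityrelay', 'nodeinfo', 'example.com') == 'activityrelay:nodeinfo:example.com'
```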

9
docs/index.md Normal file

@@ -0,0 +1,9 @@
# ActivityRelay Documentation
ActivityRelay is a small ActivityPub server that relays messages to subscribed instances.
[Installation](installation.md)
[Configuration](configuration.md)
[Commands](commands.md)

67
docs/installation.md Normal file

@@ -0,0 +1,67 @@
# Installation
There are a few ways to install ActivityRelay. Follow one of the methods below, set up a reverse
proxy, and set up the relay to run via a supervisor. Example configs for caddy, nginx, and systemd
can be found in `installation/`.
## Pipx
Pipx uses pip and a custom venv implementation to automatically install modules into a Python
environment and is the recommended method. Install pipx if it isn't installed already. Check out
the [official pipx docs](https://pypa.github.io/pipx/installation/) for more in-depth instructions.
python3 -m pip install pipx
Now simply install ActivityRelay directly from git
pipx install git+https://git.pleroma.social/pleroma/relay@0.2.5
Or from a cloned git repo.
pipx install .
Once finished, you can set up the relay via the setup command. It will ask a few questions to fill
out config options for your relay
~/.local/bin/activityrelay setup
Finally start it up with the run command.
~/.local/bin/activityrelay run
Note: Pipx requires python 3.7+. If your distro doesn't have a compatible version of python, it can
be installed via [pyenv](https://github.com/pyenv/pyenv).
## Pip
The instructions for installation via pip are very similar to pipx. Installation can be done from
git
python3 -m pip install git+https://git.pleroma.social/pleroma/relay@0.2.5
or a cloned git repo.
python3 -m pip install .
Now run the configuration wizard
python3 -m relay setup
And start the relay when finished
python3 -m relay run
## Docker
Installation and management via Docker can be handled with the `docker.sh` script. To install
ActivityRelay, run the install command. Once the image is built and the container is created,
you will be asked to fill out some config options for your relay.
./docker.sh install
Finally start it up. It will be listening on TCP port 8080.
./docker.sh start

relay.nginx

@@ -28,14 +28,14 @@ server {
# logging, mostly for debug purposes. Disable if you wish.
access_log /srv/www/relay.<yourdomain>/logs/access.log;
error_log /srv/www/relay.<yourdomain>/logs/error.log;
ssl_protocols TLSv1.2;
ssl_ciphers EECDH+AESGCM:EECDH+AES;
ssl_ecdh_curve secp384r1;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
# ssl certs.
# ssl certs.
ssl_certificate /usr/local/etc/letsencrypt/live/relay.<yourdomain>/fullchain.pem;
ssl_certificate_key /usr/local/etc/letsencrypt/live/relay.<yourdomain>/privkey.pem;
@@ -48,7 +48,7 @@ server {
# sts, change if you care.
# add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
# uncomment this to use a static page in your webroot for your root page.
#location = / {
# index index.html;

relay.service

@@ -3,7 +3,7 @@ Description=ActivityPub Relay
[Service]
WorkingDirectory=/home/relay/relay
ExecStart=/usr/bin/python3 -m relay
ExecStart=/usr/bin/python3 -m relay run
[Install]
WantedBy=multi-user.target

pyproject.toml

@@ -1,3 +1,56 @@
[build-system]
requires = ["setuptools","wheel"]
build-backend = 'setuptools.build_meta'
[tool.pylint.main]
jobs = 0
persistent = true
load-plugins = [
"pylint.extensions.code_style",
"pylint.extensions.comparison_placement",
"pylint.extensions.confusing_elif",
"pylint.extensions.for_any_all",
"pylint.extensions.consider_ternary_expression",
"pylint.extensions.bad_builtin",
"pylint.extensions.dict_init_mutate",
"pylint.extensions.check_elif",
"pylint.extensions.empty_comment",
"pylint.extensions.private_import",
"pylint.extensions.redefined_variable_type",
"pylint.extensions.no_self_use",
"pylint.extensions.overlapping_exceptions",
"pylint.extensions.set_membership",
"pylint.extensions.typing"
]
[tool.pylint.design]
max-args = 10
max-attributes = 100
[tool.pylint.format]
indent-str = "\t"
indent-after-paren = 1
max-line-length = 100
single-line-if-stmt = true
[tool.pylint.messages_control]
disable = [
"fixme",
"broad-exception-caught",
"cyclic-import",
"global-statement",
"invalid-name",
"missing-module-docstring",
"too-few-public-methods",
"too-many-public-methods",
"too-many-return-statements",
"wrong-import-order",
"missing-function-docstring",
"missing-class-docstring",
"consider-using-namedtuple-or-dataclass",
"confusing-consecutive-elif"
]

48
relay.spec Normal file

@@ -0,0 +1,48 @@
# -*- mode: python ; coding: utf-8 -*-
block_cipher = None
a = Analysis(
['relay/__main__.py'],
pathex=[],
binaries=[],
datas=[
('relay/data', 'relay/data')
],
hiddenimports=[],
hookspath=[],
hooksconfig={},
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False,
)
pyz = PYZ(a.pure, a.zipped_data, cipher=block_cipher)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='activityrelay',
icon=None,
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
console=True,
disable_windowed_traceback=False,
argv_emulation=False,
target_arch=None,
codesign_identity=None,
entitlements_file=None,
)

relay.yaml.example

@@ -1,30 +1,59 @@
# this is the path that the object graph will get dumped to (in JSON-LD format),
# you probably shouldn't change it, but you can if you want.
db: relay.jsonld
# [string] Domain the relay will be hosted on
domain: relay.example.com
# Listener
# [string] Address the relay will listen on
listen: 0.0.0.0
# [integer] Port the relay will listen on
port: 8080
# Note
note: "Make a note about your instance here."
# [integer] Number of push workers to start
workers: 8
# this section is for ActivityPub
ap:
# this is used for generating activitypub messages, as well as instructions for
# linking AP identities. it should be an SSL-enabled domain reachable by https.
host: 'relay.example.com'
blocked_instances:
- 'bad-instance.example.com'
- 'another-bad-instance.example.com'
whitelist_enabled: false
whitelist:
- 'good-instance.example.com'
- 'another.good-instance.example.com'
# uncomment the lines below to prevent certain activitypub software from posting
# to the relay (all known relays by default). this uses the software name in nodeinfo
#blocked_software:
#- 'activityrelay'
#- 'aoderelay'
#- 'social.seattle.wa.us-relay'
#- 'unciarelay'
# [string] Database backend to use. Valid values: sqlite, postgres
database_type: sqlite
# [string] Cache backend to use. Valid values: database, redis
cache_type: database
# [string] Path to the sqlite database file if the sqlite backend is in use
sqlite_path: relay.sqlite3
# settings for the postgresql backend
postgres:
# [string] hostname or unix socket to connect to
host: /var/run/postgresql
# [integer] port of the server
port: 5432
# [string] username to use when logging into the server (default is the current system username)
user: null
# [string] password of the user
pass: null
# [string] name of the database to use
name: activityrelay
# settings for the redis caching backend
redis:
# [string] hostname or unix socket to connect to
host: localhost
# [integer] port of the server
port: 6379
# [string] username to use when logging into the server
user: null
# [string] password for the server
pass: null
# [integer] database number to use
database: 0
# [string] prefix for keys
prefix: activityrelay

relay/__init__.py

@@ -1,58 +1 @@
from . import logging
import asyncio
import aiohttp
import aiohttp.web
import yaml
import argparse
parser = argparse.ArgumentParser(
description="A generic LitePub relay (works with all LitePub consumers and Mastodon).",
prog="python -m relay")
parser.add_argument("-c", "--config", type=str, default="relay.yaml",
metavar="<path>", help="the path to your config file")
args = parser.parse_args()
def load_config():
with open(args.config) as f:
options = {}
## Prevent a warning message for pyyaml 5.1+
if getattr(yaml, 'FullLoader', None):
options['Loader'] = yaml.FullLoader
yaml_file = yaml.load(f, **options)
config = {
'db': yaml_file.get('db', 'relay.jsonld'),
'listen': yaml_file.get('listen', '0.0.0.0'),
'port': int(yaml_file.get('port', 8080)),
'note': yaml_file.get('note', 'Make a note about your instance here.'),
'ap': {
'blocked_software': [v.lower() for v in yaml_file['ap'].get('blocked_software', [])],
'blocked_instances': yaml_file['ap'].get('blocked_instances', []),
'host': yaml_file['ap'].get('host', 'localhost'),
'whitelist': yaml_file['ap'].get('whitelist', []),
'whitelist_enabled': yaml_file['ap'].get('whitelist_enabled', False)
}
}
return config
CONFIG = load_config()
from .http_signatures import http_signatures_middleware
app = aiohttp.web.Application(middlewares=[
http_signatures_middleware
])
from . import database
from . import actor
from . import webfinger
from . import default
from . import nodeinfo
from . import http_stats
__version__ = '0.2.5'

relay/__main__.py

@@ -1,55 +1,5 @@
import asyncio
import aiohttp.web
import logging
import platform
import sys
import Crypto
import time
from . import app, CONFIG
def crypto_check():
vers_split = platform.python_version().split('.')
pip_command = 'pip3 uninstall pycrypto && pip3 install pycryptodome'
if Crypto.__version__ != '2.6.1':
return
if int(vers_split[1]) > 7 and Crypto.__version__ == '2.6.1':
logging.error('PyCrypto is broken on Python 3.8+. Please replace it with pycryptodome before running again. Exiting in 10 sec...')
logging.error(pip_command)
time.sleep(10)
sys.exit()
else:
logging.warning('PyCrypto is old and should be replaced with pycryptodome')
logging.warning(pip_command)
async def start_webserver():
runner = aiohttp.web.AppRunner(app)
await runner.setup()
try:
listen = CONFIG['listen']
except:
listen = 'localhost'
try:
port = CONFIG['port']
except:
port = 8080
logging.info('Starting webserver at {listen}:{port}'.format(listen=listen,port=port))
site = aiohttp.web.TCPSite(runner, listen, port)
await site.start()
def main():
loop = asyncio.get_event_loop()
asyncio.ensure_future(start_webserver())
loop.run_forever()
from relay.manage import main
if __name__ == '__main__':
crypto_check()
main()
main()

relay/actor.py

@@ -1,347 +0,0 @@
import aiohttp
import aiohttp.web
import asyncio
import logging
import uuid
import re
import simplejson as json
import cgi
import datetime
from urllib.parse import urlsplit
from Crypto.PublicKey import RSA
from cachetools import LFUCache
from . import app, CONFIG
from .database import DATABASE
from .http_debug import http_debug
from .remote_actor import fetch_actor
from .http_signatures import sign_headers, generate_body_digest
# generate actor keys if not present
if "actorKeys" not in DATABASE:
logging.info("No actor keys present, generating 4096-bit RSA keypair.")
privkey = RSA.generate(4096)
pubkey = privkey.publickey()
DATABASE["actorKeys"] = {
"publicKey": pubkey.exportKey('PEM').decode('utf-8'),
"privateKey": privkey.exportKey('PEM').decode('utf-8')
}
PRIVKEY = RSA.importKey(DATABASE["actorKeys"]["privateKey"])
PUBKEY = PRIVKEY.publickey()
AP_CONFIG = CONFIG['ap']
CACHE_SIZE = CONFIG.get('cache-size', 16384)
CACHE = LFUCache(CACHE_SIZE)
sem = asyncio.Semaphore(500)
async def actor(request):
data = {
"@context": "https://www.w3.org/ns/activitystreams",
"endpoints": {
"sharedInbox": "https://{}/inbox".format(request.host)
},
"followers": "https://{}/followers".format(request.host),
"following": "https://{}/following".format(request.host),
"inbox": "https://{}/inbox".format(request.host),
"name": "ActivityRelay",
"type": "Application",
"id": "https://{}/actor".format(request.host),
"publicKey": {
"id": "https://{}/actor#main-key".format(request.host),
"owner": "https://{}/actor".format(request.host),
"publicKeyPem": DATABASE["actorKeys"]["publicKey"]
},
"summary": "ActivityRelay bot",
"preferredUsername": "relay",
"url": "https://{}/actor".format(request.host)
}
return aiohttp.web.json_response(data, content_type='application/activity+json')
app.router.add_get('/actor', actor)
get_actor_inbox = lambda actor: actor.get('endpoints', {}).get('sharedInbox', actor['inbox'])
async def push_message_to_actor(actor, message, our_key_id):
inbox = get_actor_inbox(actor)
url = urlsplit(inbox)
# XXX: Digest
data = json.dumps(message)
headers = {
'(request-target)': 'post {}'.format(url.path),
'Content-Length': str(len(data)),
'Content-Type': 'application/activity+json',
'User-Agent': 'ActivityRelay',
'Host': url.netloc,
'Digest': 'SHA-256={}'.format(generate_body_digest(data)),
'Date': datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
}
headers['signature'] = sign_headers(headers, PRIVKEY, our_key_id)
headers.pop('(request-target)')
headers.pop('Host')
logging.debug('%r >> %r', inbox, message)
global sem
async with sem:
try:
async with aiohttp.ClientSession(trace_configs=[http_debug()]) as session:
async with session.post(inbox, data=data, headers=headers) as resp:
if resp.status == 202:
return
resp_payload = await resp.text()
logging.debug('%r >> resp %r', inbox, resp_payload)
except Exception as e:
logging.info('Caught %r while pushing to %r.', e, inbox)
async def fetch_nodeinfo(domain):
headers = {'Accept': 'application/json'}
nodeinfo_url = None
wk_nodeinfo = await fetch_actor(f'https://{domain}/.well-known/nodeinfo', headers=headers)
if not wk_nodeinfo:
return
for link in wk_nodeinfo.get('links', ''):
if link['rel'] == 'http://nodeinfo.diaspora.software/ns/schema/2.0':
nodeinfo_url = link['href']
break
if not nodeinfo_url:
return
nodeinfo_data = await fetch_actor(nodeinfo_url, headers=headers)
software = nodeinfo_data.get('software')
return software.get('name') if software else None
async def follow_remote_actor(actor_uri):
actor = await fetch_actor(actor_uri)
if not actor:
logging.info('failed to fetch actor at: %r', actor_uri)
return
if AP_CONFIG['whitelist_enabled'] is True and urlsplit(actor_uri).hostname not in AP_CONFIG['whitelist']:
logging.info('refusing to follow non-whitelisted actor: %r', actor_uri)
return
logging.info('following: %r', actor_uri)
message = {
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Follow",
"to": [actor['id']],
"object": actor['id'],
"id": "https://{}/activities/{}".format(AP_CONFIG['host'], uuid.uuid4()),
"actor": "https://{}/actor".format(AP_CONFIG['host'])
}
await push_message_to_actor(actor, message, "https://{}/actor#main-key".format(AP_CONFIG['host']))
async def unfollow_remote_actor(actor_uri):
actor = await fetch_actor(actor_uri)
if not actor:
logging.info('failed to fetch actor at: %r', actor_uri)
return
logging.info('unfollowing: %r', actor_uri)
message = {
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Undo",
"to": [actor['id']],
"object": {
"type": "Follow",
"object": actor_uri,
"actor": actor['id'],
"id": "https://{}/activities/{}".format(AP_CONFIG['host'], uuid.uuid4())
},
"id": "https://{}/activities/{}".format(AP_CONFIG['host'], uuid.uuid4()),
"actor": "https://{}/actor".format(AP_CONFIG['host'])
}
await push_message_to_actor(actor, message, "https://{}/actor#main-key".format(AP_CONFIG['host']))
tag_re = re.compile(r'(<!--.*?-->|<[^>]*>)')
def strip_html(data):
no_tags = tag_re.sub('', data)
return cgi.escape(no_tags)
def distill_inboxes(actor, object_id):
global DATABASE
origin_hostname = urlsplit(object_id).hostname
inbox = get_actor_inbox(actor)
targets = [target for target in DATABASE.get('relay-list', []) if target != inbox]
targets = [target for target in targets if urlsplit(target).hostname != origin_hostname]
hostnames = [urlsplit(target).hostname for target in targets]
assert inbox not in targets
assert origin_hostname not in hostnames
return targets
def distill_object_id(activity):
logging.debug('>> determining object ID for %r', activity['object'])
obj = activity['object']
if isinstance(obj, str):
return obj
return obj['id']
async def handle_relay(actor, data, request):
global CACHE
object_id = distill_object_id(data)
if object_id in CACHE:
logging.debug('>> already relayed %r as %r', object_id, CACHE[object_id])
return
activity_id = "https://{}/activities/{}".format(request.host, uuid.uuid4())
message = {
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Announce",
"to": ["https://{}/followers".format(request.host)],
"actor": "https://{}/actor".format(request.host),
"object": object_id,
"id": activity_id
}
logging.debug('>> relay: %r', message)
inboxes = distill_inboxes(actor, object_id)
futures = [push_message_to_actor({'inbox': inbox}, message, 'https://{}/actor#main-key'.format(request.host)) for inbox in inboxes]
asyncio.ensure_future(asyncio.gather(*futures))
CACHE[object_id] = activity_id
async def handle_forward(actor, data, request):
object_id = distill_object_id(data)
logging.debug('>> Relay %r', data)
inboxes = distill_inboxes(actor, object_id)
futures = [
push_message_to_actor(
{'inbox': inbox},
data,
'https://{}/actor#main-key'.format(request.host))
for inbox in inboxes]
asyncio.ensure_future(asyncio.gather(*futures))
async def handle_follow(actor, data, request):
global DATABASE
following = DATABASE.get('relay-list', [])
inbox = get_actor_inbox(actor)
if urlsplit(inbox).hostname in AP_CONFIG['blocked_instances']:
return
if inbox not in following:
following += [inbox]
DATABASE['relay-list'] = following
asyncio.ensure_future(follow_remote_actor(actor['id']))
message = {
"@context": "https://www.w3.org/ns/activitystreams",
"type": "Accept",
"to": [actor["id"]],
"actor": "https://{}/actor".format(request.host),
# this is wrong per litepub, but mastodon < 2.4 is not compliant with that profile.
"object": {
"type": "Follow",
"id": data["id"],
"object": "https://{}/actor".format(request.host),
"actor": actor["id"]
},
"id": "https://{}/activities/{}".format(request.host, uuid.uuid4()),
}
asyncio.ensure_future(push_message_to_actor(actor, message, 'https://{}/actor#main-key'.format(request.host)))
async def handle_undo(actor, data, request):
global DATABASE
child = data['object']
if child['type'] == 'Follow':
following = DATABASE.get('relay-list', [])
inbox = get_actor_inbox(actor)
if inbox in following:
following.remove(inbox)
DATABASE['relay-list'] = following
await unfollow_remote_actor(actor['id'])
processors = {
'Announce': handle_relay,
'Create': handle_relay,
'Delete': handle_forward,
'Follow': handle_follow,
'Undo': handle_undo,
'Update': handle_forward,
}
async def inbox(request):
data = await request.json()
instance = urlsplit(data['actor']).hostname
if AP_CONFIG['blocked_software']:
software = await fetch_nodeinfo(instance)
if software and software.lower() in AP_CONFIG['blocked_software']:
raise aiohttp.web.HTTPUnauthorized(body='relays have been blocked', content_type='text/plain')
if 'actor' not in data or not request['validated']:
raise aiohttp.web.HTTPUnauthorized(body='access denied', content_type='text/plain')
elif data['type'] != 'Follow' and 'https://{}/inbox'.format(instance) not in DATABASE['relay-list']:
raise aiohttp.web.HTTPUnauthorized(body='access denied', content_type='text/plain')
elif AP_CONFIG['whitelist_enabled'] is True and instance not in AP_CONFIG['whitelist']:
raise aiohttp.web.HTTPUnauthorized(body='access denied', content_type='text/plain')
actor = await fetch_actor(data["actor"])
actor_uri = 'https://{}/actor'.format(request.host)
logging.debug(">> payload %r", data)
processor = processors.get(data['type'], None)
if processor:
await processor(actor, data, request)
return aiohttp.web.Response(body=b'{}', content_type='application/activity+json')
app.router.add_post('/inbox', inbox)

224
relay/application.py Normal file

@@ -0,0 +1,224 @@
from __future__ import annotations
import asyncio
import os
import signal
import subprocess
import sys
import time
import typing
from aiohttp import web
from aputils.signer import Signer
from datetime import datetime, timedelta
from gunicorn.app.wsgiapp import WSGIApplication
from . import logger as logging
from .cache import get_cache
from .config import Config
from .database import get_database
from .http_client import HttpClient
from .misc import check_open_port
from .views import VIEWS
if typing.TYPE_CHECKING:
from collections.abc import Awaitable
from tinysql import Database, Row
from typing import Any
from .cache import Cache
from .misc import Message
# pylint: disable=unsubscriptable-object
class Application(web.Application):
DEFAULT: Application = None
def __init__(self, cfgpath: str, gunicorn: bool = False):
web.Application.__init__(self)
Application.DEFAULT = self
self['proc'] = None
self['signer'] = None
self['start_time'] = None
self['config'] = Config(cfgpath, load = True)
self['database'] = get_database(self.config)
self['client'] = HttpClient()
self['cache'] = get_cache(self)
if not gunicorn:
return
self.on_response_prepare.append(handle_access_log)
for path, view in VIEWS:
self.router.add_view(path, view)
@property
def cache(self) -> Cache:
return self['cache']
@property
def client(self) -> HttpClient:
return self['client']
@property
def config(self) -> Config:
return self['config']
@property
def database(self) -> Database:
return self['database']
@property
def signer(self) -> Signer:
return self['signer']
@signer.setter
def signer(self, value: Signer | str) -> None:
if isinstance(value, Signer):
self['signer'] = value
return
self['signer'] = Signer(value, self.config.keyid)
@property
def uptime(self) -> timedelta:
if not self['start_time']:
return timedelta(seconds=0)
uptime = datetime.now() - self['start_time']
return timedelta(seconds=uptime.seconds)
def push_message(self, inbox: str, message: Message, instance: Row) -> None:
asyncio.ensure_future(self.client.post(inbox, message, instance))
def run(self, dev: bool = False) -> None:
self.start(dev)
while self['proc'] and self['proc'].poll() is None:
time.sleep(0.1)
self.stop()
def set_signal_handler(self, startup: bool) -> None:
for sig in ('SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGTERM'):
try:
signal.signal(getattr(signal, sig), self.stop if startup else signal.SIG_DFL)
# some signals don't exist in windows, so skip them
except AttributeError:
pass
def start(self, dev: bool = False) -> None:
if self['proc']:
return
if not check_open_port(self.config.listen, self.config.port):
logging.error('Server already running on %s:%s', self.config.listen, self.config.port)
return
cmd = [
sys.executable, '-m', 'gunicorn',
'relay.application:main_gunicorn',
'--bind', f'{self.config.listen}:{self.config.port}',
'--worker-class', 'aiohttp.GunicornWebWorker',
'--workers', str(self.config.workers),
'--env', f'CONFIG_FILE={self.config.path}'
]
if dev:
cmd.append('--reload')
self.set_signal_handler(True)
self['proc'] = subprocess.Popen(cmd) # pylint: disable=consider-using-with
def stop(self, *_) -> None:
if not self['proc']:
return
self['proc'].terminate()
time_wait = 0.0
while self['proc'].poll() is None:
time.sleep(0.1)
time_wait += 0.1
if time_wait >= 5.0:
self['proc'].kill()
break
self.set_signal_handler(False)
self['proc'] = None
# not used, but keeping just in case
class GunicornRunner(WSGIApplication):
def __init__(self, app: Application):
self.app = app
self.app_uri = 'relay.application:main_gunicorn'
self.options = {
'bind': f'{app.config.listen}:{app.config.port}',
'worker_class': 'aiohttp.GunicornWebWorker',
'workers': app.config.workers,
'raw_env': f'CONFIG_FILE={app.config.path}'
}
WSGIApplication.__init__(self)
def load_config(self):
for key, value in self.options.items():
self.cfg.set(key, value)
def run(self):
logging.info('Starting webserver for %s', self.app.config.domain)
WSGIApplication.run(self)
async def handle_access_log(request: web.Request, response: web.Response) -> None:
address = request.headers.get(
'X-Forwarded-For',
request.headers.get(
'X-Real-Ip',
request.remote
)
)
logging.info(
'%s "%s %s" %i %i "%s"',
address,
request.method,
request.path,
response.status,
len(response.body),
request.headers.get('User-Agent', 'n/a')
)
async def main_gunicorn():
try:
app = Application(os.environ['CONFIG_FILE'], gunicorn = True)
except KeyError:
logging.error('Failed to set "CONFIG_FILE" environment. Trying to run without gunicorn?')
raise
return app

288
relay/cache.py Normal file

@@ -0,0 +1,288 @@
from __future__ import annotations
import json
import os
import typing
from abc import ABC, abstractmethod
from dataclasses import asdict, dataclass
from datetime import datetime, timezone
from redis import Redis
from .database import get_database
from .misc import Message, boolean
if typing.TYPE_CHECKING:
from typing import Any
from collections.abc import Callable, Iterator
from tinysql import Database
from .application import Application
# todo: implement more caching backends
BACKENDS: dict[str, Cache] = {}
CONVERTERS: dict[str, tuple[Callable, Callable]] = {
'str': (str, str),
'int': (str, int),
'bool': (str, boolean),
'json': (json.dumps, json.loads),
'message': (lambda x: x.to_json(), Message.parse)
}
def get_cache(app: Application) -> Cache:
return BACKENDS[app.config.ca_type](app)
def register_cache(backend: type[Cache]) -> type[Cache]:
BACKENDS[backend.name] = backend
return backend
def serialize_value(value: Any, value_type: str = 'str') -> str:
if isinstance(value, str):
return value
return CONVERTERS[value_type][0](value)
def deserialize_value(value: str, value_type: str = 'str') -> Any:
return CONVERTERS[value_type][1](value)
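# example round-trip using the 'json' converter pair above:
#   serialize_value({'a': 1}, 'json')     -> '{"a": 1}'
#   deserialize_value('{"a": 1}', 'json') -> {'a': 1}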
@dataclass
class Item:
namespace: str
key: str
value: Any
value_type: str
updated: datetime
def __post_init__(self):
if isinstance(self.updated, str):
self.updated = datetime.fromisoformat(self.updated)
@classmethod
def from_data(cls: type[Item], *args) -> Item:
data = cls(*args)
data.value = deserialize_value(data.value, data.value_type)
if not isinstance(data.updated, datetime):
data.updated = datetime.fromtimestamp(data.updated, tz = timezone.utc)
return data
def older_than(self, hours: int) -> bool:
delta = datetime.now(tz = timezone.utc) - self.updated
return (delta.total_seconds()) > hours * 3600
def to_dict(self) -> dict[str, Any]:
return asdict(self)
class Cache(ABC):
name: str = 'null'
def __init__(self, app: Application):
self.app = app
self.setup()
@abstractmethod
def get(self, namespace: str, key: str) -> Item:
...
@abstractmethod
def get_keys(self, namespace: str) -> Iterator[str]:
...
@abstractmethod
def get_namespaces(self) -> Iterator[str]:
...
@abstractmethod
def set(self, namespace: str, key: str, value: Any, value_type: str = 'key') -> Item:
...
@abstractmethod
def delete(self, namespace: str, key: str) -> None:
...
@abstractmethod
def setup(self) -> None:
...
def set_item(self, item: Item) -> Item:
return self.set(
item.namespace,
item.key,
item.value,
item.value_type
)
def delete_item(self, item: Item) -> None:
self.delete(item.namespace, item.key)
@register_cache
class SqlCache(Cache):
name: str = 'database'
def __init__(self, app: Application):
self._db = get_database(app.config)
Cache.__init__(self, app)
def get(self, namespace: str, key: str) -> Item:
params = {
'namespace': namespace,
'key': key
}
with self._db.connection() as conn:
with conn.exec_statement('get-cache-item', params) as cur:
if not (row := cur.one()):
raise KeyError(f'{namespace}:{key}')
row.pop('id', None)
return Item.from_data(*tuple(row.values()))
def get_keys(self, namespace: str) -> Iterator[str]:
with self._db.connection() as conn:
for row in conn.exec_statement('get-cache-keys', {'namespace': namespace}):
yield row['key']
def get_namespaces(self) -> Iterator[str]:
with self._db.connection() as conn:
for row in conn.exec_statement('get-cache-namespaces', None):
yield row['namespace']
def set(self, namespace: str, key: str, value: Any, value_type: str = 'str') -> Item:
params = {
'namespace': namespace,
'key': key,
'value': serialize_value(value, value_type),
'type': value_type,
'date': datetime.now(tz = timezone.utc)
}
with self._db.connection() as conn:
with conn.exec_statement('set-cache-item', params) as conn:
row = conn.one()
row.pop('id', None)
return Item.from_data(*tuple(row.values()))
def delete(self, namespace: str, key: str) -> None:
params = {
'namespace': namespace,
'key': key
}
with self._db.connection() as conn:
with conn.exec_statement('del-cache-item', params):
pass
def setup(self) -> None:
with self._db.connection() as conn:
with conn.exec_statement(f'create-cache-table-{self._db.type.name.lower()}', None):
pass
@register_cache
class RedisCache(Cache):
name: str = 'redis'
_rd: Redis
@property
def prefix(self) -> str:
return self.app.config.rd_prefix
def get_key_name(self, namespace: str, key: str) -> str:
return f'{self.prefix}:{namespace}:{key}'
def get(self, namespace: str, key: str) -> Item:
key_name = self.get_key_name(namespace, key)
if not (raw_value := self._rd.get(key_name)):
raise KeyError(f'{namespace}:{key}')
value_type, updated, value = raw_value.split(':', 2)
return Item.from_data(
namespace,
key,
value,
value_type,
datetime.fromtimestamp(float(updated), tz = timezone.utc)
)
def get_keys(self, namespace: str) -> Iterator[str]:
for key in self._rd.keys(self.get_key_name(namespace, '*')):
*_, key_name = key.split(':', 2)
yield key_name
def get_namespaces(self) -> Iterator[str]:
namespaces = []
for key in self._rd.keys(f'{self.prefix}:*'):
_, namespace, _ = key.split(':', 2)
if namespace not in namespaces:
	namespaces.append(namespace)
	yield namespace
def set(self, namespace: str, key: str, value: Any, value_type: str = 'str') -> Item:
	date = datetime.now(tz = timezone.utc)
	value = serialize_value(value, value_type)
	self._rd.set(
		self.get_key_name(namespace, key),
		f'{value_type}:{date.timestamp()}:{value}'
	)
	return Item.from_data(namespace, key, value, value_type, date)
def delete(self, namespace: str, key: str) -> None:
self._rd.delete(self.get_key_name(namespace, key))
def setup(self) -> None:
options = {
'client_name': f'ActivityRelay_{self.app.config.domain}',
'decode_responses': True,
'username': self.app.config.rd_user,
'password': self.app.config.rd_pass,
'db': self.app.config.rd_database
}
if os.path.exists(self.app.config.rd_host):
options['unix_socket_path'] = self.app.config.rd_host
else:
options['host'] = self.app.config.rd_host
options['port'] = self.app.config.rd_port
self._rd = Redis(**options)
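For reference, a minimal usage sketch of the cache API above. The config path, namespace, and key are invented for illustration, and SqlCache could be swapped for RedisCache when the cache type is set to 'redis':

# Hypothetical usage sketch (not part of the diff): exercising the Cache API.
from relay.application import Application
from relay.cache import SqlCache

app = Application('relay.yaml')  # assumed config path
cache = SqlCache(app)

cache.set('request', 'https://example.com/actor', '{"type": "Application"}', 'str')
item = cache.get('request', 'https://example.com/actor')

if not item.older_than(48):  # the same freshness window HttpClient.get uses
	print(item.value)

for namespace in cache.get_namespaces():
	print(namespace, list(cache.get_keys(namespace)))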

relay/compat.py Normal file

@@ -0,0 +1,304 @@
from __future__ import annotations
import json
import os
import typing
import yaml
from functools import cached_property
from pathlib import Path
from urllib.parse import urlparse
from . import logger as logging
from .misc import Message, boolean
if typing.TYPE_CHECKING:
from collections.abc import Iterator
from typing import Any
# pylint: disable=duplicate-code
class RelayConfig(dict):
def __init__(self, path: str):
dict.__init__(self, {})
if self.is_docker:
path = '/data/config.yaml'
self._path = Path(path).expanduser().resolve()
self.reset()
def __setitem__(self, key: str, value: Any) -> None:
if key in {'blocked_instances', 'blocked_software', 'whitelist'}:
assert isinstance(value, (list, set, tuple))
elif key in {'port', 'workers', 'json_cache', 'timeout'}:
if not isinstance(value, int):
value = int(value)
elif key == 'whitelist_enabled':
if not isinstance(value, bool):
value = boolean(value)
super().__setitem__(key, value)
@property
def db(self) -> Path:
return Path(self['db']).expanduser().resolve()
@property
def actor(self) -> str:
return f'https://{self["host"]}/actor'
@property
def inbox(self) -> str:
return f'https://{self["host"]}/inbox'
@property
def keyid(self) -> str:
return f'{self.actor}#main-key'
@cached_property
def is_docker(self) -> bool:
return bool(os.environ.get('DOCKER_RUNNING'))
def reset(self) -> None:
self.clear()
self.update({
'db': str(self._path.parent.joinpath(f'{self._path.stem}.jsonld')),
'listen': '0.0.0.0',
'port': 8080,
'note': 'Make a note about your instance here.',
'push_limit': 512,
'json_cache': 1024,
'timeout': 10,
'workers': 0,
'host': 'relay.example.com',
'whitelist_enabled': False,
'blocked_software': [],
'blocked_instances': [],
'whitelist': []
})
def load(self) -> None:
self.reset()
options = {}
try:
options['Loader'] = yaml.FullLoader
except AttributeError:
pass
try:
with self._path.open('r', encoding = 'UTF-8') as fd:
config = yaml.load(fd, **options)
except FileNotFoundError:
return
if not config:
return
for key, value in config.items():
if key == 'ap':
for k, v in value.items():
if k not in self:
continue
self[k] = v
continue
if key not in self:
continue
self[key] = value
class RelayDatabase(dict):
def __init__(self, config: RelayConfig):
dict.__init__(self, {
'relay-list': {},
'private-key': None,
'follow-requests': {},
'version': 1
})
self.config = config
self.signer = None
@property
def hostnames(self) -> tuple[str, ...]:
return tuple(self['relay-list'].keys())
@property
def inboxes(self) -> tuple[str, ...]:
return tuple(data['inbox'] for data in self['relay-list'].values())
def load(self) -> None:
try:
with self.config.db.open() as fd:
data = json.load(fd)
self['version'] = data.get('version', None)
self['private-key'] = data.get('private-key')
if self['version'] is None:
self['version'] = 1
if 'actorKeys' in data:
self['private-key'] = data['actorKeys']['privateKey']
for item in data.get('relay-list', []):
domain = urlparse(item).hostname
self['relay-list'][domain] = {
'domain': domain,
'inbox': item,
'followid': None
}
else:
self['relay-list'] = data.get('relay-list', {})
for domain, instance in self['relay-list'].items():
if not instance.get('domain'):
instance['domain'] = domain
except FileNotFoundError:
pass
except json.decoder.JSONDecodeError as e:
if self.config.db.stat().st_size > 0:
raise e from None
def save(self) -> None:
with self.config.db.open('w', encoding = 'UTF-8') as fd:
json.dump(self, fd, indent=4)
def get_inbox(self, domain: str, fail: bool = False) -> dict[str, str] | None:
if domain.startswith('http'):
domain = urlparse(domain).hostname
if (inbox := self['relay-list'].get(domain)):
return inbox
if fail:
raise KeyError(domain)
return None
def add_inbox(self,
inbox: str,
followid: str | None = None,
software: str | None = None) -> dict[str, str]:
assert inbox.startswith('https'), 'Inbox must be a url'
domain = urlparse(inbox).hostname
if (instance := self.get_inbox(domain)):
if followid:
instance['followid'] = followid
if software:
instance['software'] = software
return instance
self['relay-list'][domain] = {
'domain': domain,
'inbox': inbox,
'followid': followid,
'software': software
}
logging.verbose('Added inbox to database: %s', inbox)
return self['relay-list'][domain]
def del_inbox(self,
domain: str,
followid: str | None = None,
fail: bool = False) -> bool:
if not (data := self.get_inbox(domain, fail=False)):
if fail:
raise KeyError(domain)
return False
if not data['followid'] or not followid or data['followid'] == followid:
del self['relay-list'][data['domain']]
logging.verbose('Removed inbox from database: %s', data['inbox'])
return True
if fail:
raise ValueError('Follow IDs do not match')
logging.debug('Follow ID does not match: db = %s, object = %s', data['followid'], followid)
return False
def get_request(self, domain: str, fail: bool = True) -> dict[str, str] | None:
if domain.startswith('http'):
domain = urlparse(domain).hostname
try:
return self['follow-requests'][domain]
except KeyError as e:
if fail:
raise e
return None
def add_request(self, actor: str, inbox: str, followid: str) -> None:
domain = urlparse(inbox).hostname
try:
	request = self.get_request(domain)
	request['followid'] = followid
	return
except KeyError:
	pass
self['follow-requests'][domain] = {
'actor': actor,
'inbox': inbox,
'followid': followid
}
def del_request(self, domain: str) -> None:
if domain.startswith('http'):
domain = urlparse(domain).hostname
del self['follow-requests'][domain]
def distill_inboxes(self, message: Message) -> Iterator[str]:
src_domains = {
message.domain,
urlparse(message.object_id).netloc
}
for domain, instance in self['relay-list'].items():
if domain not in src_domains:
yield instance['inbox']
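A short sketch of how these legacy classes are driven together; this is essentially what the convert command in relay/manage.py does, and the path here is an assumption:

# Hypothetical usage sketch: loading an old-style config and jsonld database.
from relay.compat import RelayConfig, RelayDatabase

config = RelayConfig('relay.yaml')
config.load()

database = RelayDatabase(config)
database.load()

print(database.hostnames)
print(database.get_inbox('mastodon.example'))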

relay/config.py Normal file

@@ -0,0 +1,193 @@
from __future__ import annotations
import getpass
import os
import typing
import yaml
from pathlib import Path
from .misc import IS_DOCKER
if typing.TYPE_CHECKING:
from typing import Any
DEFAULTS: dict[str, Any] = {
'listen': '0.0.0.0',
'port': 8080,
'domain': 'relay.example.com',
'workers': len(os.sched_getaffinity(0)),
'db_type': 'sqlite',
'ca_type': 'database',
'sq_path': 'relay.sqlite3',
'pg_host': '/var/run/postgresql',
'pg_port': 5432,
'pg_user': getpass.getuser(),
'pg_pass': None,
'pg_name': 'activityrelay',
'rd_host': 'localhost',
'rd_port': 6379,
'rd_user': None,
'rd_pass': None,
'rd_database': 0,
'rd_prefix': 'activityrelay'
}
if IS_DOCKER:
DEFAULTS['sq_path'] = '/data/relay.jsonld'
class Config:
def __init__(self, path: str, load: bool = False):
self.path = Path(path).expanduser().resolve()
self.listen = None
self.port = None
self.domain = None
self.workers = None
self.db_type = None
self.ca_type = None
self.sq_path = None
self.pg_host = None
self.pg_port = None
self.pg_user = None
self.pg_pass = None
self.pg_name = None
self.rd_host = None
self.rd_port = None
self.rd_user = None
self.rd_pass = None
self.rd_database = None
self.rd_prefix = None
if load:
try:
self.load()
except FileNotFoundError:
self.save()
@property
def sqlite_path(self) -> Path:
if not os.path.isabs(self.sq_path):
return self.path.parent.joinpath(self.sq_path).resolve()
return Path(self.sq_path).expanduser().resolve()
@property
def actor(self) -> str:
return f'https://{self.domain}/actor'
@property
def inbox(self) -> str:
return f'https://{self.domain}/inbox'
@property
def keyid(self) -> str:
return f'{self.actor}#main-key'
def load(self) -> None:
self.reset()
options = {}
try:
options['Loader'] = yaml.FullLoader
except AttributeError:
pass
with self.path.open('r', encoding = 'UTF-8') as fd:
config = yaml.load(fd, **options)
if not config:
	raise ValueError('Config is empty')
pgcfg = config.get('postgres', {})
rdcfg = config.get('redis', {})
if IS_DOCKER:
self.listen = '0.0.0.0'
self.port = 8080
self.sq_path = '/data/relay.jsonld'
else:
self.set('listen', config.get('listen', DEFAULTS['listen']))
self.set('port', config.get('port', DEFAULTS['port']))
self.set('sq_path', config.get('sqlite_path', DEFAULTS['sq_path']))
self.set('workers', config.get('workers', DEFAULTS['workers']))
self.set('domain', config.get('domain', DEFAULTS['domain']))
self.set('db_type', config.get('database_type', DEFAULTS['db_type']))
self.set('ca_type', config.get('cache_type', DEFAULTS['ca_type']))
for key in DEFAULTS:
if key.startswith('pg'):
try:
self.set(key, pgcfg[key[3:]])
except KeyError:
continue
elif key.startswith('rd'):
try:
self.set(key, rdcfg[key[3:]])
except KeyError:
continue
def reset(self) -> None:
for key, value in DEFAULTS.items():
setattr(self, key, value)
def save(self) -> None:
self.path.parent.mkdir(exist_ok = True, parents = True)
config = {
'listen': self.listen,
'port': self.port,
'domain': self.domain,
'workers': self.workers,
'database_type': self.db_type,
'cache_type': self.ca_type,
'sqlite_path': self.sq_path,
'postgres': {
'host': self.pg_host,
'port': self.pg_port,
'user': self.pg_user,
'pass': self.pg_pass,
'name': self.pg_name
},
'redis': {
'host': self.rd_host,
'port': self.rd_port,
'user': self.rd_user,
'pass': self.rd_pass,
'database': self.rd_database,
'prefix': self.rd_prefix
}
}
with self.path.open('w', encoding = 'utf-8') as fd:
yaml.dump(config, fd, sort_keys = False)
def set(self, key: str, value: Any) -> None:
if key not in DEFAULTS:
raise KeyError(key)
if key in {'port', 'pg_port', 'rd_port', 'rd_database', 'workers'} and not isinstance(value, int):
	value = int(value)
setattr(self, key, value)
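A small sketch of the Config round trip; note that set() coerces string input for the integer keys, so values read from prompts or the environment can be passed straight through (the path and domain are invented):

# Hypothetical usage sketch: create, tweak, and persist a Config.
from relay.config import Config

config = Config('relay.yaml', load = True)  # writes defaults if the file is missing
config.set('domain', 'relay.example.net')
config.set('port', '8080')                  # coerced to int by set()
config.save()

print(config.actor, config.inbox, config.keyid)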

relay/data/statements.sql Normal file

@@ -0,0 +1,140 @@
-- name: get-config
SELECT * FROM config WHERE key = :key
-- name: get-config-all
SELECT * FROM config
-- name: put-config
INSERT INTO config (key, value, type)
VALUES (:key, :value, :type)
ON CONFLICT (key) DO UPDATE SET value = :value
RETURNING *
-- name: del-config
DELETE FROM config
WHERE key = :key
-- name: get-inbox
SELECT * FROM inboxes WHERE domain = :value or inbox = :value or actor = :value
-- name: put-inbox
INSERT INTO inboxes (domain, actor, inbox, followid, software, created)
VALUES (:domain, :actor, :inbox, :followid, :software, :created)
ON CONFLICT (domain) DO UPDATE SET followid = :followid
RETURNING *
-- name: del-inbox
DELETE FROM inboxes
WHERE domain = :value or inbox = :value or actor = :value
-- name: get-software-ban
SELECT * FROM software_bans WHERE name = :name
-- name: put-software-ban
INSERT INTO software_bans (name, reason, note, created)
VALUES (:name, :reason, :note, :created)
RETURNING *
-- name: del-software-ban
DELETE FROM software_bans
WHERE name = :name
-- name: get-domain-ban
SELECT * FROM domain_bans WHERE domain = :domain
-- name: put-domain-ban
INSERT INTO domain_bans (domain, reason, note, created)
VALUES (:domain, :reason, :note, :created)
RETURNING *
-- name: del-domain-ban
DELETE FROM domain_bans
WHERE domain = :domain
-- name: get-domain-whitelist
SELECT * FROM whitelist WHERE domain = :domain
-- name: put-domain-whitelist
INSERT INTO whitelist (domain, created)
VALUES (:domain, :created)
RETURNING *
-- name: del-domain-whitelist
DELETE FROM whitelist
WHERE domain = :domain
-- cache functions --
-- name: create-cache-table-sqlite
CREATE TABLE IF NOT EXISTS cache (
id INTEGER PRIMARY KEY UNIQUE,
namespace TEXT NOT NULL,
key TEXT NOT NULL,
"value" TEXT,
type TEXT DEFAULT 'str',
updated TIMESTAMP NOT NULL,
UNIQUE(namespace, key)
)
-- name: create-cache-table-postgres
CREATE TABLE IF NOT EXISTS cache (
id SERIAL PRIMARY KEY,
namespace TEXT NOT NULL,
key TEXT NOT NULL,
"value" TEXT,
type TEXT DEFAULT 'str',
updated TIMESTAMP NOT NULL,
UNIQUE(namespace, key)
)
-- name: get-cache-item
SELECT * FROM cache
WHERE namespace = :namespace and key = :key
-- name: get-cache-keys
SELECT key FROM cache
WHERE namespace = :namespace
-- name: get-cache-namespaces
SELECT DISTINCT namespace FROM cache
-- name: set-cache-item
INSERT INTO cache (namespace, key, value, type, updated)
VALUES (:namespace, :key, :value, :type, :date)
ON CONFLICT (namespace, key) DO
UPDATE SET value = :value, type = :type, updated = :date
RETURNING *
-- name: del-cache-item
DELETE FROM cache
WHERE namespace = :namespace and key = :key
-- name: del-cache-namespace
DELETE FROM cache
WHERE namespace = :namespace
-- name: del-cache-all
DELETE FROM cache
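These named statements are loaded via Database.load_prepared_statements() and executed by name through Connection.exec_statement(); a sketch, assuming db came from get_database() and using an invented key:

# Hypothetical usage sketch: running one of the named statements above.
with db.connection() as conn:
	with conn.exec_statement('get-cache-item', {
		'namespace': 'request',
		'key': 'https://example.com/actor'
	}) as cur:
		row = cur.one()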

@@ -1,43 +0,0 @@
import asyncio
import logging
import urllib.parse
import simplejson as json
from sys import exit
from . import CONFIG
AP_CONFIG = CONFIG['ap']
try:
with open(CONFIG['db']) as f:
DATABASE = json.load(f)
except FileNotFoundError:
logging.info('No database was found, making a new one.')
DATABASE = {}
except json.decoder.JSONDecodeError:
logging.info('Invalid JSON in db. Exiting...')
exit(1)
following = DATABASE.get('relay-list', [])
for inbox in following:
if urllib.parse.urlsplit(inbox).hostname in AP_CONFIG['blocked_instances']:
following.remove(inbox)
elif AP_CONFIG['whitelist_enabled'] is True and urllib.parse.urlsplit(inbox).hostname not in AP_CONFIG['whitelist']:
following.remove(inbox)
DATABASE['relay-list'] = following
if 'actors' in DATABASE:
DATABASE.pop('actors')
async def database_save():
while True:
with open(CONFIG['db'], 'w') as f:
json.dump(DATABASE, f)
await asyncio.sleep(30)
asyncio.ensure_future(database_save())

@@ -0,0 +1,69 @@
from __future__ import annotations
import tinysql
import typing
from .config import get_default_value
from .connection import Connection
from .schema import VERSIONS, migrate_0
from .. import logger as logging
try:
from importlib.resources import files as pkgfiles
except ImportError:
from importlib_resources import files as pkgfiles
if typing.TYPE_CHECKING:
from .config import Config
def get_database(config: Config, migrate: bool = True) -> tinysql.Database:
if config.db_type == "sqlite":
db = tinysql.Database.sqlite(
config.sqlite_path,
connection_class = Connection,
min_connections = 2,
max_connections = 10
)
elif config.db_type == "postgres":
db = tinysql.Database.postgres(
config.pg_name,
config.pg_host,
config.pg_port,
config.pg_user,
config.pg_pass,
connection_class = Connection
)
db.load_prepared_statements(pkgfiles("relay").joinpath("data", "statements.sql"))
if not migrate:
return db
with db.connection() as conn:
if 'config' not in conn.get_tables():
logging.info("Creating database tables")
migrate_0(conn)
return db
if (schema_ver := conn.get_config('schema-version')) < get_default_value('schema-version'):
logging.info("Migrating database from version '%i'", schema_ver)
for ver, func in VERSIONS.items():
if schema_ver < ver:
conn.begin()
func(conn)
conn.put_config('schema-version', ver)
conn.commit()
if (privkey := conn.get_config('private-key')):
conn.app.signer = privkey
logging.set_level(conn.get_config('log-level'))
return db
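Putting it together, a sketch of opening the database from a config file (the path is an assumption):

# Hypothetical usage sketch: open the database and read a stored config value.
from relay.config import Config
from relay.database import get_database

config = Config('relay.yaml', load = True)
db = get_database(config)  # runs migrations unless migrate = False

with db.connection() as conn:
	print(conn.get_config('log-level'))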

relay/database/config.py Normal file

@@ -0,0 +1,45 @@
from __future__ import annotations
import typing
from .. import logger as logging
from ..misc import boolean
if typing.TYPE_CHECKING:
from collections.abc import Callable
from typing import Any
CONFIG_DEFAULTS: dict[str, tuple[str, Any]] = {
'schema-version': ('int', 20240119),
'log-level': ('loglevel', logging.LogLevel.INFO),
'note': ('str', 'Make a note about your instance here.'),
'private-key': ('str', None),
'whitelist-enabled': ('bool', False)
}
# serializer | deserializer
CONFIG_CONVERT: dict[str, tuple[Callable, Callable]] = {
'str': (str, str),
'int': (str, int),
'bool': (str, boolean),
'loglevel': (lambda x: x.name, logging.LogLevel.parse)
}
def get_default_value(key: str) -> Any:
return CONFIG_DEFAULTS[key][1]
def get_default_type(key: str) -> str:
return CONFIG_DEFAULTS[key][0]
def serialize(key: str, value: Any) -> str:
type_name = get_default_type(key)
return CONFIG_CONVERT[type_name][0](value)
def deserialize(key: str, value: str) -> Any:
type_name = get_default_type(key)
return CONFIG_CONVERT[type_name][1](value)
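A quick illustration of the round trip, since every config value is stored as text and rehydrated through CONFIG_CONVERT:

# Hypothetical usage sketch: serializing and deserializing a typed value.
from relay.database.config import deserialize, serialize

raw = serialize('whitelist-enabled', True)    # stored as the text 'True'
print(deserialize('whitelist-enabled', raw))  # parsed back to True via boolean()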

@@ -0,0 +1,296 @@
from __future__ import annotations
import tinysql
import typing
from datetime import datetime, timezone
from urllib.parse import urlparse
from .config import CONFIG_DEFAULTS, get_default_type, get_default_value, serialize, deserialize
from .. import logger as logging
from ..misc import get_app
if typing.TYPE_CHECKING:
from collections.abc import Iterator
from tinysql import Cursor, Row
from typing import Any
from .application import Application
from ..misc import Message
RELAY_SOFTWARE = [
'activityrelay', # https://git.pleroma.social/pleroma/relay
'activity-relay', # https://github.com/yukimochi/Activity-Relay
'aoderelay', # https://git.asonix.dog/asonix/relay
'feditools-relay' # https://git.ptzo.gdn/feditools/relay
]
class Connection(tinysql.Connection):
@property
def app(self) -> Application:
return get_app()
def distill_inboxes(self, message: Message) -> Iterator[str]:
src_domains = {
message.domain,
urlparse(message.object_id).netloc
}
for inbox in self.execute('SELECT * FROM inboxes'):
if inbox['domain'] not in src_domains:
yield inbox['inbox']
def exec_statement(self, name: str, params: dict[str, Any] | None = None) -> Cursor:
return self.execute(self.database.prepared_statements[name], params)
def get_config(self, key: str) -> Any:
if key not in CONFIG_DEFAULTS:
raise KeyError(key)
with self.exec_statement('get-config', {'key': key}) as cur:
if not (row := cur.one()):
return get_default_value(key)
if row['value']:
return deserialize(row['key'], row['value'])
return None
def get_config_all(self) -> dict[str, Any]:
with self.exec_statement('get-config-all') as cur:
db_config = {row['key']: row['value'] for row in cur}
config = {}
for key, data in CONFIG_DEFAULTS.items():
try:
config[key] = deserialize(key, db_config[key])
except KeyError:
if key == 'schema-version':
config[key] = 0
else:
config[key] = data[1]
return config
def put_config(self, key: str, value: Any) -> Any:
if key not in CONFIG_DEFAULTS:
raise KeyError(key)
if key == 'private-key':
self.app.signer = value
elif key == 'log-level':
value = logging.LogLevel.parse(value)
logging.set_level(value)
params = {
'key': key,
'value': serialize(key, value) if value is not None else None,
'type': get_default_type(key)
}
with self.exec_statement('put-config', params):
return value
def get_inbox(self, value: str) -> Row:
with self.exec_statement('get-inbox', {'value': value}) as cur:
return cur.one()
def put_inbox(self,
domain: str,
inbox: str,
actor: str | None = None,
followid: str | None = None,
software: str | None = None) -> Row:
params = {
'domain': domain,
'inbox': inbox,
'actor': actor,
'followid': followid,
'software': software,
'created': datetime.now(tz = timezone.utc)
}
with self.exec_statement('put-inbox', params) as cur:
return cur.one()
def update_inbox(self,
inbox: str,
actor: str | None = None,
followid: str | None = None,
software: str | None = None) -> Row:
if not (actor or followid or software):
raise ValueError('Missing "actor", "followid", and/or "software"')
data = {}
if actor:
data['actor'] = actor
if followid:
data['followid'] = followid
if software:
data['software'] = software
statement = tinysql.Update('inboxes', data, inbox = inbox)
with self.query(statement):
return self.get_inbox(inbox)
def del_inbox(self, value: str) -> bool:
with self.exec_statement('del-inbox', {'value': value}) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return cur.modified_row_count == 1
def get_domain_ban(self, domain: str) -> Row:
if domain.startswith('http'):
domain = urlparse(domain).netloc
with self.exec_statement('get-domain-ban', {'domain': domain}) as cur:
return cur.one()
def put_domain_ban(self,
domain: str,
reason: str | None = None,
note: str | None = None) -> Row:
params = {
'domain': domain,
'reason': reason,
'note': note,
'created': datetime.now(tz = timezone.utc)
}
with self.exec_statement('put-domain-ban', params) as cur:
return cur.one()
def update_domain_ban(self,
domain: str,
reason: str | None = None,
note: str | None = None) -> Row:
if not (reason or note):
raise ValueError('"reason" and/or "note" must be specified')
params = {}
if reason:
params['reason'] = reason
if note:
params['note'] = note
statement = tinysql.Update('domain_bans', params, domain = domain)
with self.query(statement) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return self.get_domain_ban(domain)
def del_domain_ban(self, domain: str) -> bool:
with self.exec_statement('del-domain-ban', {'domain': domain}) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return cur.modified_row_count == 1
def get_software_ban(self, name: str) -> Row:
with self.exec_statement('get-software-ban', {'name': name}) as cur:
return cur.one()
def put_software_ban(self,
name: str,
reason: str | None = None,
note: str | None = None) -> Row:
params = {
'name': name,
'reason': reason,
'note': note,
'created': datetime.now(tz = timezone.utc)
}
with self.exec_statement('put-software-ban', params) as cur:
return cur.one()
def update_software_ban(self,
name: str,
reason: str | None = None,
note: str | None = None) -> Row:
if not (reason or note):
raise ValueError('"reason" and/or "note" must be specified')
params = {}
if reason:
params['reason'] = reason
if note:
params['note'] = note
statement = tinysql.Update('software_bans', params, name = name)
with self.query(statement) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return self.get_software_ban(name)
def del_software_ban(self, name: str) -> bool:
with self.exec_statement('del-software-ban', {'name': name}) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return cur.modified_row_count == 1
def get_domain_whitelist(self, domain: str) -> Row:
with self.exec_statement('get-domain-whitelist', {'domain': domain}) as cur:
return cur.one()
def put_domain_whitelist(self, domain: str) -> Row:
params = {
'domain': domain,
'created': datetime.now(tz = timezone.utc)
}
with self.exec_statement('put-domain-whitelist', params) as cur:
return cur.one()
def del_domain_whitelist(self, domain: str) -> bool:
with self.exec_statement('del-domain-whitelist', {'domain': domain}) as cur:
if cur.modified_row_count > 1:
raise ValueError('More than one row was modified')
return cur.modified_row_count == 1
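For illustration, a sketch of a typical inbox lifecycle through these helpers; db is assumed to come from get_database(), and the domain and follow id are invented:

# Hypothetical usage sketch: add, update, and remove an inbox row.
with db.connection() as conn:
	conn.put_inbox(
		'mastodon.example',
		'https://mastodon.example/inbox',
		actor = 'https://mastodon.example/actor',
		software = 'mastodon'
	)
	conn.update_inbox('https://mastodon.example/inbox', followid = 'https://relay.example.com/activities/1')
	conn.del_inbox('mastodon.example')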

relay/database/schema.py Normal file

@@ -0,0 +1,60 @@
from __future__ import annotations
import typing
from tinysql import Column, Connection, Table
from .config import get_default_value
if typing.TYPE_CHECKING:
from collections.abc import Callable
VERSIONS: dict[int, Callable] = {}
TABLES: list[Table] = [
Table(
'config',
Column('key', 'text', primary_key = True, unique = True, nullable = False),
Column('value', 'text'),
Column('type', 'text', default = 'str')
),
Table(
'inboxes',
Column('domain', 'text', primary_key = True, unique = True, nullable = False),
Column('actor', 'text', unique = True),
Column('inbox', 'text', unique = True, nullable = False),
Column('followid', 'text'),
Column('software', 'text'),
Column('created', 'timestamp', nullable = False)
),
Table(
'whitelist',
Column('domain', 'text', primary_key = True, unique = True, nullable = True),
Column('created', 'timestamp')
),
Table(
'domain_bans',
Column('domain', 'text', primary_key = True, unique = True, nullable = True),
Column('reason', 'text'),
Column('note', 'text'),
Column('created', 'timestamp', nullable = False)
),
Table(
'software_bans',
Column('name', 'text', primary_key = True, unique = True, nullable = True),
Column('reason', 'text'),
Column('note', 'text'),
Column('created', 'timestamp', nullable = False)
)
]
def version(func: Callable) -> Callable:
ver = int(func.__name__.replace('migrate_', ''))
VERSIONS[ver] = func
return func
def migrate_0(conn: Connection) -> None:
conn.create_tables(TABLES)
conn.put_config('schema-version', get_default_value('schema-version'))
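For illustration, a hypothetical later migration would be registered through the decorator, with the 'schema-version' default in relay/database/config.py bumped to match; the column added here is invented:

# Hypothetical example: @version derives 20240201 from the function name,
# so get_database() can apply it once the stored schema-version is older.
@version
def migrate_20240201(conn: Connection) -> None:
	conn.execute('ALTER TABLE inboxes ADD COLUMN accepted BOOLEAN')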

@@ -1,36 +0,0 @@
import aiohttp.web
import urllib.parse
from . import app, CONFIG
from .database import DATABASE
host = CONFIG['ap']['host']
note = CONFIG['note']
inboxes = DATABASE.get('relay-list', [])
async def default(request):
targets = '<br>'.join([urllib.parse.urlsplit(target).hostname for target in inboxes])
return aiohttp.web.Response(
status=200,
content_type="text/html",
charset="utf-8",
text="""
<html><head>
<title>ActivityPub Relay at {host}</title>
<style>
p {{ color: #FFFFFF; font-family: monospace, arial; font-size: 100%; }}
body {{ background-color: #000000; }}
</style>
</head>
<body>
<p>This is an Activity Relay for fediverse instances.</p>
<p>{note}</p>
<p>For Mastodon and Misskey instances, you may subscribe to this relay with the address: <a href="https://{host}/inbox">https://{host}/inbox</a></p>
<p>For Pleroma and other instances, you may subscribe to this relay with the address: <a href="https://{host}/actor">https://{host}/actor</a></p>
<p>To host your own relay, you may download the code at this address: <a href="https://git.pleroma.social/pleroma/relay">https://git.pleroma.social/pleroma/relay</a></p>
<br><p>List of {count} registered instances:<br>{targets}</p>
</body></html>
""".format(host=host, note=note,targets=targets,count=len(inboxes)))
app.router.add_get('/', default)

relay/http_client.py Normal file

@@ -0,0 +1,234 @@
from __future__ import annotations
import json
import traceback
import typing
from aiohttp import ClientSession, ClientTimeout, TCPConnector
from aiohttp.client_exceptions import ClientConnectionError, ClientSSLError
from asyncio.exceptions import TimeoutError as AsyncTimeoutError
from aputils.objects import Nodeinfo, WellKnownNodeinfo
from json.decoder import JSONDecodeError
from urllib.parse import urlparse
from . import __version__
from . import logger as logging
from .misc import MIMETYPES, Message, get_app
if typing.TYPE_CHECKING:
	from aputils import Signer
	from collections.abc import Callable
	from tinysql import Row
	from typing import Any
	from .application import Application
	from .cache import Cache
HEADERS = {
'Accept': f'{MIMETYPES["activity"]}, {MIMETYPES["json"]};q=0.9',
'User-Agent': f'ActivityRelay/{__version__}'
}
class HttpClient:
def __init__(self, limit: int = 100, timeout: int = 10):
self.limit = limit
self.timeout = timeout
self._conn = None
self._session = None
async def __aenter__(self) -> HttpClient:
await self.open()
return self
async def __aexit__(self, *_: Any) -> None:
await self.close()
@property
def app(self) -> Application:
return get_app()
@property
def cache(self) -> Cache:
return self.app.cache
@property
def signer(self) -> Signer:
return self.app.signer
async def open(self) -> None:
if self._session:
return
self._conn = TCPConnector(
limit = self.limit,
ttl_dns_cache = 300,
)
self._session = ClientSession(
connector = self._conn,
headers = HEADERS,
connector_owner = True,
timeout = ClientTimeout(total=self.timeout)
)
async def close(self) -> None:
if not self._session:
return
await self._session.close()
await self._conn.close()
self._conn = None
self._session = None
async def get(self, # pylint: disable=too-many-branches
url: str,
sign_headers: bool = False,
loads: Callable = json.loads,
force: bool = False) -> dict | None:
await self.open()
try:
url, _ = url.split('#', 1)
except ValueError:
pass
if not force:
try:
item = self.cache.get('request', url)
if not item.older_than(48):
return loads(item.value)
except KeyError:
logging.verbose('Failed to fetch cached data for url: %s', url)
headers = {}
if sign_headers:
	headers.update(self.signer.sign_headers('GET', url, algorithm = 'original'))
try:
logging.debug('Fetching resource: %s', url)
async with self._session.get(url, headers=headers) as resp:
## Not expecting a response with 202s, so just return
if resp.status == 202:
return None
data = await resp.read()
if resp.status != 200:
logging.verbose('Received error when requesting %s: %i', url, resp.status)
logging.debug(await resp.read())
return None
message = loads(data)
self.cache.set('request', url, data.decode('utf-8'), 'str')
logging.debug('%s >> resp %s', url, json.dumps(message, indent = 4))
return message
except JSONDecodeError:
logging.verbose('Failed to parse JSON')
return None
except ClientSSLError:
logging.verbose('SSL error when connecting to %s', urlparse(url).netloc)
except (AsyncTimeoutError, ClientConnectionError):
logging.verbose('Failed to connect to %s', urlparse(url).netloc)
except Exception:
traceback.print_exc()
return None
async def post(self, url: str, message: Message, instance: Row | None = None) -> None:
await self.open()
## Using the old algo by default is probably a better idea right now
# pylint: disable=consider-ternary-expression
if instance and instance['software'] in {'mastodon'}:
algorithm = 'hs2019'
else:
algorithm = 'original'
# pylint: enable=consider-ternary-expression
headers = {'Content-Type': 'application/activity+json'}
headers.update(self.signer.sign_headers('POST', url, message, algorithm = algorithm))
try:
logging.verbose('Sending "%s" to %s', message.type, url)
async with self._session.post(url, headers=headers, data=message.to_json()) as resp:
# Not expecting a response, so just return
if resp.status in {200, 202}:
logging.verbose('Successfully sent "%s" to %s', message.type, url)
return
logging.verbose('Received error when pushing to %s: %i', url, resp.status)
logging.debug(await resp.read())
return
except ClientSSLError:
logging.warning('SSL error when pushing to %s', urlparse(url).netloc)
except (AsyncTimeoutError, ClientConnectionError):
logging.warning('Failed to connect to %s for message push', urlparse(url).netloc)
# prevent workers from being brought down
except Exception:
traceback.print_exc()
async def fetch_nodeinfo(self, domain: str) -> Nodeinfo | None:
nodeinfo_url = None
wk_nodeinfo = await self.get(
f'https://{domain}/.well-known/nodeinfo',
loads = WellKnownNodeinfo.parse
)
if not wk_nodeinfo:
logging.verbose('Failed to fetch well-known nodeinfo url for %s', domain)
return None
for version in ('20', '21'):
try:
nodeinfo_url = wk_nodeinfo.get_url(version)
except KeyError:
pass
if not nodeinfo_url:
logging.verbose('Failed to fetch nodeinfo url for %s', domain)
return None
return await self.get(nodeinfo_url, loads = Nodeinfo.parse) or None
async def get(*args: Any, **kwargs: Any) -> Message | dict | None:
async with HttpClient() as client:
return await client.get(*args, **kwargs)
async def post(*args: Any, **kwargs: Any) -> None:
async with HttpClient() as client:
return await client.post(*args, **kwargs)
async def fetch_nodeinfo(*args: Any, **kwargs: Any) -> Nodeinfo | None:
async with HttpClient() as client:
return await client.fetch_nodeinfo(*args, **kwargs)
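The module-level wrappers spin up a short-lived client per call; a sketch with invented URLs, assuming a configured Application is active since signing and caching reach it through get_app():

# Hypothetical usage sketch: fetching an actor and its instance's nodeinfo.
import asyncio
from relay import http_client as http

actor = asyncio.run(http.get('https://mastodon.example/actor', sign_headers = True))
nodeinfo = asyncio.run(http.fetch_nodeinfo('mastodon.example'))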

@@ -1,66 +0,0 @@
import logging
import aiohttp
import aiohttp.web
from collections import defaultdict
STATS = {
'requests': defaultdict(int),
'response_codes': defaultdict(int),
'response_codes_per_domain': defaultdict(lambda: defaultdict(int)),
'delivery_codes': defaultdict(int),
'delivery_codes_per_domain': defaultdict(lambda: defaultdict(int)),
'exceptions': defaultdict(int),
'exceptions_per_domain': defaultdict(lambda: defaultdict(int)),
'delivery_exceptions': defaultdict(int),
'delivery_exceptions_per_domain': defaultdict(lambda: defaultdict(int))
}
async def on_request_start(session, trace_config_ctx, params):
global STATS
logging.debug("HTTP START [%r], [%r]", session, params)
STATS['requests'][params.url.host] += 1
async def on_request_end(session, trace_config_ctx, params):
global STATS
logging.debug("HTTP END [%r], [%r]", session, params)
host = params.url.host
status = params.response.status
STATS['response_codes'][status] += 1
STATS['response_codes_per_domain'][host][status] += 1
if params.method == 'POST':
STATS['delivery_codes'][status] += 1
STATS['delivery_codes_per_domain'][host][status] += 1
async def on_request_exception(session, trace_config_ctx, params):
global STATS
logging.debug("HTTP EXCEPTION [%r], [%r]", session, params)
host = params.url.host
exception = repr(params.exception)
STATS['exceptions'][exception] += 1
STATS['exceptions_per_domain'][host][exception] += 1
if params.method == 'POST':
STATS['delivery_exceptions'][exception] += 1
STATS['delivery_exceptions_per_domain'][host][exception] += 1
def http_debug():
trace_config = aiohttp.TraceConfig()
trace_config.on_request_start.append(on_request_start)
trace_config.on_request_end.append(on_request_end)
trace_config.on_request_exception.append(on_request_exception)
return trace_config

@@ -1,148 +0,0 @@
import aiohttp
import aiohttp.web
import base64
import logging
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA, SHA256, SHA512
from Crypto.Signature import PKCS1_v1_5
from cachetools import LFUCache
from async_lru import alru_cache
from .remote_actor import fetch_actor
HASHES = {
'sha1': SHA,
'sha256': SHA256,
'sha512': SHA512
}
def split_signature(sig):
default = {"headers": "date"}
sig = sig.strip().split(',')
for chunk in sig:
k, _, v = chunk.partition('=')
v = v.strip('\"')
default[k] = v
default['headers'] = default['headers'].split()
return default
def build_signing_string(headers, used_headers):
return '\n'.join(map(lambda x: ': '.join([x.lower(), headers[x]]), used_headers))
SIGSTRING_CACHE = LFUCache(1024)
def sign_signing_string(sigstring, key):
if sigstring in SIGSTRING_CACHE:
return SIGSTRING_CACHE[sigstring]
pkcs = PKCS1_v1_5.new(key)
h = SHA256.new()
h.update(sigstring.encode('ascii'))
sigdata = pkcs.sign(h)
sigdata = base64.b64encode(sigdata)
SIGSTRING_CACHE[sigstring] = sigdata.decode('ascii')
return SIGSTRING_CACHE[sigstring]
def generate_body_digest(body):
bodyhash = SIGSTRING_CACHE.get(body)
if not bodyhash:
h = SHA256.new(body.encode('utf-8'))
bodyhash = base64.b64encode(h.digest()).decode('utf-8')
SIGSTRING_CACHE[body] = bodyhash
return bodyhash
def sign_headers(headers, key, key_id):
headers = {x.lower(): y for x, y in headers.items()}
used_headers = headers.keys()
sig = {
'keyId': key_id,
'algorithm': 'rsa-sha256',
'headers': ' '.join(used_headers)
}
sigstring = build_signing_string(headers, used_headers)
sig['signature'] = sign_signing_string(sigstring, key)
chunks = ['{}="{}"'.format(k, v) for k, v in sig.items()]
return ','.join(chunks)
@alru_cache(maxsize=16384)
async def fetch_actor_key(actor):
actor_data = await fetch_actor(actor)
if not actor_data:
return None
try:
return RSA.importKey(actor_data['publicKey']['publicKeyPem'])
except Exception as e:
logging.debug(f'Exception occured while fetching actor key: {e}')
async def validate(actor, request):
pubkey = await fetch_actor_key(actor)
if not pubkey:
return False
logging.debug('actor key: %r', pubkey)
headers = request.headers.copy()
headers['(request-target)'] = ' '.join([request.method.lower(), request.path])
sig = split_signature(headers['signature'])
logging.debug('sigdata: %r', sig)
sigstring = build_signing_string(headers, sig['headers'])
logging.debug('sigstring: %r', sigstring)
sign_alg, _, hash_alg = sig['algorithm'].partition('-')
logging.debug('sign alg: %r, hash alg: %r', sign_alg, hash_alg)
sigdata = base64.b64decode(sig['signature'])
pkcs = PKCS1_v1_5.new(pubkey)
h = HASHES[hash_alg].new()
h.update(sigstring.encode('ascii'))
result = pkcs.verify(h, sigdata)
request['validated'] = result
logging.debug('validates? %r', result)
return result
async def http_signatures_middleware(app, handler):
async def http_signatures_handler(request):
request['validated'] = False
if 'signature' in request.headers and request.method == 'POST':
data = await request.json()
if 'actor' not in data:
raise aiohttp.web.HTTPUnauthorized(body='signature check failed, no actor in message')
actor = data["actor"]
if not (await validate(actor, request)):
logging.info('Signature validation failed for: %r', actor)
raise aiohttp.web.HTTPUnauthorized(body='signature check failed, signature did not match key')
return (await handler(request))
return (await handler(request))
return http_signatures_handler

@@ -1,11 +0,0 @@
import aiohttp.web
from . import app
from .http_debug import STATS
async def stats(request):
return aiohttp.web.json_response(STATS)
app.router.add_get('/stats', stats)

relay/logger.py Normal file

@@ -0,0 +1,92 @@
from __future__ import annotations
import logging
import os
import typing
from enum import IntEnum
from pathlib import Path
if typing.TYPE_CHECKING:
from collections.abc import Callable
from typing import Any
class LogLevel(IntEnum):
DEBUG = logging.DEBUG
VERBOSE = 15
INFO = logging.INFO
WARNING = logging.WARNING
ERROR = logging.ERROR
CRITICAL = logging.CRITICAL
def __str__(self) -> str:
return self.name
@classmethod
def parse(cls: type[LogLevel], data: object) -> LogLevel:
if isinstance(data, cls):
return data
if isinstance(data, str):
data = data.upper()
try:
return cls[data]
except KeyError:
pass
try:
return cls(data)
except ValueError:
pass
raise AttributeError(f'Invalid enum property for {cls.__name__}: {data}')
def get_level() -> LogLevel:
return LogLevel.parse(logging.root.level)
def set_level(level: LogLevel | str) -> None:
logging.root.setLevel(LogLevel.parse(level))
def verbose(message: str, *args: Any, **kwargs: Any) -> None:
if not logging.root.isEnabledFor(LogLevel['VERBOSE']):
return
logging.log(LogLevel['VERBOSE'], message, *args, **kwargs)
debug: Callable = logging.debug
info: Callable = logging.info
warning: Callable = logging.warning
error: Callable = logging.error
critical: Callable = logging.critical
env_log_level = os.environ.get('LOG_LEVEL', 'INFO').upper()
try:
env_log_file = Path(os.environ['LOG_FILE']).expanduser().resolve()
except KeyError:
env_log_file = None
handlers = [logging.StreamHandler()]
if env_log_file:
handlers.append(logging.FileHandler(env_log_file))
logging.addLevelName(LogLevel['VERBOSE'], 'VERBOSE')
logging.basicConfig(
	level = LogLevel.parse(env_log_level),
	format = '[%(asctime)s] %(levelname)s: %(message)s',
	datefmt = '%Y-%m-%d %H:%M:%S',
	handlers = handlers
)
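A short sketch of the resulting API; VERBOSE (15) sits between DEBUG and INFO, and parse() accepts either names or numeric levels (the URL is invented):

# Hypothetical usage sketch: driving the custom log level helpers.
from relay import logger as logging

logging.set_level('verbose')
logging.verbose('Fetching %s', 'https://example.com/actor')
print(logging.get_level())  # -> VERBOSE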

@@ -1,8 -0,0 @@
import logging
logging.basicConfig(
level=logging.INFO,
format="[%(asctime)s] %(levelname)s: %(message)s",
handlers=[logging.StreamHandler()]
)

@@ -1,83 +1,806 @@
from __future__ import annotations
import Crypto
import asyncio
import sys
import simplejson as json
import click
import platform
import typing
from .actor import follow_remote_actor, unfollow_remote_actor
from . import CONFIG
from .database import DATABASE
from aputils.signer import Signer
from pathlib import Path
from shutil import copyfile
from urllib.parse import urlparse
from . import __version__
from . import http_client as http
from . import logger as logging
from .application import Application
from .compat import RelayConfig, RelayDatabase
from .database import get_database
from .database.connection import RELAY_SOFTWARE
from .misc import IS_DOCKER, Message
if typing.TYPE_CHECKING:
from tinysql import Row
from typing import Any
def relay_list():
print('Connected to the following instances or relays:')
[print('-', relay) for relay in DATABASE['relay-list']]
# pylint: disable=unsubscriptable-object,unsupported-assignment-operation
def relay_follow():
if len(sys.argv) < 3:
print('usage: python3 -m relay.manage follow <target>')
exit()
	target = sys.argv[2]
	loop = asyncio.get_event_loop()
	loop.run_until_complete(follow_remote_actor(target))
	print('Sent follow message to:', target)
CONFIG_IGNORE = (
	'schema-version',
	'private-key'
)
def relay_unfollow():
if len(sys.argv) < 3:
print('usage: python3 -m relay.manage unfollow <target>')
exit()
target = sys.argv[2]
loop = asyncio.get_event_loop()
loop.run_until_complete(unfollow_remote_actor(target))
print('Sent unfollow message to:', target)
def relay_forceremove():
if len(sys.argv) < 3:
print('usage: python3 -m relay.manage force-remove <target>')
exit()
target = sys.argv[2]
following = DATABASE.get('relay-list', [])
if target in following:
following.remove(target)
DATABASE['relay-list'] = following
with open('relay.jsonld', 'w') as f:
json.dump(DATABASE, f)
print('Removed target from DB:', target)
TASKS = {
	'list': relay_list,
	'follow': relay_follow,
	'unfollow': relay_unfollow,
	'force-remove': relay_forceremove
}
ACTOR_FORMATS = {
	'mastodon': 'https://{domain}/actor',
	'akkoma': 'https://{domain}/relay',
	'pleroma': 'https://{domain}/relay'
}
def usage():
print('usage: python3 -m relay.manage <task> [...]')
print('tasks:')
[print('-', task) for task in TASKS.keys()]
exit()
SOFTWARE = (
'mastodon',
'akkoma',
'pleroma',
'misskey',
'friendica',
'hubzilla',
'firefish',
'gotosocial'
)
def main():
	if len(sys.argv) < 2:
		usage()
	if sys.argv[1] in TASKS:
		TASKS[sys.argv[1]]()
	else:
		usage()
def check_alphanumeric(text: str) -> str:
	if not text.isalnum():
		raise click.BadParameter('String not alphanumeric')
	return text
@click.group('cli', context_settings={'show_default': True}, invoke_without_command=True)
@click.option('--config', '-c', default='relay.yaml', help='path to the relay\'s config')
@click.version_option(version=__version__, prog_name='ActivityRelay')
@click.pass_context
def cli(ctx: click.Context, config: str) -> None:
ctx.obj = Application(config)
if not ctx.invoked_subcommand:
if ctx.obj.config.domain.endswith('example.com'):
cli_setup.callback()
else:
click.echo(
'[DEPRECATED] Running the relay without the "run" command will be removed in the ' +
'future.'
)
cli_run.callback()
@cli.command('setup')
@click.pass_context
def cli_setup(ctx: click.Context) -> None:
'Generate a new config and create the database'
while True:
ctx.obj.config.domain = click.prompt(
'What domain will the relay be hosted on?',
default = ctx.obj.config.domain
)
if not ctx.obj.config.domain.endswith('example.com'):
break
click.echo('The domain must not end with "example.com"')
if not IS_DOCKER:
ctx.obj.config.listen = click.prompt(
'Which address should the relay listen on?',
default = ctx.obj.config.listen
)
ctx.obj.config.port = click.prompt(
'What TCP port should the relay listen on?',
default = ctx.obj.config.port,
type = int
)
ctx.obj.config.db_type = click.prompt(
'Which database backend will be used?',
default = ctx.obj.config.db_type,
type = click.Choice(['postgres', 'sqlite'], case_sensitive = False)
)
if ctx.obj.config.db_type == 'sqlite':
ctx.obj.config.sq_path = click.prompt(
'Where should the database be stored?',
default = ctx.obj.config.sq_path
)
elif ctx.obj.config.db_type == 'postgres':
ctx.obj.config.pg_name = click.prompt(
'What is the name of the database?',
default = ctx.obj.config.pg_name
)
ctx.obj.config.pg_host = click.prompt(
	'What IP address, hostname, or unix socket does the server listen on?',
	default = ctx.obj.config.pg_host
)
ctx.obj.config.pg_port = click.prompt(
'What port does the server listen on?',
default = ctx.obj.config.pg_port,
type = int
)
ctx.obj.config.pg_user = click.prompt(
'Which user will authenticate with the server?',
default = ctx.obj.config.pg_user
)
ctx.obj.config.pg_pass = click.prompt(
'User password',
hide_input = True,
show_default = False,
default = ctx.obj.config.pg_pass or ""
) or None
ctx.obj.config.ca_type = click.prompt(
'Which caching backend?',
default = ctx.obj.config.ca_type,
type = click.Choice(['database', 'redis'], case_sensitive = False)
)
if ctx.obj.config.ca_type == 'redis':
ctx.obj.config.rd_host = click.prompt(
'What IP address, hostname, or unix socket does the server listen on?',
default = ctx.obj.config.rd_host
)
ctx.obj.config.rd_port = click.prompt(
'What port does the server listen on?',
default = ctx.obj.config.rd_port,
type = int
)
ctx.obj.config.rd_user = click.prompt(
'Which user will authenticate with the server?',
default = ctx.obj.config.rd_user
)
ctx.obj.config.rd_pass = click.prompt(
'User password',
hide_input = True,
show_default = False,
default = ctx.obj.config.rd_pass or ""
) or None
ctx.obj.config.rd_database = click.prompt(
'Which database number to use?',
default = ctx.obj.config.rd_database,
type = int
)
ctx.obj.config.rd_prefix = click.prompt(
'What text should each cache key be prefixed with?',
default = ctx.obj.config.rd_prefix,
type = check_alphanumeric
)
ctx.obj.config.save()
config = {
'private-key': Signer.new('n/a').export()
}
with ctx.obj.database.connection() as conn:
for key, value in config.items():
conn.put_config(key, value)
if not IS_DOCKER and click.confirm('Relay is all set up! Would you like to run it now?'):
cli_run.callback()
@cli.command('run')
@click.option('--dev', '-d', is_flag = True, help = 'Enable worker reloading on code change')
@click.pass_context
def cli_run(ctx: click.Context, dev: bool = False) -> None:
'Run the relay'
if ctx.obj.config.domain.endswith('example.com') or not ctx.obj.signer:
click.echo(
'Relay is not set up. Please edit your relay config or run "activityrelay setup".'
)
return
vers_split = platform.python_version().split('.')
pip_command = 'pip3 uninstall pycrypto && pip3 install pycryptodome'
if Crypto.__version__ == '2.6.1':
if int(vers_split[1]) > 7:
click.echo(
'Error: PyCrypto is broken on Python 3.8+. Please replace it with pycryptodome ' +
'before running again. Exiting...'
)
click.echo(pip_command)
return
click.echo('Warning: PyCrypto is old and should be replaced with pycryptodome')
click.echo(pip_command)
return
ctx.obj.run(dev)
@cli.command('convert')
@click.option('--old-config', '-o', help = 'Path to the config file to convert from')
@click.pass_context
def cli_convert(ctx: click.Context, old_config: str) -> None:
'Convert an old config and jsonld database to the new format.'
old_config = Path(old_config).expanduser().resolve() if old_config else ctx.obj.config.path
backup = ctx.obj.config.path.parent.joinpath(f'{ctx.obj.config.path.stem}.backup.yaml')
if str(old_config) == str(ctx.obj.config.path) and not backup.exists():
logging.info('Created backup config @ %s', backup)
copyfile(ctx.obj.config.path, backup)
config = RelayConfig(old_config)
config.load()
database = RelayDatabase(config)
database.load()
ctx.obj.config.set('listen', config['listen'])
ctx.obj.config.set('port', config['port'])
ctx.obj.config.set('workers', config['workers'])
ctx.obj.config.set('sq_path', config['db'].replace('jsonld', 'sqlite3'))
ctx.obj.config.set('domain', config['host'])
ctx.obj.config.save()
with get_database(ctx.obj.config) as db:
with db.connection() as conn:
conn.put_config('private-key', database['private-key'])
conn.put_config('note', config['note'])
conn.put_config('whitelist-enabled', config['whitelist_enabled'])
with click.progressbar(
database['relay-list'].values(),
label = 'Inboxes'.ljust(15),
width = 0
) as inboxes:
for inbox in inboxes:
if inbox['software'] in {'akkoma', 'pleroma'}:
actor = f'https://{inbox["domain"]}/relay'
elif inbox['software'] == 'mastodon':
actor = f'https://{inbox["domain"]}/actor'
else:
actor = None
conn.put_inbox(
inbox['domain'],
inbox['inbox'],
actor = actor,
followid = inbox['followid'],
software = inbox['software']
)
with click.progressbar(
config['blocked_software'],
label = 'Banned software'.ljust(15),
width = 0
) as banned_software:
for software in banned_software:
conn.put_software_ban(
software,
reason = 'relay' if software in RELAY_SOFTWARE else None
)
with click.progressbar(
config['blocked_instances'],
label = 'Banned domains'.ljust(15),
width = 0
	) as banned_domains:
		for domain in banned_domains:
conn.put_domain_ban(domain)
with click.progressbar(
config['whitelist'],
label = 'Whitelist'.ljust(15),
width = 0
) as whitelist:
for instance in whitelist:
conn.put_domain_whitelist(instance)
click.echo('Finished converting old config and database :3')
@cli.command('edit-config')
@click.option('--editor', '-e', help = 'Text editor to use')
@click.pass_context
def cli_editconfig(ctx: click.Context, editor: str) -> None:
'Edit the config file'
click.edit(
editor = editor,
filename = str(ctx.obj.config.path)
)
@cli.group('config')
def cli_config() -> None:
'Manage the relay settings stored in the database'
@cli_config.command('list')
@click.pass_context
def cli_config_list(ctx: click.Context) -> None:
'List the current relay config'
click.echo('Relay Config:')
with ctx.obj.database.connection() as conn:
for key, value in conn.get_config_all().items():
if key not in CONFIG_IGNORE:
key = f'{key}:'.ljust(20)
click.echo(f'- {key} {value}')
@cli_config.command('set')
@click.argument('key')
@click.argument('value')
@click.pass_context
def cli_config_set(ctx: click.Context, key: str, value: Any) -> None:
'Set a config value'
with ctx.obj.database.connection() as conn:
new_value = conn.put_config(key, value)
print(f'{key}: {repr(new_value)}')
@cli.group('inbox')
def cli_inbox() -> None:
'Manage the inboxes in the database'
@cli_inbox.command('list')
@click.pass_context
def cli_inbox_list(ctx: click.Context) -> None:
'List the connected instances or relays'
click.echo('Connected to the following instances or relays:')
with ctx.obj.database.connection() as conn:
for inbox in conn.execute('SELECT * FROM inboxes'):
click.echo(f'- {inbox["inbox"]}')
@cli_inbox.command('follow')
@click.argument('actor')
@click.pass_context
def cli_inbox_follow(ctx: click.Context, actor: str) -> None:
'Follow an actor (Relay must be running)'
with ctx.obj.database.connection() as conn:
if conn.get_domain_ban(actor):
click.echo(f'Error: Refusing to follow banned actor: {actor}')
return
if (inbox_data := conn.get_inbox(actor)):
inbox = inbox_data['inbox']
else:
if not actor.startswith('http'):
actor = f'https://{actor}/actor'
if not (actor_data := asyncio.run(http.get(actor, sign_headers = True))):
click.echo(f'Failed to fetch actor: {actor}')
return
inbox = actor_data.shared_inbox
message = Message.new_follow(
host = ctx.obj.config.domain,
actor = actor
)
asyncio.run(http.post(inbox, message, inbox_data))
click.echo(f'Sent follow message to actor: {actor}')
@cli_inbox.command('unfollow')
@click.argument('actor')
@click.pass_context
def cli_inbox_unfollow(ctx: click.Context, actor: str) -> None:
'Unfollow an actor (Relay must be running)'
inbox_data: Row | None = None
with ctx.obj.database.connection() as conn:
if conn.get_domain_ban(actor):
click.echo(f'Error: Refusing to unfollow banned actor: {actor}')
return
if (inbox_data := conn.get_inbox(actor)):
inbox = inbox_data['inbox']
message = Message.new_unfollow(
host = ctx.obj.config.domain,
actor = actor,
follow = inbox_data['followid']
)
else:
if not actor.startswith('http'):
actor = f'https://{actor}/actor'
actor_data = asyncio.run(http.get(actor, sign_headers = True))
inbox = actor_data.shared_inbox
message = Message.new_unfollow(
host = ctx.obj.config.domain,
actor = actor,
follow = {
'type': 'Follow',
'object': actor,
'actor': f'https://{ctx.obj.config.domain}/actor'
}
)
asyncio.run(http.post(inbox, message, inbox_data))
click.echo(f'Sent unfollow message to: {actor}')
@cli_inbox.command('add')
@click.argument('inbox')
@click.option('--actor', '-a', help = 'Actor url for the inbox')
@click.option('--followid', '-f', help = 'Url for the follow activity')
@click.option('--software', '-s',
type = click.Choice(SOFTWARE),
help = 'Nodeinfo software name of the instance'
) # noqa: E124
@click.pass_context
def cli_inbox_add(
ctx: click.Context,
inbox: str,
actor: str | None = None,
followid: str | None = None,
software: str | None = None) -> None:
'Add an inbox to the database'
if not inbox.startswith('http'):
domain = inbox
inbox = f'https://{inbox}/inbox'
else:
domain = urlparse(inbox).netloc
if not software:
if (nodeinfo := asyncio.run(http.fetch_nodeinfo(domain))):
software = nodeinfo.sw_name
if not actor and software:
try:
actor = ACTOR_FORMATS[software].format(domain = domain)
except KeyError:
pass
with ctx.obj.database.connection() as conn:
if conn.get_domain_ban(domain):
click.echo(f'Refusing to add banned inbox: {inbox}')
return
if conn.get_inbox(inbox):
click.echo(f'Error: Inbox already in database: {inbox}')
return
conn.put_inbox(domain, inbox, actor, followid, software)
click.echo(f'Added inbox to the database: {inbox}')
@cli_inbox.command('remove')
@click.argument('inbox')
@click.pass_context
def cli_inbox_remove(ctx: click.Context, inbox: str) -> None:
'Remove an inbox from the database'
with ctx.obj.database.connection() as conn:
if not conn.del_inbox(inbox):
click.echo(f'Inbox not in database: {inbox}')
return
click.echo(f'Removed inbox from the database: {inbox}')
@cli.group('instance')
def cli_instance() -> None:
'Manage instance bans'
@cli_instance.command('list')
@click.pass_context
def cli_instance_list(ctx: click.Context) -> None:
'List all banned instances'
click.echo('Banned domains:')
with ctx.obj.database.connection() as conn:
for instance in conn.execute('SELECT * FROM domain_bans'):
if instance['reason']:
click.echo(f'- {instance["domain"]} ({instance["reason"]})')
else:
click.echo(f'- {instance["domain"]}')
@cli_instance.command('ban')
@click.argument('domain')
@click.option('--reason', '-r', help = 'Public note about why the domain is banned')
@click.option('--note', '-n', help = 'Internal note that will only be seen by admins and mods')
@click.pass_context
def cli_instance_ban(ctx: click.Context, domain: str, reason: str, note: str) -> None:
'Ban an instance and remove the associated inbox if it exists'
with ctx.obj.database.connection() as conn:
if conn.get_domain_ban(domain):
click.echo(f'Domain already banned: {domain}')
return
conn.put_domain_ban(domain, reason, note)
conn.del_inbox(domain)
click.echo(f'Banned instance: {domain}')
@cli_instance.command('unban')
@click.argument('domain')
@click.pass_context
def cli_instance_unban(ctx: click.Context, domain: str) -> None:
'Unban an instance'
with ctx.obj.database.connection() as conn:
if not conn.del_domain_ban(domain):
click.echo(f'Instance wasn\'t banned: {domain}')
return
click.echo(f'Unbanned instance: {domain}')
@cli_instance.command('update')
@click.argument('domain')
@click.option('--reason', '-r')
@click.option('--note', '-n')
@click.pass_context
def cli_instance_update(ctx: click.Context, domain: str, reason: str, note: str) -> None:
'Update the public reason or internal note for a domain ban'
if not (reason or note):
ctx.fail('Must pass --reason or --note')
with ctx.obj.database.connection() as conn:
if not (row := conn.update_domain_ban(domain, reason, note)):
click.echo(f'Failed to update domain ban: {domain}')
return
click.echo(f'Updated domain ban: {domain}')
if row['reason']:
click.echo(f'- {row["domain"]} ({row["reason"]})')
else:
click.echo(f'- {row["domain"]}')
@cli.group('software')
def cli_software() -> None:
'Manage banned software'
@cli_software.command('list')
@click.pass_context
def cli_software_list(ctx: click.Context) -> None:
'List all banned software'
click.echo('Banned software:')
with ctx.obj.database.connection() as conn:
for software in conn.execute('SELECT * FROM software_bans'):
if software['reason']:
click.echo(f'- {software["name"]} ({software["reason"]})')
else:
click.echo(f'- {software["name"]}')
@cli_software.command('ban')
@click.argument('name')
@click.option('--reason', '-r')
@click.option('--note', '-n')
@click.option(
'--fetch-nodeinfo', '-f',
is_flag = True,
help = 'Treat NAME like a domain and try to fetch the software name from nodeinfo'
)
@click.pass_context
def cli_software_ban(ctx: click.Context,
name: str,
reason: str,
note: str,
fetch_nodeinfo: bool) -> None:
'Ban software. Use RELAYS for NAME to ban relays'
with ctx.obj.database.connection() as conn:
if name == 'RELAYS':
for software in RELAY_SOFTWARE:
if conn.get_software_ban(software):
click.echo(f'Relay already banned: {software}')
continue
conn.put_software_ban(software, reason or 'relay', note)
click.echo('Banned all relay software')
return
if fetch_nodeinfo:
if not (nodeinfo := asyncio.run(http.fetch_nodeinfo(name))):
click.echo(f'Failed to fetch software name from domain: {name}')
return
name = nodeinfo.sw_name
if conn.get_software_ban(name):
click.echo(f'Software already banned: {name}')
return
if not conn.put_software_ban(name, reason, note):
click.echo(f'Failed to ban software: {name}')
return
click.echo(f'Banned software: {name}')
@cli_software.command('unban')
@click.argument('name')
@click.option(
'--fetch-nodeinfo', '-f',
is_flag = True,
help = 'Treat NAME like a domain and try to fetch the software name from nodeinfo'
)
@click.pass_context
def cli_software_unban(ctx: click.Context, name: str, fetch_nodeinfo: bool) -> None:
'Unban software. Use RELAYS for NAME to unban relays'
with ctx.obj.database.connection() as conn:
if name == 'RELAYS':
for software in RELAY_SOFTWARE:
if not conn.del_software_ban(software):
click.echo(f'Relay was not banned: {software}')
click.echo('Unbanned all relay software')
return
if fetch_nodeinfo:
if not (nodeinfo := asyncio.run(http.fetch_nodeinfo(name))):
click.echo(f'Failed to fetch software name from domain: {name}')
return
name = nodeinfo.sw_name
if not conn.del_software_ban(name):
click.echo(f'Software was not banned: {name}')
return
click.echo(f'Unbanned software: {name}')
@cli_software.command('update')
@click.argument('name')
@click.option('--reason', '-r')
@click.option('--note', '-n')
@click.pass_context
def cli_software_update(ctx: click.Context, name: str, reason: str, note: str) -> None:
'Update the public reason or internal note for a software ban'
if not (reason or note):
ctx.fail('Must pass --reason or --note')
with ctx.obj.database.connection() as conn:
if not (row := conn.update_software_ban(name, reason, note)):
click.echo(f'Failed to update software ban: {name}')
return
click.echo(f'Updated software ban: {name}')
if row['reason']:
click.echo(f'- {row["name"]} ({row["reason"]})')
else:
click.echo(f'- {row["name"]}')
@cli.group('whitelist')
def cli_whitelist() -> None:
'Manage the instance whitelist'
@cli_whitelist.command('list')
@click.pass_context
def cli_whitelist_list(ctx: click.Context) -> None:
'List all the instances in the whitelist'
click.echo('Current whitelisted domains:')
with ctx.obj.database.connection() as conn:
for domain in conn.execute('SELECT * FROM whitelist'):
click.echo(f'- {domain["domain"]}')
@cli_whitelist.command('add')
@click.argument('domain')
@click.pass_context
def cli_whitelist_add(ctx: click.Context, domain: str) -> None:
'Add a domain to the whitelist'
with ctx.obj.database.connection() as conn:
if conn.get_domain_whitelist(domain):
click.echo(f'Instance already in the whitelist: {domain}')
return
conn.put_domain_whitelist(domain)
click.echo(f'Instance added to the whitelist: {domain}')
@cli_whitelist.command('remove')
@click.argument('domain')
@click.pass_context
def cli_whitelist_remove(ctx: click.Context, domain: str) -> None:
'Remove an instance from the whitelist'
with ctx.obj.database.connection() as conn:
if not conn.del_domain_whitelist(domain):
click.echo(f'Domain not in the whitelist: {domain}')
return
if conn.get_config('whitelist-enabled'):
if conn.del_inbox(domain):
click.echo(f'Removed inbox for domain: {domain}')
click.echo(f'Removed domain from the whitelist: {domain}')
@cli_whitelist.command('import')
@click.pass_context
def cli_whitelist_import(ctx: click.Context) -> None:
'Add all current inboxes to the whitelist'
with ctx.obj.database.connection() as conn:
for inbox in conn.execute('SELECT * FROM inboxes').all():
if conn.get_domain_whitelist(inbox['domain']):
click.echo(f'Domain already in whitelist: {inbox["domain"]}')
continue
conn.put_domain_whitelist(inbox['domain'])
click.echo('Imported whitelist from inboxes')
def main() -> None:
# pylint: disable=no-value-for-parameter
cli(prog_name='relay')
if __name__ == '__main__':
main()
click.echo('Running relay.manage is deprecated. Run `activityrelay [command]` instead.')
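For a quick sanity check of the ban/update commands above, a minimal sketch using click's test runner; it assumes an initialized relay config and database so the group callback can populate ctx.obj, and the expected strings follow the click.echo calls above:

    from click.testing import CliRunner
    from relay.manage import cli

    runner = CliRunner()

    # ban a domain with a public reason, then amend the internal note
    result = runner.invoke(cli, ['instance', 'ban', 'example.com', '-r', 'spam'])
    print(result.output)  # "Banned instance: example.com"

    result = runner.invoke(cli, ['instance', 'update', 'example.com', '-n', 'second report'])
    print(result.output)  # "Updated domain ban: example.com"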

298
relay/misc.py Normal file
View file

@@ -0,0 +1,298 @@
from __future__ import annotations
import json
import os
import socket
import typing
from aiohttp.abc import AbstractView
from aiohttp.hdrs import METH_ALL as METHODS
from aiohttp.web import Response as AiohttpResponse
from aiohttp.web_exceptions import HTTPMethodNotAllowed
from aputils.message import Message as ApMessage
from functools import cached_property
from uuid import uuid4
if typing.TYPE_CHECKING:
from collections.abc import Awaitable, Coroutine, Generator
from tinysql import Connection
from typing import Any
from .application import Application
from .cache import Cache
from .config import Config
from .database import Database
from .http_client import HttpClient
IS_DOCKER = bool(os.environ.get('DOCKER_RUNNING'))
MIMETYPES = {
'activity': 'application/activity+json',
'html': 'text/html',
'json': 'application/json',
'text': 'text/plain'
}
NODEINFO_NS = {
'20': 'http://nodeinfo.diaspora.software/ns/schema/2.0',
'21': 'http://nodeinfo.diaspora.software/ns/schema/2.1'
}
def boolean(value: Any) -> bool:
if isinstance(value, str):
if value.lower() in {'on', 'y', 'yes', 'true', 'enable', 'enabled', '1'}:
return True
if value.lower() in {'off', 'n', 'no', 'false', 'disable', 'disabled', '0'}:
return False
raise TypeError(f'Cannot parse string "{value}" as a boolean')
if isinstance(value, int):
if value == 1:
return True
if value == 0:
return False
raise ValueError('Integer value must be 1 or 0')
if value is None:
return False
return bool(value)
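# examples per the branches above:
#   boolean('yes') -> True      boolean('off') -> False
#   boolean(1) -> True          boolean(None) -> False
#   boolean('maybe') raises TypeError; boolean(2) raises ValueError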
def check_open_port(host: str, port: int) -> bool:
if host == '0.0.0.0':
host = '127.0.0.1'
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
try:
return s.connect_ex((host, port)) != 0
except socket.error:
return False
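# note: connect_ex() returns a nonzero errno when nothing is listening on
# host:port, so True means the port is free for the relay to bind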
def get_app() -> Application:
from .application import Application # pylint: disable=import-outside-toplevel
if not Application.DEFAULT:
raise ValueError('No default application set')
return Application.DEFAULT
class Message(ApMessage):
@classmethod
def new_actor(cls: type[Message], # pylint: disable=arguments-differ
host: str,
pubkey: str,
description: str | None = None) -> Message:
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'id': f'https://{host}/actor',
'type': 'Application',
'preferredUsername': 'relay',
'name': 'ActivityRelay',
'summary': description or 'ActivityRelay bot',
'followers': f'https://{host}/followers',
'following': f'https://{host}/following',
'inbox': f'https://{host}/inbox',
'url': f'https://{host}/',
'endpoints': {
'sharedInbox': f'https://{host}/inbox'
},
'publicKey': {
'id': f'https://{host}/actor#main-key',
'owner': f'https://{host}/actor',
'publicKeyPem': pubkey
}
})
@classmethod
def new_announce(cls: type[Message], host: str, obj: str) -> Message:
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'id': f'https://{host}/activities/{uuid4()}',
'type': 'Announce',
'to': [f'https://{host}/followers'],
'actor': f'https://{host}/actor',
'object': obj
})
@classmethod
def new_follow(cls: type[Message], host: str, actor: str) -> Message:
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'type': 'Follow',
'to': [actor],
'object': actor,
'id': f'https://{host}/activities/{uuid4()}',
'actor': f'https://{host}/actor'
})
@classmethod
def new_unfollow(cls: type[Message], host: str, actor: str, follow: str) -> Message:
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'id': f'https://{host}/activities/{uuid4()}',
'type': 'Undo',
'to': [actor],
'actor': f'https://{host}/actor',
'object': follow
})
@classmethod
def new_response(cls: type[Message],
host: str,
actor: str,
followid: str,
accept: bool) -> Message:
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'id': f'https://{host}/activities/{uuid4()}',
'type': 'Accept' if accept else 'Reject',
'to': [actor],
'actor': f'https://{host}/actor',
'object': {
'id': followid,
'type': 'Follow',
'object': f'https://{host}/actor',
'actor': actor
}
})
# todo: remove when fixed in aputils
@property
def object_id(self) -> str:
try:
return self["object"]["id"]
except (KeyError, TypeError):
return self["object"]
class Response(AiohttpResponse):
# AiohttpResponse.__len__ method returns 0, so bool(response) always returns False
def __bool__(self) -> bool:
return True
@classmethod
def new(cls: type[Response],
body: str | bytes | dict = '',
status: int = 200,
headers: dict[str, str] | None = None,
ctype: str = 'text') -> Response:
kwargs = {
'status': status,
'headers': headers,
'content_type': MIMETYPES[ctype]
}
if isinstance(body, bytes):
kwargs['body'] = body
elif isinstance(body, dict) and ctype in {'json', 'activity'}:
kwargs['text'] = json.dumps(body)
else:
kwargs['text'] = body
return cls(**kwargs)
@classmethod
def new_error(cls: type[Response],
status: int,
body: str | bytes | dict,
ctype: str = 'text') -> Response:
if ctype == 'json':
body = json.dumps({'status': status, 'error': body})
return cls.new(body=body, status=status, ctype=ctype)
@property
def location(self) -> str:
return self.headers.get('Location')
@location.setter
def location(self, value: str) -> None:
self.headers['Location'] = value
class View(AbstractView):
def __await__(self) -> Generator[Response]:
if self.request.method not in METHODS:
raise HTTPMethodNotAllowed(self.request.method, self.allowed_methods)
if not (handler := self.handlers.get(self.request.method)):
raise HTTPMethodNotAllowed(self.request.method, self.allowed_methods) from None
return self._run_handler(handler).__await__()
async def _run_handler(self, handler: Awaitable) -> Response:
with self.database.config.connection_class(self.database) as conn:
# todo: remove on next tinysql release
conn.open()
return await handler(self.request, conn, **self.request.match_info)
@cached_property
def allowed_methods(self) -> tuple[str, ...]:
return tuple(self.handlers.keys())
@cached_property
def handlers(self) -> dict[str, Coroutine]:
data = {}
for method in METHODS:
try:
data[method] = getattr(self, method.lower())
except AttributeError:
continue
return data
# app components
@property
def app(self) -> Application:
return self.request.app
@property
def cache(self) -> Cache:
return self.app.cache
@property
def client(self) -> HttpClient:
return self.app.client
@property
def config(self) -> Config:
return self.app.config
@property
def database(self) -> Database:
return self.app.database
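Taken together, the helpers above compose like this; a minimal sketch with a hypothetical domain and object id, assuming Message behaves as a dict subclass so Response.new can JSON-serialize it:

    from relay.misc import Message, Response

    announce = Message.new_announce(
        host = 'relay.example.com',             # hypothetical relay domain
        obj = 'https://social.example/notes/1'  # hypothetical object id
    )

    resp = Response.new(announce, ctype = 'activity')  # serialized as application/activity+json
    assert bool(resp)  # the __bool__ override above makes responses always truthy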

View file

@@ -1,67 +0,0 @@
import subprocess
import urllib.parse
import aiohttp.web
from . import app
from .database import DATABASE
try:
commit_label = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip().decode('ascii')
except:
commit_label = '???'
nodeinfo_template = {
# XXX - is this valid for a relay?
'openRegistrations': True,
'protocols': ['activitypub'],
'services': {
'inbound': [],
'outbound': []
},
'software': {
'name': 'activityrelay',
'version': '0.1 {}'.format(commit_label)
},
'usage': {
'localPosts': 0,
'users': {
'total': 1
}
},
'version': '2.0'
}
def get_peers():
global DATABASE
return [urllib.parse.urlsplit(inbox).hostname for inbox in DATABASE.get('relay-list', [])]
async def nodeinfo_2_0(request):
data = nodeinfo_template.copy()
data['metadata'] = {
'peers': get_peers()
}
return aiohttp.web.json_response(data)
app.router.add_get('/nodeinfo/2.0.json', nodeinfo_2_0)
async def nodeinfo_wellknown(request):
data = {
'links': [
{
'rel': 'http://nodeinfo.diaspora.software/ns/schema/2.0',
'href': 'https://{}/nodeinfo/2.0.json'.format(request.host)
}
]
}
return aiohttp.web.json_response(data)
app.router.add_get('/.well-known/nodeinfo', nodeinfo_wellknown)

199
relay/processors.py Normal file
View file

@@ -0,0 +1,199 @@
from __future__ import annotations
import typing
from . import logger as logging
from .database.connection import Connection
from .misc import Message
if typing.TYPE_CHECKING:
from .views import ActorView
def person_check(actor: Message, software: str | None) -> bool:
# pleroma and akkoma may use Person for the actor type for some reason
# akkoma changed this in 3.6.0
if software in {'akkoma', 'pleroma'} and actor.id == f'https://{actor.domain}/relay':
return False
# make sure the actor is an application
if actor.type != 'Application':
return True
return False
async def handle_relay(view: ActorView, conn: Connection) -> None:
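# the cache acts as a dedup set here: a hit means this object id was already
# relayed, while a KeyError means it is new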
try:
view.cache.get('handle-relay', view.message.object_id)
logging.verbose('already relayed %s', view.message.object_id)
return
except KeyError:
pass
message = Message.new_announce(view.config.domain, view.message.object_id)
logging.debug('>> relay: %s', message)
for inbox in conn.distill_inboxes(view.message):
view.app.push_message(inbox, message, view.instance)
view.cache.set('handle-relay', view.message.object_id, message.id, 'str')
async def handle_forward(view: ActorView, conn: Connection) -> None:
try:
view.cache.get('handle-relay', view.message.object_id)
logging.verbose('already forwarded %s', view.message.object_id)
return
except KeyError:
pass
message = Message.new_announce(view.config.domain, view.message)
logging.debug('>> forward: %s', message)
for inbox in conn.distill_inboxes(view.message):
view.app.push_message(inbox, message, view.instance)
view.cache.set('handle-relay', view.message.object_id, message.id, 'str')
async def handle_follow(view: ActorView, conn: Connection) -> None:
nodeinfo = await view.client.fetch_nodeinfo(view.actor.domain)
software = nodeinfo.sw_name if nodeinfo else None
# reject if software used by actor is banned
if conn.get_software_ban(software):
view.app.push_message(
view.actor.shared_inbox,
Message.new_response(
host = view.config.domain,
actor = view.actor.id,
followid = view.message.id,
accept = False
)
)
logging.verbose(
'Rejected follow from actor for using specific software: actor=%s, software=%s',
view.actor.id,
software
)
return
## reject if the actor is not an instance actor
if person_check(view.actor, software):
view.app.push_message(
view.actor.shared_inbox,
Message.new_response(
host = view.config.domain,
actor = view.actor.id,
followid = view.message.id,
accept = False
)
)
logging.verbose('Non-application actor tried to follow: %s', view.actor.id)
return
if conn.get_inbox(view.actor.shared_inbox):
view.instance = conn.update_inbox(view.actor.shared_inbox, followid = view.message.id)
else:
with conn.transaction():
view.instance = conn.put_inbox(
view.actor.domain,
view.actor.shared_inbox,
view.actor.id,
view.message.id,
software
)
view.app.push_message(
view.actor.shared_inbox,
Message.new_response(
host = view.config.domain,
actor = view.actor.id,
followid = view.message.id,
accept = True
),
view.instance
)
# Are Akkoma and Pleroma the only two that expect a follow back?
# Ignoring only Mastodon for now
if software != 'mastodon':
view.app.push_message(
view.actor.shared_inbox,
Message.new_follow(
host = view.config.domain,
actor = view.actor.id
),
view.instance
)
async def handle_undo(view: ActorView, conn: Connection) -> None:
## If the object is not a Follow, forward it
if view.message.object['type'] != 'Follow':
await handle_forward(view, conn)
return
with conn.transaction():
if not conn.del_inbox(view.actor.id):
logging.verbose(
'Failed to delete "%s" with follow ID "%s"',
view.actor.id,
view.message.object['id']
)
view.app.push_message(
view.actor.shared_inbox,
Message.new_unfollow(
host = view.config.domain,
actor = view.actor.id,
follow = view.message
),
view.instance
)
processors = {
'Announce': handle_relay,
'Create': handle_relay,
'Delete': handle_forward,
'Follow': handle_follow,
'Undo': handle_undo,
'Update': handle_forward,
}
async def run_processor(view: ActorView, conn: Connection) -> None:
if view.message.type not in processors:
logging.verbose(
'Message type "%s" from actor cannot be handled: %s',
view.message.type,
view.actor.id
)
return
if view.instance:
with conn.transaction():
if not view.instance['software']:
if (nodeinfo := await view.client.fetch_nodeinfo(view.instance['domain'])):
view.instance = conn.update_inbox(
view.instance['inbox'],
software = nodeinfo.sw_name
)
if not view.instance['actor']:
view.instance = conn.update_inbox(
view.instance['inbox'],
actor = view.actor.id
)
logging.verbose('New "%s" from actor: %s', view.message.type, view.actor.id)
await processors[view.message.type](view, conn)
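For illustration, the same type-to-handler dispatch used by processors/run_processor above, reduced to a self-contained sketch with stub handlers:

    import asyncio

    async def stub_relay(msg: dict) -> None:
        print('relaying', msg['id'])

    async def stub_forward(msg: dict) -> None:
        print('forwarding', msg['id'])

    stub_processors = {
        'Announce': stub_relay,
        'Create': stub_relay,
        'Delete': stub_forward
    }

    async def stub_run(msg: dict) -> None:
        # unhandled types are logged and dropped, mirroring run_processor
        if (handler := stub_processors.get(msg['type'])) is None:
            print('cannot handle type:', msg['type'])
            return
        await handler(msg)

    asyncio.run(stub_run({'type': 'Create', 'id': 'https://social.example/notes/1'}))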

View file

@@ -1,56 +0,0 @@
import logging
import aiohttp
from cachetools import TTLCache
from datetime import datetime
from urllib.parse import urlsplit
from . import CONFIG
from .http_debug import http_debug
CACHE_SIZE = CONFIG.get('cache-size', 16384)
CACHE_TTL = CONFIG.get('cache-ttl', 3600)
ACTORS = TTLCache(CACHE_SIZE, CACHE_TTL)
async def fetch_actor(uri, headers={}, force=False, sign_headers=True):
if uri in ACTORS and not force:
return ACTORS[uri]
from .actor import PRIVKEY
from .http_signatures import sign_headers
url = urlsplit(uri)
key_id = 'https://{}/actor#main-key'.format(CONFIG['ap']['host'])
headers.update({
'Accept': 'application/activity+json',
'User-Agent': 'ActivityRelay'
})
if sign_headers:
headers.update({
'(request-target)': 'get {}'.format(url.path),
'Date': datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT'),
'Host': url.netloc
})
headers['signature'] = sign_headers(headers, PRIVKEY, key_id)
headers.pop('(request-target)')
headers.pop('Host')
try:
async with aiohttp.ClientSession(trace_configs=[http_debug()]) as session:
async with session.get(uri, headers=headers) as resp:
if resp.status != 200:
return None
ACTORS[uri] = (await resp.json(encoding='utf-8', content_type=None))
return ACTORS[uri]
except Exception as e:
logging.info('Caught %r while fetching actor %r.', e, uri)
return None

277
relay/views.py Normal file
View file

@@ -0,0 +1,277 @@
from __future__ import annotations
import subprocess
import traceback
import typing
from aputils.errors import SignatureFailureError
from aputils.misc import Digest, HttpDate, Signature
from aputils.objects import Nodeinfo, Webfinger, WellKnownNodeinfo
from pathlib import Path
from . import __version__
from . import logger as logging
from .database.connection import Connection
from .misc import Message, Response, View
from .processors import run_processor
if typing.TYPE_CHECKING:
from aiohttp.web import Request
from aputils.signer import Signer
from collections.abc import Callable
from tinysql import Row
VIEWS = []
VERSION = __version__
HOME_TEMPLATE = """
<html><head>
<title>ActivityPub Relay at {host}</title>
<style>
p {{ color: #FFFFFF; font-family: monospace, arial; font-size: 100%; }}
body {{ background-color: #000000; }}
a {{ color: #26F; }}
a:visited {{ color: #46C; }}
a:hover {{ color: #8AF; }}
</style>
</head>
<body>
<p>This is an Activity Relay for fediverse instances.</p>
<p>{note}</p>
<p>
You may subscribe to this relay with the address:
<a href="https://{host}/actor">https://{host}/actor</a>
</p>
<p>
To host your own relay, you may download the code at this address:
<a href="https://git.pleroma.social/pleroma/relay">
https://git.pleroma.social/pleroma/relay
</a>
</p>
<br><p>List of {count} registered instances:<br>{targets}</p>
</body></html>
"""
if Path(__file__).parent.parent.joinpath('.git').exists():
try:
commit_label = subprocess.check_output(["git", "rev-parse", "HEAD"]).strip().decode('ascii')
VERSION = f'{__version__} {commit_label}'
except Exception:
pass
def register_route(*paths: str) -> Callable:
def wrapper(view: View) -> View:
for path in paths:
VIEWS.append([path, view])
return view
return wrapper
# pylint: disable=unused-argument
@register_route('/')
class HomeView(View):
async def get(self, request: Request, conn: Connection) -> Response:
config = conn.get_config_all()
inboxes = conn.execute('SELECT * FROM inboxes').all()
text = HOME_TEMPLATE.format(
host = self.config.domain,
note = config['note'],
count = len(inboxes),
targets = '<br>'.join(inbox['domain'] for inbox in inboxes)
)
return Response.new(text, ctype='html')
@register_route('/actor', '/inbox')
class ActorView(View):
def __init__(self, request: Request):
View.__init__(self, request)
self.signature: Signature = None
self.message: Message = None
self.actor: Message = None
self.instance: Row = None
self.signer: Signer = None
async def get(self, request: Request, conn: Connection) -> Response:
data = Message.new_actor(
host = self.config.domain,
pubkey = self.app.signer.pubkey
)
return Response.new(data, ctype='activity')
async def post(self, request: Request, conn: Connection) -> Response:
if response := await self.get_post_data():
return response
self.instance = conn.get_inbox(self.actor.shared_inbox)
config = conn.get_config_all()
## reject if the actor isn't whitelisted while the whitelist is enabled
if config['whitelist-enabled'] and not conn.get_domain_whitelist(self.actor.domain):
logging.verbose('Rejected actor for not being in the whitelist: %s', self.actor.id)
return Response.new_error(403, 'access denied', 'json')
## reject if actor is banned
if conn.get_domain_ban(self.actor.domain):
logging.verbose('Ignored request from banned actor: %s', self.actor.id)
return Response.new_error(403, 'access denied', 'json')
## reject if activity type isn't 'Follow' and the actor isn't following
if self.message.type != 'Follow' and not self.instance:
logging.verbose(
'Rejected actor for trying to post while not following: %s',
self.actor.id
)
return Response.new_error(401, 'access denied', 'json')
logging.debug('>> payload %s', self.message.to_json(4))
await run_processor(self, conn)
return Response.new(status = 202)
async def get_post_data(self) -> Response | None:
try:
self.signature = Signature.new_from_signature(self.request.headers['signature'])
except KeyError:
logging.verbose('Missing signature header')
return Response.new_error(400, 'missing signature header', 'json')
try:
self.message = await self.request.json(loads = Message.parse)
except Exception:
traceback.print_exc()
logging.verbose('Failed to parse inbox message')
return Response.new_error(400, 'failed to parse message', 'json')
if self.message is None:
logging.verbose('empty message')
return Response.new_error(400, 'missing message', 'json')
if 'actor' not in self.message:
logging.verbose('actor not in message')
return Response.new_error(400, 'no actor in message', 'json')
self.actor = await self.client.get(
self.signature.keyid,
sign_headers = True,
loads = Message.parse
)
if not self.actor:
# ld signatures aren't handled atm, so just ignore it
if self.message.type == 'Delete':
logging.verbose('Instance sent a delete which cannot be handled')
return Response.new(status=202)
logging.verbose('Failed to fetch actor: %s', self.signature.keyid)
return Response.new_error(400, 'failed to fetch actor', 'json')
try:
self.signer = self.actor.signer
except KeyError:
logging.verbose('Actor missing public key: %s', self.signature.keyid)
return Response.new_error(400, 'actor missing public key', 'json')
try:
self.validate_signature(await self.request.read())
except SignatureFailureError as e:
logging.verbose('signature validation failed for "%s": %s', self.actor.id, e)
return Response.new_error(401, str(e), 'json')
def validate_signature(self, body: bytes) -> None:
headers = {key.lower(): value for key, value in self.request.headers.items()}
headers["(request-target)"] = " ".join([self.request.method.lower(), self.request.path])
if (digest := Digest.new_from_digest(headers.get("digest"))):
if not body:
raise SignatureFailureError("Missing body for digest verification")
if not digest.validate(body):
raise SignatureFailureError("Body digest does not match")
if self.signature.algorithm_type == "hs2019":
if "(created)" not in self.signature.headers:
raise SignatureFailureError("'(created)' header not used")
current_timestamp = HttpDate.new_utc().timestamp()
if self.signature.created > current_timestamp:
raise SignatureFailureError("Creation date after current date")
if current_timestamp > self.signature.expires:
raise SignatureFailureError("Expiration date before current date")
headers["(created)"] = self.signature.created
headers["(expires)"] = self.signature.expires
# pylint: disable=protected-access
if not self.signer._validate_signature(headers, self.signature):
raise SignatureFailureError("Signature does not match")
@register_route('/.well-known/webfinger')
class WebfingerView(View):
async def get(self, request: Request, conn: Connection) -> Response:
try:
subject = request.query['resource']
except KeyError:
return Response.new_error(400, 'missing "resource" query key', 'json')
if subject != f'acct:relay@{self.config.domain}':
return Response.new_error(404, 'user not found', 'json')
data = Webfinger.new(
handle = 'relay',
domain = self.config.domain,
actor = self.config.actor
)
return Response.new(data, ctype = 'json')
@register_route('/nodeinfo/{niversion:\\d.\\d}.json', '/nodeinfo/{niversion:\\d.\\d}')
class NodeinfoView(View):
# pylint: disable=no-self-use
async def get(self, request: Request, conn: Connection, niversion: str) -> Response:
inboxes = conn.execute('SELECT * FROM inboxes').all()
data = {
'name': 'activityrelay',
'version': VERSION,
'protocols': ['activitypub'],
'open_regs': not conn.get_config('whitelist-enabled'),
'users': 1,
'metadata': {'peers': [inbox['domain'] for inbox in inboxes]}
}
if niversion == '2.1':
data['repo'] = 'https://git.pleroma.social/pleroma/relay'
return Response.new(Nodeinfo.new(**data), ctype = 'json')
@register_route('/.well-known/nodeinfo')
class WellknownNodeinfoView(View):
async def get(self, request: Request, conn: Connection) -> Response:
data = WellKnownNodeinfo.new_template(self.config.domain)
return Response.new(data, ctype = 'json')
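Adding an endpoint follows the same decorator pattern as the views above; a hypothetical health-check route, where the /healthz path and HealthView name are illustrative only, and the class would live in relay/views.py alongside the others:

    @register_route('/healthz')
    class HealthView(View):
        async def get(self, request: Request, conn: Connection) -> Response:
            # Response.new serializes dicts when ctype is 'json' or 'activity'
            return Response.new({'status': 'ok'}, ctype = 'json')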

View file

@@ -1,24 +0,0 @@
import aiohttp.web
from . import app
async def webfinger(request):
subject = request.query['resource']
if subject != 'acct:relay@{}'.format(request.host):
return aiohttp.web.json_response({'error': 'user not found'}, status=404)
actor_uri = "https://{}/actor".format(request.host)
data = {
"aliases": [actor_uri],
"links": [
{"href": actor_uri, "rel": "self", "type": "application/activity+json"},
{"href": actor_uri, "rel": "self", "type": "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\""}
],
"subject": subject
}
return aiohttp.web.json_response(data)
app.router.add_get('/.well-known/webfinger', webfinger)

10
requirements.txt Normal file
View file

@@ -0,0 +1,10 @@
aiohttp>=3.9.1
aputils@https://git.barkshark.xyz/barkshark/aputils/archive/0.1.6a.tar.gz
click>=8.1.2
gunicorn==21.1.0
hiredis==2.3.2
pyyaml>=6.0
redis==5.0.1
tinysql[postgres]@https://git.barkshark.xyz/barkshark/tinysql/archive/0.2.4.tar.gz
importlib_resources==6.1.1;python_version<'3.9'

View file

@@ -1,5 +1,6 @@
[metadata]
name = relay
version = attr: relay.__version__
description = Generic LitePub relay (works with all LitePub consumers and Mastodon)
long_description = file: README.md
long_description_content_type = text/markdown; charset=UTF-8
@@ -9,30 +10,35 @@ license_file = LICENSE
classifiers =
Environment :: Console
License :: OSI Approved :: AGPLv3 License
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: 3.8
Programming Language :: Python :: 3.9
Programming Language :: Python :: 3.10
Programming Language :: Python :: 3.11
Programming Language :: Python :: 3.12
project_urls =
Source = https://git.pleroma.social/pleroma/relay
Tracker = https://git.pleroma.social/pleroma/relay/-/issues
[options]
zip_safe = False
packages = find:
install_requires =
aiohttp>=3.5.4
async-timeout>=3.0.0
attrs>=18.1.0
chardet>=3.0.4
idna>=2.7
idna-ssl>=1.1.0; python_version < "3.7"
multidict>=4.3.1
pycryptodome>=3.9.4
PyYAML>=5.1
simplejson>=3.16.0
yarl>=1.2.6
cachetools
async_lru
python_requires = >=3.6
packages =
relay
relay.database
include_package_data = true
install_requires = file: requirements.txt
python_requires = >=3.8
[options.extras_require]
dev = file: dev-requirements.txt
[options.package_data]
relay =
data/statements.sql
[options.entry_points]
console_scripts =
activityrelay = relay.manage:main
[flake8]
select = F401