Compare commits


7 commits

Author SHA1 Message Date
pch_xyz 8d5b097ac4 Add donation link 2023-02-26 05:27:32 +00:00
pch_xyz 8c49d92aea added version info 2023-02-10 01:15:20 +00:00
pch_xyz d06f51fca6 Upload files to 'relay' (added version info) 2023-02-10 01:14:32 +00:00
pch_xyz 818a3573ae Merge pull request 'localization' (#1) from pch_xyz-patch-1 into master (Reviewed-on: #1) 2023-02-10 00:54:07 +00:00
pch_xyz 3dea5c030b localization 2023-02-10 00:51:40 +00:00
Izalia Mae 15b1324df2 Merge branch 'zen-master-patch-50595' into 'master' (Do not check instance's actor.type in case of Pleroma/Akkoma; see merge request pleroma/relay!50) 2023-01-11 03:43:59 +00:00
Dmytro Poltavchenko 006efc1ba4 Do not check instance's actor.type in case of Pleroma/Akkoma 2023-01-08 00:23:36 +00:00
19 changed files with 830 additions and 1424 deletions

View file

@ -3,9 +3,9 @@
There are a number of commands to manage your relay's database and config. You can add `--help` to
any category or command to get help on that specific option (ex. `activityrelay inbox --help`).
A config file can be specified by adding `--config [path/to/config.yaml]`.
Note: Unless specified otherwise, it is recommended to run any commands while the relay is shut down.
Note: `activityrelay` is only available via pip or pipx if `~/.local/bin` is in `$PATH`. If it
Note 2: `activityrelay` is only available via pip or pipx if `~/.local/bin` is in `$PATH`. If it
isn't, use `python3 -m relay` if installed via pip or `~/.local/bin/activityrelay` if installed
via pipx
@ -24,35 +24,26 @@ Run the setup wizard to configure your relay.
activityrelay setup
## Convert
Convert an old `relay.yaml` and `relay.jsonld` to the newer formats.
activityrelay convert [--old-config relay.yaml]
## Config
Manage the relay config.
Manage the relay config
activityrelay config
### List
List the current config key/value pairs.
List the current config key/value pairs
activityrelay config list
### Set
Set a value for a config option.
Set a value for a config option
activityrelay config set <key> <value>
note: The relay must be restarted if setting `log_level`, `workers`, `push_limit`, or `http_timeout`
## Inbox
@ -101,32 +92,6 @@ not exist anymore, use the `inbox remove` command instead.
Note: The relay must be running for this command to work.
## Request
Manage instance follow requests.
### List
List all instances asking to follow the relay.
activityrelay request list
### Approve
Allow an instance to join the relay.
activityrelay request approve <domain>
### Deny
Disallow an instance from joining the relay.
activityrelay request deny <domain>
## Whitelist
Manage the whitelisted domains.
@ -155,7 +120,7 @@ Remove a domain from the whitelist.
### Import
Add all current inboxes to the whitelist.
Add all current inboxes to the whitelist
activityrelay whitelist import
@ -167,7 +132,7 @@ Manage the instance ban list.
### List
List the currently banned instances.
List the currently banned instances
activityrelay instance list

View file

@ -2,6 +2,14 @@
## General
### DB
The path to the database. It contains the relay actor private key and all subscribed
instances. If the path is not absolute, it is relative to the working directory.
db: relay.jsonld
### Listener
The address and port the relay will listen on. If the reverse proxy (nginx, apache, caddy, etc)
@ -11,6 +19,46 @@ is running on the same host, it is recommended to change `listen` to `localhost`
port: 8080
### Note
A small blurb to describe your relay instance. This will show up on the relay's home page.
note: "Make a note about your instance here."
### Post Limit
The maximum number of messages to send out at once. For each incoming message, a message will be
sent out to every subscribed instance minus the instance which sent the message. This limit
is to prevent too many outgoing connections from being made, so adjust if necessary.
Note: If the `workers` option is set to anything above 0, this limit will be per worker.
push_limit: 512
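As an illustration of the mechanism (not the relay's exact code), a cap like this is usually enforced at the HTTP client level; the sketch below assumes aiohttp and an illustrative `open_session` helper:

```python
from aiohttp import ClientSession, ClientTimeout, TCPConnector

async def open_session(push_limit: int = 512, timeout: int = 10) -> ClientSession:
    # Cap simultaneous outgoing connections so one incoming activity fanned out
    # to hundreds of subscribed inboxes cannot open an unbounded number of sockets.
    connector = TCPConnector(limit=push_limit)
    return ClientSession(connector=connector, timeout=ClientTimeout(total=timeout))
```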
### Push Workers
The relay can be configured to use threads to push messages out. For smaller relays, this isn't
necessary, but bigger ones (>100 instances) will want to set this to the number of available cpu
threads.
workers: 0
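For context, the dispatch decision this setting controls looks roughly like the `Application.push_message` method further down in this diff; this is only a hedged sketch, and the worker `queue` attribute is an assumption:

```python
import asyncio

def push_message(app, inbox: str, message: dict) -> None:
    # workers == 0: schedule the delivery as an asyncio task on the main loop.
    # workers  > 0: hand the delivery to the next worker thread, round-robin.
    if app.config.workers <= 0:
        asyncio.ensure_future(app.client.post(inbox, message))
        return

    worker = app['workers'][app['last_worker']]
    worker.queue.put((inbox, message))  # 'queue' is assumed; not shown in this diff
    app['last_worker'] = (app['last_worker'] + 1) % len(app['workers'])
```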
### JSON GET cache limit
JSON objects (actors, nodeinfo, etc) will get cached when fetched. This will set the max number of
objects to keep in the cache.
json_cache: 1024
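The cache itself is an LRU keyed by URL (the http client diff below uses `cachetools.LRUCache`); a minimal sketch of the idea, with illustrative helper names:

```python
from cachetools import LRUCache

# Keep at most `json_cache` fetched JSON objects (actors, nodeinfo, ...) in
# memory, evicting the least recently used entry when the limit is reached.
json_cache = LRUCache(maxsize=1024)

def cached_get(url: str):
    return json_cache.get(url)

def cache_store(url: str, obj: dict) -> None:
    json_cache[url] = obj
```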
## AP
Various ActivityPub-related settings
### Host
The domain your relay will use to identify itself.
@ -18,123 +66,40 @@ The domain your relay will use to identify itself.
host: relay.example.com
## Database
### Whitelist Enabled
### Type
If set to `true`, only instances in the whitelist can follow the relay. Any subscribed instances
not in the whitelist will be removed from the inbox list on startup.
The type of SQL database to use. Options:
* sqlite (default)
* postgresql
* mysql
type: sqlite
whitelist_enabled: false
### Minimum Connections
### Whitelist
The minimum number of database connections to keep open (does nothing at the moment)
A list of domains of instances which are allowed to subscribe to your relay.
min_connections: 0
whitelist:
- bad-instance.example.com
- another-bad-instance.example.com
### Maximum Connections
### Blocked Instances
The maximum number of database connections to open (does nothing at the moment)
A list of instances which are unable to follow the relay. If a subscribed instance is added to
the block list, it will be removed from the inbox list on startup.
max_connections: 10
blocked_instances:
- bad-instance.example.com
- another-bad-instance.example.com
## Sqlite
### Blocked Software
### Database
A list of ActivityPub software which cannot follow your relay. This list is empty by default, but
setting this to the below list will block all other relays and prevent relay chains
The path to the database file.
database: relay.sqlite3
If the path is relative, it will be relative to the directory the config file is located. For
instance, if the config is located at `/home/izalia/.config/activityrelay/config.yaml`, the
following:
relay.sqlite3
will resolve to:
/home/izalia/.config/activityrelay/relay.sqlite3
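A minimal sketch of that resolution rule (the helper name is illustrative):

```python
from pathlib import Path

def resolve_db_path(config_path: str, database: str) -> Path:
    # Relative paths are resolved next to the config file; absolute paths are
    # used as-is, matching the example above.
    db = Path(database).expanduser()
    if db.is_absolute():
        return db
    return (Path(config_path).parent / db).resolve()

# resolve_db_path('/home/izalia/.config/activityrelay/config.yaml', 'relay.sqlite3')
# -> /home/izalia/.config/activityrelay/relay.sqlite3
```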
## PostgreSQL
### Database
Name of the database to use.
database: activityrelay
### Hostname
The address to use when connecting to the database. A value of `null` will use the default of
`/var/run/postgresql`
hostname: null
### Port
The port to use when connecting to the database. A value of `null` will use the default of `5432`
port: null
### Username
The user to use when connecting to the database. A value of `null` will use the current system
username.
username: null
### Password
The password for the database user.
password: null
## MySQL
### Database
Name of the database to use.
database: activityrelay
### Hostname
The address to use when connecting to the database. A value of `null` will use the default of
`/var/run/mysqld/mysqld.sock`
### Port
The port to use when connecting to the database. A value of `null` will use the default of `3306`
port: null
### Username
The user to use when connecting to the database. A value of `null` will use the current system
username.
username: null
### Password
The password for the database user.
password: null
blocked_software:
- activityrelay
- aoderelay
- social.seattle.wa.us-relay
- unciarelay

View file

@ -5,19 +5,6 @@ proxy, and setup the relay to run via a supervisor. Example configs for caddy, n
in `installation/`
## Pre-built Executables
All-in-one executables can be downloaded from `https://git.pleroma.social/pleroma/relay/-/releases`
under the `Other` section of `Assets`. They don't require any extra setup and can be placed
anywhere. Run the setup wizard
./activityrelay setup
and start it up when done
./activityrelay run
## Pipx
Pipx uses pip and a custom venv implementation to automatically install modules into a Python

View file

@ -14,15 +14,7 @@ a = Analysis(
'aputils.errors',
'aputils.misc',
'aputils.objects',
'aputils.signer',
'tinysql.base',
'tinysql.database',
'tinysql.error',
'tinysql.mysql',
'tinysql.postgresql',
'tinysql.sqlite',
'tinysql.statement'
'aputils.signer'
],
hookspath=[],
hooksconfig={},

View file

@ -1,32 +1,43 @@
general:
# Address the relay will listen on. Set to "0.0.0.0" for any address
# this is the path that the object graph will get dumped to (in JSON-LD format),
# you probably shouldn't change it, but you can if you want.
db: relay.jsonld
# Listener
listen: 0.0.0.0
# TCP port the relay will listen on
port: 3621
# Domain the relay will advertise itself as
host: relay.example.com
port: 8080
database:
# SQL backend to use. Available options: "sqlite", "postgresql", "mysql".
type: sqlite
# Minimum number of database connections to keep open
min_connections: 0
# Maximum number of database connections to open
max_connections: 10
# Note
note: "Make a note about your instance here."
sqlite:
database: relay.sqlite3
# Number of worker threads to start. If 0, use asyncio futures instead of threads.
workers: 0
postgres:
database: activityrelay
hostname: null
port: null
username: null
password: null
# Maximum number of inbox posts to do at once
# If workers is set to 1 or above, this is the max for each worker
push_limit: 512
mysql:
database: activityrelay
hostname: null
port: null
username: null
password: null
# The amount of json objects to cache from GET requests
json_cache: 1024
ap:
# This is used for generating activitypub messages, as well as instructions for
# linking AP identities. It should be an SSL-enabled domain reachable by https.
host: 'relay.example.com'
blocked_instances:
- 'bad-instance.example.com'
- 'another-bad-instance.example.com'
whitelist_enabled: false
whitelist:
- 'good-instance.example.com'
- 'another.good-instance.example.com'
# uncomment the lines below to prevent certain activitypub software from posting
# to the relay (all known relays by default). this uses the software name in nodeinfo
#blocked_software:
#- 'activityrelay'
#- 'aoderelay'
#- 'social.seattle.wa.us-relay'
#- 'unciarelay'
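For reference, this file is plain YAML; a hedged sketch of how it can be read, mirroring the `RelayConfig.load` shown later in this diff (top-level keys used directly, the `ap` section flattened). The helper name and default path are assumptions:

```python
import yaml

def load_relay_config(path: str = 'relay.yaml') -> dict:
    with open(path) as fd:
        data = yaml.load(fd, Loader=yaml.SafeLoader) or {}

    # Top-level keys (db, listen, port, note, workers, ...) are used directly;
    # keys under 'ap' (host, whitelist, blocked_*) are flattened alongside them.
    config = {key: value for key, value in data.items() if key != 'ap'}
    config.update(data.get('ap') or {})
    return config
```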

View file

@ -1,5 +1,4 @@
import asyncio
import inspect
import logging
import os
import queue
@ -8,41 +7,49 @@ import threading
import traceback
from aiohttp import web
from aputils import Signer
from datetime import datetime, timedelta
from .config import Config
from .database import Database
from .config import RelayConfig
from .database import RelayDatabase
from .http_client import HttpClient
from .logger import set_level
from .misc import DotDict, check_open_port, set_app
from .views import routes
class Application(web.Application):
def __init__(self, cfgpath):
web.Application.__init__(self,
middlewares = [
server_middleware
]
)
set_app(self)
self['config'] = Config(cfgpath)
self['database'] = Database(**self.config.dbconfig)
self['client'] = HttpClient()
web.Application.__init__(self)
self['starttime'] = None
self['signer'] = None
self['running'] = False
self['config'] = RelayConfig(cfgpath)
if not self['config'].load():
self['config'].save()
if self.config.is_docker:
self.config.update({
'db': '/data/relay.jsonld',
'listen': '0.0.0.0',
'port': 8080
})
self['workers'] = []
self['last_worker'] = 0
self.database.create()
set_app(self)
with self.database.session as s:
set_level(s.get_config('log_level'))
self['database'] = RelayDatabase(self['config'])
self['database'].load()
self['client'] = HttpClient(
database = self.database,
limit = self.config.push_limit,
timeout = self.config.timeout,
cache_size = self.config.json_cache
)
self.set_signal_handler()
@property
@ -60,32 +67,18 @@ class Application(web.Application):
return self['database']
@property
def signer(self):
if not self['signer']:
with self.database.session as s:
privkey = s.get_config('privkey')
if not privkey:
self['signer'] = Signer.new(self.config.keyid)
s.put_config('privkey', self['signer'].export())
else:
self['signer'] = Signer(privkey, self.config.keyid)
return self['signer']
@property
def uptime(self):
if not self['starttime']:
return timedelta(seconds=0)
return datetime.now() - self['starttime']
uptime = datetime.now() - self['starttime']
return timedelta(seconds=uptime.seconds)
def push_message(self, inbox, message):
if len(self['workers']) <= 0:
if self.config.workers <= 0:
return asyncio.ensure_future(self.client.post(inbox, message))
worker = self['workers'][self['last_worker']]
@ -97,10 +90,10 @@ class Application(web.Application):
self['last_worker'] = 0
def set_signal_handler(self, enable=True):
def set_signal_handler(self):
for sig in {'SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGTERM'}:
try:
signal.signal(getattr(signal, sig), self.stop if enable else signal.SIG_DFL)
signal.signal(getattr(signal, sig), self.stop)
# some signals don't exist in windows, so skip them
except AttributeError:
@ -116,26 +109,17 @@ class Application(web.Application):
logging.info(f'Starting webserver at {self.config.host} ({self.config.listen}:{self.config.port})')
asyncio.run(self.handle_run())
self.database.disconnect()
def stop(self, *_):
self['running'] = False
def setup(self):
self.client.setup()
async def handle_run(self):
self.set_signal_handler(True)
self['running'] = True
with self.database.session as s:
workers = s.get_config('workers')
if workers > 0:
for i in range(workers):
if self.config.workers > 0:
for i in range(self.config.workers):
worker = PushWorker(self)
worker.start()
@ -161,7 +145,6 @@ class Application(web.Application):
self['starttime'] = None
self['running'] = False
self['workers'].clear()
self.set_signal_handler(False)
class PushWorker(threading.Thread):
@ -172,8 +155,12 @@ class PushWorker(threading.Thread):
def run(self):
self.client = HttpClient()
self.client.setup()
self.client = HttpClient(
database = self.app.database,
limit = self.app.config.push_limit,
timeout = self.app.config.timeout,
cache_size = self.app.config.json_cache
)
asyncio.run(self.handle_queue())
@ -196,24 +183,6 @@ class PushWorker(threading.Thread):
await self.client.close()
@web.middleware
async def server_middleware(request, handler):
if len(inspect.signature(handler).parameters) == 1:
response = await handler(request)
else:
with request.database.session as s:
response = await handler(request, s)
## make sure there's some sort of response
if response == None:
logging.error(f'No response for handler: {handler}')
response = Response.new_error(500, 'No response')
response.headers['Server'] = 'ActivityRelay'
return response
## Can't sub-class web.Request, so let's just add some properties
def request_actor(self):
try: return self['actor']

View file

@ -1,135 +1,58 @@
import json
import os
import sys
import yaml
from functools import cached_property
from pathlib import Path
from platform import system
from urllib.parse import urlparse
from .misc import AppBase, DotDict
from .misc import DotDict, boolean
DEFAULTS = {
'general_listen': '0.0.0.0',
'general_port': 8080,
'general_host': 'relay.example.com',
'database_type': 'sqlite',
'database_min_connections': 0,
'database_max_connections': 10,
'sqlite_database': Path('relay.sqlite3'),
'postgres_database': 'activityrelay',
'postgres_hostname': None,
'postgres_port': None,
'postgres_username': None,
'postgres_password': None,
'mysql_database': 'activityrelay',
'mysql_hostname': None,
'mysql_port': None,
'mysql_username': None,
'mysql_password': None
}
RELAY_SOFTWARE = [
'activityrelay', # https://git.pleroma.social/pleroma/relay
'aoderelay', # https://git.asonix.dog/asonix/relay
'feditools-relay' # https://git.ptzo.gdn/feditools/relay
]
CATEGORY_NAMES = [
'general',
'database',
'sqlite',
'postgres',
'mysql'
APKEYS = [
'host',
'whitelist_enabled',
'blocked_software',
'blocked_instances',
'whitelist'
]
def get_config_dir():
cwd = Path.cwd().joinpath('config.yaml')
plat = system()
if cwd.exists():
return cwd
elif plat == 'Linux':
cfgpath = Path('~/.config/activityrelay/config.yaml').expanduser()
if cfgpath.exists():
return cfgpath
etcpath = Path('/etc/activityrelay/config.yaml')
if etcpath.exists() and os.getuid() == etcpath.stat().st_uid:
return etcpath
elif plat == 'Windows':
cfgpath = Path('~/AppData/Roaming/activityrelay/config.yaml').expanduser()
if cfgpath.exists():
return cfgpath
elif plat == 'Darwin':
cfgpath = Path('~/Library/Application Support/activityrelay/config.yaml')
return cwd
class Config(AppBase, dict):
def __init__(self, path=None):
DotDict.__init__(self, DEFAULTS)
class RelayConfig(DotDict):
def __init__(self, path):
DotDict.__init__(self, {})
if self.is_docker:
path = Path('/data/config.yaml')
path = '/data/config.yaml'
elif not path:
path = get_config_dir()
else:
path = Path(path).expanduser()
self._path = path
self.load()
self._path = Path(path).expanduser()
self.reset()
def __setitem__(self, key, value):
if key in {'database', 'hostname', 'port', 'username', 'password'}:
key = f'{self.dbtype}_{key}'
if key in ['blocked_instances', 'blocked_software', 'whitelist']:
assert isinstance(value, (list, set, tuple))
if (self.is_docker and key in {'general_host', 'general_port'}) or value == '__DEFAULT__':
value = DEFAULTS[key]
elif key in {'general_port', 'database_min_connections', 'database_max_connections'}:
elif key in ['port', 'workers', 'json_cache', 'timeout']:
if not isinstance(value, int):
value = int(value)
elif key == 'sqlite_database':
if not isinstance(value, Path):
value = Path(value)
elif key == 'whitelist_enabled':
if not isinstance(value, bool):
value = boolean(value)
dict.__setitem__(self, key, value)
super().__setitem__(key, value)
@property
def dbconfig(self):
config = {
'type': self['database_type'],
'min_conn': self['database_min_connections'],
'max_conn': self['database_max_connections']
}
if self.dbtype == 'sqlite':
if not self['sqlite_database'].is_absolute():
config['database'] = self.path.with_name(str(self['sqlite_database'])).resolve()
else:
config['database'] = self['sqlite_database'].resolve()
else:
for key, value in self.items():
cat, name = key.split('_', 1)
if self.dbtype == cat:
config[name] = value
return config
@cached_property
def is_docker(self):
return bool(os.getenv('DOCKER_RUNNING'))
def db(self):
return Path(self['db']).expanduser().resolve()
@property
@ -137,29 +60,6 @@ class Config(AppBase, dict):
return self._path
## General config
@property
def host(self):
return self['general_host']
@property
def listen(self):
return self['general_listen']
@property
def port(self):
return self['general_port']
## Database config
@property
def dbtype(self):
return self['database_type']
## AP URLs
@property
def actor(self):
return f'https://{self.host}/actor'
@ -175,12 +75,117 @@ class Config(AppBase, dict):
return f'{self.actor}#main-key'
@cached_property
def is_docker(self):
return bool(os.environ.get('DOCKER_RUNNING'))
def reset(self):
self.clear()
self.update(DEFAULTS)
self.update({
'db': str(self._path.parent.joinpath(f'{self._path.stem}.jsonld')),
'listen': '0.0.0.0',
'port': 8080,
'note': 'Make a note about your instance here.',
'push_limit': 512,
'json_cache': 1024,
'timeout': 10,
'workers': 0,
'host': 'relay.example.com',
'whitelist_enabled': False,
'blocked_software': [],
'blocked_instances': [],
'whitelist': []
})
def ban_instance(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
if self.is_banned(instance):
return False
self.blocked_instances.append(instance)
return True
def unban_instance(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
try:
self.blocked_instances.remove(instance)
return True
except:
return False
def ban_software(self, software):
if self.is_banned_software(software):
return False
self.blocked_software.append(software)
return True
def unban_software(self, software):
try:
self.blocked_software.remove(software)
return True
except:
return False
def add_whitelist(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
if self.is_whitelisted(instance):
return False
self.whitelist.append(instance)
return True
def del_whitelist(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
try:
self.whitelist.remove(instance)
return True
except:
return False
def is_banned(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
return instance in self.blocked_instances
def is_banned_software(self, software):
if not software:
return False
return software.lower() in self.blocked_software
def is_whitelisted(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
return instance in self.whitelist
def load(self):
self.reset()
options = {}
try:
@ -196,21 +201,45 @@ class Config(AppBase, dict):
except FileNotFoundError:
return False
for key, value in DEFAULTS.items():
cat, name = key.split('_', 1)
self[key] = config.get(cat, {}).get(name, DEFAULTS[key])
if not config:
return False
for key, value in config.items():
if key in ['ap']:
for k, v in value.items():
if k not in self:
continue
self[k] = v
continue
elif key not in self:
continue
self[key] = value
if self.host.endswith('example.com'):
return False
return True
def save(self):
config = {key: {} for key in CATEGORY_NAMES}
config = {
# just turning config.db into a string is good enough for now
'db': str(self.db),
'listen': self.listen,
'port': self.port,
'note': self.note,
'push_limit': self.push_limit,
'workers': self.workers,
'json_cache': self.json_cache,
'timeout': self.timeout,
'ap': {key: self[key] for key in APKEYS}
}
for key, value in self.items():
cat, name = key.split('_', 1)
if isinstance(value, Path):
value = str(value)
config[cat][name] = value
with open(self.path, 'w') as fd:
with open(self._path, 'w') as fd:
yaml.dump(config, fd, sort_keys=False)
return config

relay/database.py (new file, 197 lines)
View file

@ -0,0 +1,197 @@
import aputils
import asyncio
import json
import logging
import traceback
from urllib.parse import urlparse
class RelayDatabase(dict):
def __init__(self, config):
dict.__init__(self, {
'relay-list': {},
'private-key': None,
'follow-requests': {},
'version': 1
})
self.config = config
self.signer = None
@property
def hostnames(self):
return tuple(self['relay-list'].keys())
@property
def inboxes(self):
return tuple(data['inbox'] for data in self['relay-list'].values())
def load(self):
new_db = True
try:
with self.config.db.open() as fd:
data = json.load(fd)
self['version'] = data.get('version', None)
self['private-key'] = data.get('private-key')
if self['version'] == None:
self['version'] = 1
if 'actorKeys' in data:
self['private-key'] = data['actorKeys']['privateKey']
for item in data.get('relay-list', []):
domain = urlparse(item).hostname
self['relay-list'][domain] = {
'domain': domain,
'inbox': item,
'followid': None
}
else:
self['relay-list'] = data.get('relay-list', {})
for domain, instance in self['relay-list'].items():
if self.config.is_banned(domain) or (self.config.whitelist_enabled and not self.config.is_whitelisted(domain)):
self.del_inbox(domain)
continue
if not instance.get('domain'):
instance['domain'] = domain
new_db = False
except FileNotFoundError:
pass
except json.decoder.JSONDecodeError as e:
if self.config.db.stat().st_size > 0:
raise e from None
if not self['private-key']:
logging.info("No actor keys present, generating 4096-bit RSA keypair.")
self.signer = aputils.Signer.new(self.config.keyid, size=4096)
self['private-key'] = self.signer.export()
else:
self.signer = aputils.Signer(self['private-key'], self.config.keyid)
self.save()
return not new_db
def save(self):
with self.config.db.open('w') as fd:
json.dump(self, fd, indent=4)
def get_inbox(self, domain, fail=False):
if domain.startswith('http'):
domain = urlparse(domain).hostname
inbox = self['relay-list'].get(domain)
if inbox:
return inbox
if fail:
raise KeyError(domain)
def add_inbox(self, inbox, followid=None, software=None):
assert inbox.startswith('https'), 'Inbox must be a url'
domain = urlparse(inbox).hostname
instance = self.get_inbox(domain)
if instance:
if followid:
instance['followid'] = followid
if software:
instance['software'] = software
return instance
self['relay-list'][domain] = {
'domain': domain,
'inbox': inbox,
'followid': followid,
'software': software
}
logging.verbose(f'Added inbox to database: {inbox}')
return self['relay-list'][domain]
def del_inbox(self, domain, followid=None, fail=False):
data = self.get_inbox(domain, fail=False)
if not data:
if fail:
raise KeyError(domain)
return False
if not data['followid'] or not followid or data['followid'] == followid:
del self['relay-list'][data['domain']]
logging.verbose(f'Removed inbox from database: {data["inbox"]}')
return True
if fail:
raise ValueError('Follow IDs do not match')
logging.debug(f'Follow ID does not match: db = {data["followid"]}, object = {followid}')
return False
def get_request(self, domain, fail=True):
if domain.startswith('http'):
domain = urlparse(domain).hostname
try:
return self['follow-requests'][domain]
except KeyError as e:
if fail:
raise e
def add_request(self, actor, inbox, followid):
domain = urlparse(inbox).hostname
try:
request = self.get_request(domain)
request['followid'] = followid
except KeyError:
pass
self['follow-requests'][domain] = {
'actor': actor,
'inbox': inbox,
'followid': followid
}
def del_request(self, domain):
if domain.startswith('http'):
domain = urlparse(domain).hostname
del self['follow-requests'][domain]
def distill_inboxes(self, message):
src_domains = {
message.domain,
urlparse(message.objectid).netloc
}
for domain, instance in self['relay-list'].items():
if domain not in src_domains:
yield instance['inbox']
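Illustrative usage of the generator above when relaying an incoming activity (the surrounding `app`, `database`, and `message` objects are assumed):

```python
# Deliver to every subscribed inbox except the instance(s) the activity
# originated from, which is exactly what distill_inboxes() filters out.
for inbox in database.distill_inboxes(message):
    app.push_message(inbox, message)
```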

View file

@ -1,17 +0,0 @@
import tinysql
from .base import DEFAULT_CONFIG, RELAY_SOFTWARE, TABLES
from .connection import Connection
from .rows import ROWS
class Database(tinysql.Database):
def __init__(self, **config):
tinysql.Database.__init__(self, **config,
connection_class = Connection,
row_classes = ROWS
)
def create(self):
self.create_database(TABLES)

View file

@ -1,67 +0,0 @@
from tinysql import Column, Table
DEFAULT_CONFIG = {
'description': ('str', 'Make a note about your relay here'),
'http_timeout': ('int', 10),
'json_cache': ('int', 1024),
'log_level': ('str', 'INFO'),
'name': ('str', 'ActivityRelay'),
'privkey': ('str', ''),
'push_limit': ('int', 512),
'require_approval': ('bool', False),
'version': ('int', 20221211),
'whitelist': ('bool', False),
'workers': ('int', 8)
}
RELAY_SOFTWARE = [
'activity-relay', # https://github.com/yukimochi/Activity-Relay
'activityrelay', # https://git.pleroma.social/pleroma/relay
'aoderelay', # https://git.asonix.dog/asonix/relay
'feditools-relay' # https://git.ptzo.gdn/feditools/relay
]
TABLES = [
Table('config',
Column('key', 'text', unique=True, nullable=False, primary_key=True),
Column('value', 'text')
),
Table('instances',
Column('id', 'serial'),
Column('domain', 'text', unique=True, nullable=False),
Column('actor', 'text'),
Column('inbox', 'text', nullable=False),
Column('followid', 'text'),
Column('software', 'text'),
Column('note', 'text'),
Column('joined', 'datetime', nullable=False),
Column('updated', 'datetime')
),
Table('whitelist',
Column('id', 'serial'),
Column('domain', 'text', unique=True),
Column('created', 'datetime', nullable=False)
),
Table('bans',
Column('id', 'serial'),
Column('name', 'text', unique=True),
Column('note', 'text'),
Column('type', 'text', nullable=False),
Column('created', 'datetime', nullable=False)
),
Table('users',
Column('id', 'serial'),
Column('handle', 'text', unique=True, nullable=False),
Column('domain', 'text', nullable=False),
Column('api_token', 'text'),
Column('created', 'datetime', nullable=False),
Column('updated', 'datetime')
),
Table('tokens',
Column('id', 'text', unique=True, nullable=False, primary_key=True),
Column('userid', 'integer', nullable=False),
Column('created', 'datetime', nullable=False),
Column('updated', 'datetime')
)
]

View file

@ -1,239 +0,0 @@
import tinysql
from datetime import datetime
from urllib.parse import urlparse
from .base import DEFAULT_CONFIG
from ..misc import DotDict
class Connection(tinysql.ConnectionMixin):
## Misc methods
def accept_request(self, domain):
row = self.get_request(domain)
if not row:
raise KeyError(domain)
data = {'joined': datetime.now()}
self.update('instances', data, id=row.id)
def distill_inboxes(self, message):
src_domains = {
message.domain,
urlparse(message.objectid).netloc
}
for instance in self.get_instances():
if instance.domain not in src_domains:
yield instance.inbox
## Delete methods
def delete_ban(self, type, name):
row = self.get_ban(type, name)
if not row:
raise KeyError(name)
self.delete('bans', id=row.id)
def delete_instance(self, domain):
row = self.get_instance(domain)
if not row:
raise KeyError(domain)
self.delete('instances', id=row.id)
def delete_whitelist(self, domain):
row = self.get_whitelist_domain(domain)
if not row:
raise KeyError(domain)
self.delete('whitelist', id=row.id)
## Get methods
def get_ban(self, type, name):
if type not in {'software', 'domain'}:
raise ValueError('Ban type must be "software" or "domain"')
return self.select('bans', name=name, type=type).one()
def get_bans(self, type):
if type not in {'software', 'domain'}:
raise ValueError('Ban type must be "software" or "domain"')
return self.select('bans', type=type).all()
def get_config(self, key):
if key not in DEFAULT_CONFIG:
raise KeyError(key)
row = self.select('config', key=key).one()
if not row:
return DEFAULT_CONFIG[key][1]
return row.value
def get_config_all(self):
rows = self.select('config').all()
config = DotDict({row.key: row.value for row in rows})
for key, data in DEFAULT_CONFIG.items():
if key not in config:
config[key] = data[1]
return config
def get_hostnames(self):
return tuple(row.domain for row in self.get_instances())
def get_instance(self, data):
if data.startswith('http') and '#' in data:
data = data.split('#', 1)[0]
query = 'SELECT * FROM instances WHERE domain = :data OR actor = :data OR inbox = :data'
row = self.execute(query, dict(data=data), table='instances').one()
return row if row and row.joined else None
def get_instances(self):
query = 'SELECT * FROM instances WHERE joined IS NOT NULL'
query += ' ORDER BY domain ASC'
return self.execute(query, table='instances').all()
def get_request(self, domain):
for instance in self.get_requests():
if instance.domain == domain:
return instance
raise KeyError(domain)
def get_requests(self):
query = 'SELECT * FROM instances WHERE joined IS NULL ORDER BY domain ASC'
return self.execute(query, table='instances').all()
def get_whitelist(self):
return self.select('whitelist').all()
def get_whitelist_domain(self, domain):
return self.select('whitelist', domain=domain).one()
## Put methods
def put_ban(self, type, name, note=None):
if type not in {'software', 'domain'}:
raise ValueError('Ban type must be "software" or "domain"')
row = self.select('bans', name=name, type=type).one()
if row:
if note == None:
raise KeyError(name)
data = {'note': note}
self.update('bans', data, id=row.id)
return
self.insert('bans', {
'name': name,
'type': type,
'note': note,
'created': datetime.now()
})
def put_config(self, key, value='__DEFAULT__'):
if key not in DEFAULT_CONFIG:
raise KeyError(key)
if value == '__DEFAULT__':
value = DEFAULT_CONFIG[key][1]
elif key == 'log_level' and not getattr(logging, value.upper(), False):
raise KeyError(value)
row = self.select('config', key=key).one()
if row:
self.update('config', {'value': value}, key=key)
return
self.insert('config', {
'key': key,
'value': value
})
def put_instance(self, domain, actor=None, inbox=None, followid=None, software=None, actor_data=None, note=None, accept=True):
new_data = {
'actor': actor,
'inbox': inbox,
'followid': followid,
'software': software,
'note': note
}
if actor_data:
new_data['actor_data'] = dict(actor_data)
new_data = {key: value for key, value in new_data.items() if value != None}
instance = self.get_instance(domain)
if instance:
if not new_data:
raise KeyError(domain)
instance.update(new_data)
self.update('instances', new_data, id=instance.id)
return instance
if not inbox:
raise ValueError('Inbox must be included in instance data')
if accept:
new_data['joined'] = datetime.now()
new_data['domain'] = domain
self.insert('instances', new_data)
return self.get_instance(domain)
def put_instance_actor(self, actor, nodeinfo=None, accept=True):
data = {
'domain': actor.domain,
'actor': actor.id,
'inbox': actor.shared_inbox,
'actor_data': actor,
'accept': accept,
'software': nodeinfo.sw_name if nodeinfo else None
}
return self.put_instance(**data)
def put_whitelist(self, domain):
if self.get_whitelist_domain(domain):
raise KeyError(domain)
self.insert('whitelist', {
'domain': domain,
'created': datetime.now()
})

View file

@ -1,35 +0,0 @@
import json
from tinysql import Row
from .base import DEFAULT_CONFIG
from ..misc import DotDict, boolean
ROWS = []
def register(cls):
ROWS.append(cls)
return cls
@register
class ConfigRow(Row):
__table__ = 'config'
@property
def value(self):
type = DEFAULT_CONFIG[self.key][0]
if type == 'int':
return int(self['value'])
elif type == 'bool':
return boolean(self['value'])
elif type == 'list':
return json.loads(self['value'])
elif type == 'json':
return DotDict.parse(self['value'])
return self['value']

View file

@ -8,14 +8,11 @@ from aputils import Nodeinfo, WellKnownNodeinfo
from datetime import datetime
from cachetools import LRUCache
from json.decoder import JSONDecodeError
from urllib.error import HTTPError
from urllib.parse import urlparse
from urllib.request import Request, urlopen
from . import __version__
from .misc import (
MIMETYPES,
AppBase,
DotDict,
Message
)
@ -32,8 +29,9 @@ class Cache(LRUCache):
self.__maxsize = int(value)
class HttpClient(AppBase):
def __init__(self, limit=100, timeout=10, cache_size=1024):
class HttpClient:
def __init__(self, database, limit=100, timeout=10, cache_size=1024):
self.database = database
self.cache = Cache(cache_size)
self.cfg = {'limit': limit, 'timeout': timeout}
self._conn = None
@ -59,16 +57,8 @@ class HttpClient(AppBase):
return self.cfg['timeout']
def setup(self):
with self.database.session as s:
config = s.get_config_all()
self.client.cfg['limit'] = config.push_limit
self.client.cfg['timeout'] = config.http_timeout
self.client.cache.set_maxsize(config.json_cache)
async def open(self):
if self._session and self._session._loop.is_running():
if self._session:
return
self._conn = TCPConnector(
@ -107,7 +97,7 @@ class HttpClient(AppBase):
headers = {}
if sign_headers:
headers.update(self.signer.sign_headers('GET', url, algorithm='original'))
headers.update(self.database.signer.sign_headers('GET', url, algorithm='original'))
try:
logging.verbose(f'Fetching resource: {url}')
@ -154,35 +144,37 @@ class HttpClient(AppBase):
traceback.print_exc()
async def post(self, inbox, message):
async def post(self, url, message):
await self.open()
with self.database.session as s:
instance = s.get_instance(inbox)
instance = self.database.get_inbox(url)
## Using the old algo by default is probably a better idea right now
if instance and instance['software'] in {'mastodon'}:
if instance and instance.get('software') in {'mastodon'}:
algorithm = 'hs2019'
else:
algorithm = 'original'
headers = {'Content-Type': 'application/activity+json'}
headers.update(self.signer.sign_headers('POST', inbox, message, algorithm=algorithm))
headers.update(self.database.signer.sign_headers('POST', url, message, algorithm=algorithm))
try:
logging.verbose(f'Sending "{message.type}" to {inbox}')
logging.verbose(f'Sending "{message.type}" to {url}')
async with self._session.post(inbox, headers=headers, data=message.to_json()) as resp:
async with self._session.post(url, headers=headers, data=message.to_json()) as resp:
## Not expecting a response, so just return
if resp.status in {200, 202}:
return logging.verbose(f'Successfully sent "{message.type}" to {inbox}')
return logging.verbose(f'Successfully sent "{message.type}" to {url}')
logging.verbose(f'Received error when pushing to {inbox}: {resp.status}')
logging.verbose(f'Received error when pushing to {url}: {resp.status}')
return logging.verbose(await resp.read()) # change this to debug
except (ClientConnectorError, ServerTimeoutError):
logging.verbose(f'Failed to connect to {inbox}')
except ClientSSLError:
logging.warning(f'SSL error when pushing to {urlparse(url).netloc}')
except (AsyncTimeoutError, ClientConnectionError):
logging.warning(f'Failed to connect to {urlparse(url).netloc} for message push')
## prevent workers from being brought down
except Exception as e:
@ -215,18 +207,16 @@ class HttpClient(AppBase):
return await self.get(nodeinfo_url, loads=Nodeinfo.new_from_json) or False
## http client methods can't be called directly from manage.py,
## so here's some wrapper functions
async def get(*args, **kwargs):
async with HttpClient() as client:
async def get(database, *args, **kwargs):
async with HttpClient(database) as client:
return await client.get(*args, **kwargs)
async def post(*args, **kwargs):
async with HttpClient() as client:
async def post(database, *args, **kwargs):
async with HttpClient(database) as client:
return await client.post(*args, **kwargs)
async def fetch_nodeinfo(*args, **kwargs):
async with HttpClient() as client:
async def fetch_nodeinfo(database, *args, **kwargs):
async with HttpClient(database) as client:
return await client.fetch_nodeinfo(*args, **kwargs)

View file

@ -4,16 +4,6 @@ import os
from pathlib import Path
LEVELS = {
'critical': logging.CRITICAL,
'error': logging.ERROR,
'warning': logging.WARNING,
'info': logging.INFO,
'verbose': 15,
'debug': logging.DEBUG
}
## Add the verbose logging level
def verbose(message, *args, **kwargs):
if not logging.root.isEnabledFor(logging.VERBOSE):
@ -25,6 +15,10 @@ setattr(logging, 'verbose', verbose)
setattr(logging, 'VERBOSE', 15)
logging.addLevelName(15, 'VERBOSE')
## Get log level and file from environment if possible
env_log_level = os.environ.get('LOG_LEVEL', 'INFO').upper()
try:
env_log_file = Path(os.environ.get('LOG_FILE')).expanduser().resolve()
@ -32,6 +26,14 @@ except TypeError:
env_log_file = None
## Make sure the level from the environment is valid
try:
log_level = getattr(logging, env_log_level)
except AttributeError:
log_level = logging.INFO
## Set logging config
handlers = [logging.StreamHandler()]
@ -39,11 +41,7 @@ if env_log_file:
handlers.append(logging.FileHandler(env_log_file))
logging.basicConfig(
level = logging.INFO,
level = log_level,
format = "[%(asctime)s] %(levelname)s: %(message)s",
handlers = handlers
)
def set_level(level):
logging.getLogger().setLevel(LEVELS[level.lower()])
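A quick usage sketch of the custom level registered above (the import path assumes the installed `relay` package):

```python
import logging
from relay import logger  # importing the module registers the VERBOSE level

logger.set_level('verbose')                  # any key from LEVELS works here
logging.verbose('More detail than INFO, less noise than DEBUG')
```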

View file

@ -1,39 +1,29 @@
import Crypto
import asyncio
import click
import json
import logging
import platform
import yaml
from datetime import datetime
from urllib.parse import urlparse
from . import __version__
from . import misc, __version__
from . import http_client as http
from .application import Application
from .database import DEFAULT_CONFIG, RELAY_SOFTWARE
from .http_client import get, post, fetch_nodeinfo
from .misc import Message, boolean, check_open_port
from .config import RELAY_SOFTWARE
app = None
CONFIG_IGNORE = {
'privkey',
'version'
}
CONFIG_IGNORE = {'blocked_software', 'blocked_instances', 'whitelist'}
@click.group('cli', context_settings={'show_default': True}, invoke_without_command=True)
@click.option('--config', '-c', help='path to the relay\'s config')
@click.option('--config', '-c', default='relay.yaml', help='path to the relay\'s config')
@click.version_option(version=__version__, prog_name='ActivityRelay')
@click.pass_context
def cli(ctx, config):
global app
app = Application(config)
if ctx.invoked_subcommand != 'convert':
app.setup()
if not ctx.invoked_subcommand:
if app.config.host.endswith('example.com'):
cli_setup.callback()
@ -42,76 +32,12 @@ def cli(ctx, config):
cli_run.callback()
@cli.command('convert')
@click.option('--old-config', '-o', help='path to the old relay config')
def cli_convert(old_config):
'Convert an old relay.yaml and relay.jsonld to the new formats'
with open(old_config or 'relay.yaml') as fd:
config = yaml.load(fd.read(), Loader=yaml.SafeLoader)
ap = config.get('ap', {})
with open(config.get('db', 'relay.jsonld')) as fd:
db = json.load(fd)
app.config['general_host'] = ap.get('host', '__DEFAULT__')
app.config['general_listen'] = config.get('listen', '__DEFAULT__')
app.config['general_port'] = config.get('port', '__DEFAULT__')
with app.database.session as s:
s.put_config('description', config.get('note', '__DEFAULT__'))
s.put_config('push_limit', config.get('push_limit', '__DEFAULT__'))
s.put_config('json_cache', config.get('json_cache', '__DEFAULT__'))
s.put_config('workers', config.get('workers', '__DEFAULT__'))
s.put_config('http_timeout', config.get('timeout', '__DEFAULT__'))
s.put_config('privkey', db.get('private-key'))
for name in ap.get('blocked_software', []):
try: s.put_ban('software', name)
except KeyError: print(f'Already banned software: {name}')
for name in ap.get('blocked_instances', []):
try: s.put_ban('domain', name)
except KeyError: print(f'Already banned instance: {name}')
for name in ap.get('whitelist', []):
try: s.put_whitelist(name)
except KeyError: print(f'Already whitelisted domain: {name}')
for instance in db.get('relay-list', {}).values():
domain = instance['domain']
software = instance.get('software')
actor = None
if software == 'mastodon':
actor = f'https://{domain}/actor'
elif software in {'pleroma', 'akkoma'}:
actor = f'https://{domain}/relay'
s.put_instance(
domain = domain,
inbox = instance.get('inbox'),
software = software,
actor = actor,
followid = instance.get('followid'),
accept = True
)
app.config.save()
print('Config and database converted :3')
@cli.command('setup')
def cli_setup():
'Generate a new config'
while True:
app.config['general_host'] = click.prompt(
'What domain will the relay be hosted on?',
default = app.config.host
)
app.config.host = click.prompt('What domain will the relay be hosted on?', default=app.config.host)
if not app.config.host.endswith('example.com'):
break
@ -119,85 +45,14 @@ def cli_setup():
click.echo('The domain must not be example.com')
if not app.config.is_docker:
app.config['general_listen'] = click.prompt(
'Which address should the relay listen on?',
default = app.config.listen
)
app.config.listen = click.prompt('Which address should the relay listen on?', default=app.config.listen)
while True:
app.config['general_port'] = click.prompt(
'What TCP port should the relay listen on?',
default = app.config.port,
type = int
)
app.config.port = click.prompt('What TCP port should the relay listen on?', default=app.config.port, type=int)
break
app.config['database_type'] = click.prompt(
'What database backend would you like to use for the relay?',
default = app.config.dbtype,
type = click.Choice(['sqlite', 'postgresql', 'mysql']),
show_choices = True
)
if app.config.dbtype == 'sqlite':
app.config['sqlite_database'] = click.prompt(
'Where would you like to store your database file? Relative paths are relative to the config file location.',
default = app.config['sqlite_database']
)
else:
dbconfig = app.config.dbconfig
app.config.hostname = click.prompt(
'What address is your database listening on?',
default = dbconfig.hostname
) or None
app.config.port = click.prompt(
'What port is your database listening on?',
default = dbconfig.port
) or None
app.config.database = click.prompt(
'What would you like the name of the database to be?',
default = dbconfig.database
) or None
app.config.username = click.prompt(
'Which user will be connecting to the database?',
default = dbconfig.username
) or None
app.config.password = click.prompt(
'What is the database user\'s password?',
default = dbconfig.password
) or None
app.config.save()
with app.database.session as s:
s.put_config('name', click.prompt(
'What do you want to name your relay?',
default = s.get_config('name')
))
s.put_config('description', click.prompt(
'Provide a small description of your relay. This will be on the front page',
default = s.get_config('description')
))
s.put_config('whitelist', click.prompt(
'Enable the whitelist?',
default = s.get_config('whitelist'),
type = boolean
))
s.put_config('require_approval', click.prompt(
'Require instances to be approved when following?',
default = s.get_config('require_approval'),
type = boolean
))
if not app.config.is_docker and click.confirm('Relay all setup! Would you like to run it now?'):
cli_run.callback()
@ -206,9 +61,8 @@ def cli_setup():
def cli_run():
'Run the relay'
with app.database.session as s:
if not s.get_config('privkey') or app.config.host.endswith('example.com'):
return click.echo('Relay is not set up. Please run "activityrelay setup".')
if app.config.host.endswith('example.com'):
return click.echo('Relay is not set up. Please edit your relay config or run "activityrelay setup".')
vers_split = platform.python_version().split('.')
pip_command = 'pip3 uninstall pycrypto && pip3 install pycryptodome'
@ -222,7 +76,7 @@ def cli_run():
click.echo('Warning: PyCrypto is old and should be replaced with pycryptodome')
return click.echo(pip_command)
if not check_open_port(app.config.listen, app.config.port):
if not misc.check_open_port(app.config.listen, app.config.port):
return click.echo(f'Error: A server is already running on port {app.config.port}')
app.run()
@ -241,28 +95,22 @@ def cli_config_list():
click.echo('Relay Config:')
with app.database.session as s:
config = s.get_config_all()
for key in DEFAULT_CONFIG.keys():
if key in CONFIG_IGNORE:
continue
keystr = f'{key}:'.ljust(20)
click.echo(f'- {keystr} {config[key]}')
for key, value in app.config.items():
if key not in CONFIG_IGNORE:
key = f'{key}:'.ljust(20)
click.echo(f'- {key} {value}')
@cli_config.command('set')
@click.argument('key')
@click.argument('value', nargs=-1)
@click.argument('value')
def cli_config_set(key, value):
'Set a config value'
with app.database.session as s:
s.put_config(key, ' '.join(value))
value = s.get_config(key)
app.config[key] = value
app.config.save()
print(f'{key}: {value}')
print(f'{key}: {app.config[key]}')
@cli.group('inbox')
@ -277,9 +125,8 @@ def cli_inbox_list():
click.echo('Connected to the following instances or relays:')
with app.database.session as s:
for instance in s.get_instances():
click.echo(f'- {instance.inbox}')
for inbox in app.database.inboxes:
click.echo(f'- {inbox}')
@cli_inbox.command('follow')
@ -287,6 +134,9 @@ def cli_inbox_list():
def cli_inbox_follow(actor):
'Follow an actor (Relay must be running)'
if app.config.is_banned(actor):
return click.echo(f'Error: Refusing to follow banned actor: {actor}')
if not actor.startswith('http'):
domain = actor
actor = f'https://{actor}/actor'
@ -294,32 +144,24 @@ def cli_inbox_follow(actor):
else:
domain = urlparse(actor).hostname
with app.database.session as s:
if s.get_ban('domain', domain):
return click.echo(f'Error: Refusing to follow banned actor: {actor}')
try:
inbox_data = app.database['relay-list'][domain]
inbox = inbox_data['inbox']
instance = s.get_instance(domain)
if not instance:
actor_data = asyncio.run(get(actor, sign_headers=True))
except KeyError:
actor_data = asyncio.run(http.get(app.database, actor, sign_headers=True))
if not actor_data:
return click.echo(f'Failed to fetch actor: {actor}')
inbox = actor_data.shared_inbox
else:
inbox = instance.inbox
if instance.actor:
actor = instance.actor
message = Message.new_follow(
message = misc.Message.new_follow(
host = app.config.host,
actor = actor
)
asyncio.run(post(inbox, message))
asyncio.run(http.post(app.database, inbox, message))
click.echo(f'Sent follow message to actor: {actor}')
@ -328,8 +170,6 @@ def cli_inbox_follow(actor):
def cli_inbox_unfollow(actor):
'Unfollow an actor (Relay must be running)'
followid = None
if not actor.startswith('http'):
domain = actor
actor = f'https://{actor}/actor'
@ -337,176 +177,68 @@ def cli_inbox_unfollow(actor):
else:
domain = urlparse(actor).hostname
with app.database.session as s:
instance = s.get_instance(domain)
if not instance:
actor_data = asyncio.run(get(actor, sign_headers=True))
if not actor_data:
return click.echo(f'Failed to fetch actor: {actor}')
inbox = actor_data.shared_inbox
else:
inbox = instance.inbox
followid = instance.followid
if instance.actor:
actor = instance.actor
if followid:
message = Message.new_unfollow(
try:
inbox_data = app.database['relay-list'][domain]
inbox = inbox_data['inbox']
message = misc.Message.new_unfollow(
host = app.config.host,
actor = actor,
follow = followid
follow = inbox_data['followid']
)
else:
except KeyError:
actor_data = asyncio.run(http.get(app.database, actor, sign_headers=True))
inbox = actor_data.shared_inbox
message = misc.Message.new_unfollow(
host = app.config.host,
actor = actor,
follow = {
'type': 'Follow',
'object': actor,
'actor': app.config.actor
'actor': f'https://{app.config.host}/actor'
}
)
asyncio.run(post(inbox, message))
asyncio.run(http.post(app.database, inbox, message))
click.echo(f'Sent unfollow message to: {actor}')
@cli_inbox.command('add')
@click.argument('actor')
def cli_inbox_add(actor):
'Add an instance to the database'
@click.argument('inbox')
def cli_inbox_add(inbox):
'Add an inbox to the database'
if not actor.startswith('http'):
domain = actor
actor = f'https://{actor}/inbox'
if not inbox.startswith('http'):
inbox = f'https://{inbox}/inbox'
else:
domain = urlparse(actor).hostname
if app.config.is_banned(inbox):
return click.echo(f'Error: Refusing to add banned inbox: {inbox}')
with app.database.session as s:
data = {
'domain': domain,
'actor': actor,
'inbox': f'https://{domain}/inbox'
}
if app.database.get_inbox(inbox):
return click.echo(f'Error: Inbox already in database: {inbox}')
if s.get_instance(domain):
return click.echo(f'Error: Instance already in database: {domain}')
app.database.add_inbox(inbox)
app.database.save()
if s.get_ban('domain', domain):
return click.echo(f'Error: Refusing to add banned domain: {domain}')
nodeinfo = asyncio.run(fetch_nodeinfo(domain))
if nodeinfo:
if s.get_ban('software', nodeinfo.sw_name):
return click.echo(f'Error: Refusing to add banned software: {nodeinfo.sw_name}')
data['software'] = nodeinfo.sw_name
actor_data = asyncio.run(get(actor, sign_headers=True))
if actor_data:
instance = s.put_instance_actor(actor, nodeinfo)
else:
instance = s.put_instance(**data)
click.echo(f'Added instance to the database: {instance.domain}')
click.echo(f'Added inbox to the database: {inbox}')
@cli_inbox.command('remove')
@click.argument('domain')
def cli_inbox_remove(domain):
@click.argument('inbox')
def cli_inbox_remove(inbox):
'Remove an inbox from the database'
if domain.startswith('http'):
domain = urlparse(domain).hostname
with app.database.session as s:
try:
s.delete_instance(domain)
click.echo(f'Removed inbox from the database: {domain}')
dbinbox = app.database.get_inbox(inbox, fail=True)
except KeyError:
return click.echo(f'Error: Inbox does not exist: {domain}')
click.echo(f'Error: Inbox does not exist: {inbox}')
return
app.database.del_inbox(dbinbox['domain'])
app.database.save()
@cli.group('request')
def cli_request():
'Manage follow requests'
@cli_request.command('list')
def cli_request_list():
'List all the current follow requests'
click.echo('Follow requests:')
with app.database.session as s:
for row in s.get_requests():
click.echo(f'- {row.domain}')
@cli_request.command('approve')
@click.argument('domain')
def cli_request_approve(domain):
'Approve a follow request'
with app.database.session as s:
try:
instance = s.get_request(domain)
except KeyError:
return click.echo(f'No request for domain exists: {domain}')
data = {'joined': datetime.now()}
s.update('instances', data, id=instance.id)
asyncio.run(post(
instance.inbox,
Message.new_response(
host = app.config.host,
actor = instance.actor,
followid = instance.followid,
accept = True
)
))
return click.echo(f'Accepted follow request for domain: {domain}')
@cli_request.command('deny')
@click.argument('domain')
def cli_request_deny(domain):
'Deny a follow request'
with app.database.session as s:
try:
instance = s.get_request(domain)
except KeyError:
return click.echo(f'No request for domain exists: {domain}')
s.delete_instance(domain)
asyncio.run(post(
instance.inbox,
Message.new_response(
host = app.config.host,
actor = instance.actor,
followid = instance.followid,
accept = False
)
))
return click.echo(f'Denied follow request for domain: {domain}')
click.echo(f'Removed inbox from the database: {inbox}')
@cli.group('instance')
@ -521,50 +253,42 @@ def cli_instance_list():
click.echo('Banned instances or relays:')
with app.database.session as s:
for row in s.get_bans('domain'):
click.echo(f'- {row.name}')
for domain in app.config.blocked_instances:
click.echo(f'- {domain}')
@cli_instance.command('ban')
@click.argument('domain')
def cli_instance_ban(domain):
@click.argument('target')
def cli_instance_ban(target):
'Ban an instance and remove the associated inbox if it exists'
if domain.startswith('http'):
domain = urlparse(domain).hostname
if target.startswith('http'):
target = urlparse(target).hostname
with app.database.session as s:
try:
s.put_ban('domain', domain)
if app.config.ban_instance(target):
app.config.save()
except KeyError:
return click.echo(f'Instance already banned: {domain}')
if app.database.del_inbox(target):
app.database.save()
try:
s.delete_instance(domain)
click.echo(f'Banned instance: {target}')
return
except KeyError:
pass
click.echo(f'Banned instance: {domain}')
click.echo(f'Instance already banned: {target}')
@cli_instance.command('unban')
@click.argument('domain')
def cli_instance_unban(domain):
@click.argument('target')
def cli_instance_unban(target):
'Unban an instance'
if domain.startswith('http'):
domain = urlparse(domain).hostname
if app.config.unban_instance(target):
app.config.save()
with app.database.session as s:
try:
s.delete_ban('domain', domain)
click.echo(f'Unbanned instance: {domain}')
click.echo(f'Unbanned instance: {target}')
return
except KeyError:
click.echo(f'Instance wasn\'t banned: {domain}')
click.echo(f'Instance wasn\'t banned: {target}')
@cli.group('software')
@ -579,9 +303,8 @@ def cli_software_list():
click.echo('Banned software:')
with app.database.session as s:
for row in s.get_bans('software'):
click.echo(f'- {row.name}')
for software in app.config.blocked_software:
click.echo(f'- {software}')
@cli_software.command('ban')
@ -592,26 +315,25 @@ def cli_software_list():
def cli_software_ban(name, fetch_nodeinfo):
'Ban software. Use RELAYS for NAME to ban relays'
with app.database.session as s:
if name == 'RELAYS':
for name in RELAY_SOFTWARE:
s.put_ban('software', name)
app.config.ban_software(name)
app.config.save()
return click.echo('Banned all relay software')
if fetch_nodeinfo:
nodeinfo = asyncio.run(fetch_nodeinfo(name))
nodeinfo = asyncio.run(http.fetch_nodeinfo(app.database, name))
if not nodeinfo:
return click.echo(f'Failed to fetch software name from domain: {name}')
click.echo(f'Failed to fetch software name from domain: {name}')
name = nodeinfo.sw_name
try:
s.put_ban('software', name)
click.echo(f'Banned software: {name}')
if app.config.ban_software(name):
app.config.save()
return click.echo(f'Banned software: {name}')
except KeyError:
click.echo(f'Software already banned: {name}')
@ -623,26 +345,25 @@ def cli_software_ban(name, fetch_nodeinfo):
def cli_software_unban(name, fetch_nodeinfo):
'Unban software. Use RELAYS for NAME to unban relays'
with app.database.session as s:
if name == 'RELAYS':
for name in RELAY_SOFTWARE:
s.put_ban('software', name)
app.config.unban_software(name)
app.config.save()
return click.echo('Unbanned all relay software')
if fetch_nodeinfo:
nodeinfo = asyncio.run(fetch_nodeinfo(name))
nodeinfo = asyncio.run(http.fetch_nodeinfo(app.database, name))
if not nodeinfo:
return click.echo(f'Failed to fetch software name from domain: {name}')
click.echo(f'Failed to fetch software name from domain: {name}')
name = nodeinfo.sw_name
try:
s.put_ban('software', name)
click.echo(f'Unbanned software: {name}')
if app.config.unban_software(name):
app.config.save()
return click.echo(f'Unbanned software: {name}')
except KeyError:
click.echo(f'Software wasn\'t banned: {name}')
@ -656,61 +377,47 @@ def cli_whitelist():
def cli_whitelist_list():
'List all the instances in the whitelist'
click.echo('Current whitelisted domains:')
click.echo('Current whitelisted domains')
with app.database.session as s:
for row in s.get_whitelist():
click.echo(f'- {row.domain}')
for domain in app.config.whitelist:
click.echo(f'- {domain}')
@cli_whitelist.command('add')
@click.argument('domain')
def cli_whitelist_add(domain):
'Add a domain to the whitelist'
@click.argument('instance')
def cli_whitelist_add(instance):
'Add an instance to the whitelist'
with app.database.session as s:
try:
s.put_whitelist(domain)
click.echo(f'Instance added to the whitelist: {domain}')
if not app.config.add_whitelist(instance):
return click.echo(f'Instance already in the whitelist: {instance}')
except KeyError:
return click.echo(f'Instance already in the whitelist: {domain}')
app.config.save()
click.echo(f'Instance added to the whitelist: {instance}')
@cli_whitelist.command('remove')
@click.argument('domain')
def cli_whitelist_remove(domain):
'Remove a domain from the whitelist'
@click.argument('instance')
def cli_whitelist_remove(instance):
'Remove an instance from the whitelist'
with app.database.session as s:
try:
s.delete_whitelist(domain)
click.echo(f'Removed instance from the whitelist: {domain}')
if not app.config.del_whitelist(instance):
return click.echo(f'Instance not in the whitelist: {instance}')
except KeyError:
click.echo(f'Instance not in the whitelist: {domain}')
app.config.save()
if app.config.whitelist_enabled:
if app.database.del_inbox(instance):
app.database.save()
click.echo(f'Removed instance from the whitelist: {instance}')
@cli_whitelist.command('import')
def cli_whitelist_import():
'Add all current inboxes to the whitelist'
with app.database.session as s:
for row in s.get_instances():
try:
s.put_whitelist(row.domain)
click.echo(f'Instance added to the whitelist: {row.domain}')
except KeyError:
click.echo(f'Instance already in the whitelist: {row.domain}')
@cli_whitelist.command('clear')
def cli_whitelist_clear():
'Clear all items out of the whitelist'
with app.database.session as s:
s.delete('whitelist')
for domain in app.database.hostnames:
cli_whitelist_add.callback(domain)
def main():

View file

@ -36,9 +36,6 @@ def set_app(new_app):
def boolean(value):
if isinstance(value, bytes):
value = str(value, 'utf-8')
if isinstance(value, str):
if value.lower() in ['on', 'y', 'yes', 'true', 'enable', 'enabled', '1']:
return True
@ -66,7 +63,7 @@ def boolean(value):
return value.__bool__()
except AttributeError:
raise TypeError(f'Cannot convert object of type "{type(value).__name__}"')
raise TypeError(f'Cannot convert object of type "{clsname(value)}"')
def check_open_port(host, port):
@ -81,32 +78,6 @@ def check_open_port(host, port):
return False
class AppBase:
@property
def app(self):
return app
@property
def client(self):
return app.client
@property
def config(self):
return app.config
@property
def database(self):
return app.database
@property
def signer(self):
return app.signer
class DotDict(dict):
def __init__(self, _data, **kwargs):
dict.__init__(self)
@ -192,15 +163,14 @@ class DotDict(dict):
class Message(DotDict):
@classmethod
def new_actor(cls, host, pubkey, name=None, description=None, locked=False):
def new_actor(cls, host, pubkey, description=None):
return cls({
'@context': 'https://www.w3.org/ns/activitystreams',
'id': f'https://{host}/actor',
'type': 'Application',
'preferredUsername': 'relay',
'name': name or 'ActivityRelay',
'name': 'ActivityRelay',
'summary': description or 'ActivityRelay bot',
'manuallyApprovesFollowers': locked,
'followers': f'https://{host}/followers',
'following': f'https://{host}/following',
'inbox': f'https://{host}/inbox',
@ -340,3 +310,31 @@ class Response(AiohttpResponse):
@location.setter
def location(self, value):
self.headers['Location'] = value
class View(AiohttpView):
async def _iter(self):
if self.request.method not in METHODS:
self._raise_allowed_methods()
method = getattr(self, self.request.method.lower(), None)
if method is None:
self._raise_allowed_methods()
return await method(**self.request.match_info)
@property
def app(self):
return self._request.app
@property
def config(self):
return self.app.config
@property
def database(self):
return self.app.database

View file

@ -4,7 +4,6 @@ import logging
from cachetools import LRUCache
from uuid import uuid4
from .database import RELAY_SOFTWARE
from .misc import Message
@ -12,25 +11,20 @@ cache = LRUCache(1024)
def person_check(actor, software):
## pleroma and akkoma use Person for the actor type for some reason
if software in {'akkoma', 'pleroma'} and actor.id != f'https://{actor.domain}/relay':
return True
## pleroma and akkoma may use Person for the actor type for some reason
if software in {'akkoma', 'pleroma'} and actor.id == f'https://{actor.domain}/relay':
return False
## make sure the actor is an application
elif actor.type != 'Application':
if actor.type != 'Application':
return True
async def handle_relay(request, s):
async def handle_relay(request):
if request.message.objectid in cache:
logging.verbose(f'already relayed {request.message.objectid}')
return
if request.message.get('to') != ['https://www.w3.org/ns/activitystreams#Public']:
logging.verbose('Message was not public')
logging.verbose(request.message.get('to'))
return
message = Message.new_announce(
host = request.config.host,
object = request.message.objectid
@ -39,13 +33,13 @@ async def handle_relay(request, s):
cache[request.message.objectid] = message.id
logging.debug(f'>> relay: {message}')
inboxes = s.distill_inboxes(request.message)
inboxes = request.database.distill_inboxes(request.message)
for inbox in inboxes:
request.app.push_message(inbox, message)
async def handle_forward(request, s):
async def handle_forward(request):
if request.message.id in cache:
logging.verbose(f'already forwarded {request.message.id}')
return
@ -58,72 +52,56 @@ async def handle_forward(request, s):
cache[request.message.id] = message.id
logging.debug(f'>> forward: {message}')
inboxes = s.distill_inboxes(request.message)
inboxes = request.database.distill_inboxes(request.message)
for inbox in inboxes:
request.app.push_message(inbox, message)
async def handle_follow(request, s):
approve = True
async def handle_follow(request):
nodeinfo = await request.app.client.fetch_nodeinfo(request.actor.domain)
software = nodeinfo.sw_name if nodeinfo else None
## reject if the actor isn't whitelisted while the whitelist is enabled
if s.get_config('whitelist') and not s.get_whitelist(request.actor.domain):
logging.verbose(f'Rejected actor for not being in the whitelist: {request.actor.id}')
approve = False
## reject if software used by actor is banned
if s.get_ban('software', software):
logging.verbose(f'Rejected follow from actor for using specific software: actor={request.actor.id}, software={software}')
approve = False
if request.config.is_banned_software(software):
request.app.push_message(
request.actor.shared_inbox,
Message.new_response(
host = request.config.host,
actor = request.actor.id,
followid = request.message.id,
accept = False
)
)
return logging.verbose(f'Rejected follow from actor for using specific software: actor={request.actor.id}, software={software}')
## reject if the actor is not an instance actor
if person_check(request.actor, software):
logging.verbose(f'Non-application actor tried to follow: {request.actor.id}')
approve = False
if approve:
if not request.instance:
s.put_instance(
domain = request.actor.domain,
actor = request.actor.id,
inbox = request.actor.shared_inbox,
actor_data = request.actor,
software = software,
followid = request.message.id,
accept = not s.get_config('require_approval')
)
if s.get_config('require_approval'):
return
else:
s.put_instance(
domain = request.actor.domain,
followid = request.message.id
)
# Rejects don't seem to work right with mastodon
request.app.push_message(
request.actor.inbox,
request.actor.shared_inbox,
Message.new_response(
host = request.config.host,
actor = request.message.actorid,
actor = request.actor.id,
followid = request.message.id,
accept = approve
accept = False
)
)
## Don't send a follow if the follow has been rejected
if not approve:
return
return logging.verbose(f'Non-application actor tried to follow: {request.actor.id}')
## Make sure two relays aren't continuously following each other
if software in RELAY_SOFTWARE and not request.instance:
return
request.database.add_inbox(request.actor.shared_inbox, request.message.id, software)
request.database.save()
request.app.push_message(
request.actor.shared_inbox,
Message.new_response(
host = request.config.host,
actor = request.actor.id,
followid = request.message.id,
accept = True
)
)
# Are Akkoma and Pleroma the only two that expect a follow back?
# Ignoring only Mastodon for now
@ -137,24 +115,16 @@ async def handle_follow(request, s):
)
async def handle_undo(request, s):
async def handle_undo(request):
## If the object is not a Follow, forward it
if request.message.object.type != 'Follow':
return await handle_forward(request)
instance_follow = request.instance.followid
message_follow = request.message.object.id
if not request.database.del_inbox(request.actor.domain, request.message.id):
return
if person_check(request.actor, request.instance.software):
return logging.verbose(f'Non-application actor tried to unfollow: {request.actor.id}')
request.database.save()
if instance_follow and instance_follow != message_follow:
return logging.verbose(f'Followid does not match: {instance_follow}, {message_follow}')
s.delete('instances', id=request.instance.id)
logging.verbose(f'Removed inbox: {request.instance.inbox}')
if request.instance.software != 'mastodon':
request.app.push_message(
request.actor.shared_inbox,
Message.new_unfollow(
@ -179,26 +149,12 @@ async def run_processor(request):
if request.message.type not in processors:
return
with request.database.session as s:
if request.instance:
new_data = {}
if not request.instance.software:
logging.verbose(f'Fetching nodeinfo for instance: {request.instance.domain}')
nodeinfo = await request.app.client.fetch_nodeinfo(request.instance.domain)
if request.instance and not request.instance.get('software'):
nodeinfo = await request.app.client.fetch_nodeinfo(request.instance['domain'])
if nodeinfo:
new_data['software'] = nodeinfo.sw_name
if not request.instance.actor:
logging.verbose(f'Fetching actor for instance: {request.instance.domain}')
new_data['actor'] = request.signature.keyid.split('#', 1)[0]
if not request.instance.actor_data:
new_data['actor_data'] = request.actor
if new_data:
s.put_instance(request.actor.domain, **new_data)
request.instance['software'] = nodeinfo.sw_name
request.database.save()
logging.verbose(f'New "{request.message.type}" from actor: {request.actor.id}')
return await processors[request.message.type](request, s)
return await processors[request.message.type](request)

View file

@ -5,7 +5,6 @@ import subprocess
import traceback
from pathlib import Path
from urllib.parse import urlparse
from . import __version__, misc
from .misc import DotDict, Message, Response
@ -25,29 +24,24 @@ if Path(__file__).parent.parent.joinpath('.git').exists():
pass
def register_route(method, *paths):
def register_route(method, path):
def wrapper(func):
for path in paths:
routes.append([method, path, func])
return func
return wrapper
@register_route('GET', '/')
async def home(request, s):
hostnames = s.get_hostnames()
config = s.get_config_all()
targets = '<br>'.join(hostnames)
note = config.description
count = len(hostnames)
async def home(request):
targets = '<br>'.join(request.database.hostnames)
note = request.config.note
count = len(request.database.hostnames)
host = request.config.host
text = f"""
<html><head>
<title>ActivityPub Relay at {host}</title>
<title>SEDI中繼器</title>
<style>
p {{ color: #FFFFFF; font-family: monospace, arial; font-size: 100%; }}
body {{ background-color: #000000; }}
@ -59,41 +53,39 @@ a:hover {{ color: #8AF; }}
<body>
<p>This is an Activity Relay for fediverse instances.</p>
<p>{note}</p>
<p>You may subscribe to this relay with the address: <a href="https://{host}/actor">https://{host}/actor</a></p>
<p>To host your own relay, you may download the code at this address: <a href="https://git.pleroma.social/pleroma/relay">https://git.pleroma.social/pleroma/relay</a></p>
<br><p>List of {count} registered instances:<br>{targets}</p>
<p>Misskey及Mastodon站長請訂閱這個地址<a href="https://{host}/inbox">https://{host}/inbox</a></p>
<p>Pleroma及Friendica站長請訂閱這個地址<a href="https://{host}/actor">https://{host}/actor</a></p>
<p>原始碼<a href="https://git.seediqbale.xyz/pch_xyz/sedi-relay">https://git.seediqbale.xyz/pch_xyz/sedi-relay</a></p>
<p>請我喝杯咖啡<a href="https://buymeacoffee.com/SEDI">https://buymeacoffee.com/SEDI</a></p>
<p>activityrelay v0.2.4</p>
<br><p> {count} 個實例訂閱中<br>{targets}</p>
</body></html>"""
return Response.new(text, ctype='html')
@register_route('GET', '/actor', '/inbox')
async def actor(request, s):
@register_route('GET', '/inbox')
@register_route('GET', '/actor')
async def actor(request):
data = Message.new_actor(
host = request.config.host,
pubkey = request.app.signer.pubkey,
name = s.get_config('name'),
description = s.get_config('description'),
locked = s.get_config('require_approval')
pubkey = request.database.signer.pubkey
)
return Response.new(data, ctype='activity')
@register_route('POST', '/actor', '/inbox')
async def inbox(request, s):
@register_route('POST', '/inbox')
@register_route('POST', '/actor')
async def inbox(request):
config = request.config
database = request.database
## reject if missing signature header
if not request.signature:
logging.verbose('Actor missing signature header')
raise HTTPUnauthorized(body='missing signature')
domain = urlparse(request.signature.keyid).hostname
## reject if actor is banned
if s.get_ban('domain', domain):
logging.verbose(f'Ignored request from banned actor: {domain}')
return Response.new_error(403, 'access denied', 'json')
try:
request['message'] = await request.json(loads=Message.new_from_json)
@ -125,8 +117,17 @@ async def inbox(request, s):
logging.verbose(f'Failed to fetch actor: {request.signature.keyid}')
return Response.new_error(400, 'failed to fetch actor', 'json')
request['instance'] = s.get_instance(request.actor.shared_inbox)
config = s.get_config_all()
request['instance'] = request.database.get_inbox(request['actor'].inbox)
## reject if the actor isn't whitelisted while the whitelist is enabled
if config.whitelist_enabled and not config.is_whitelisted(request.actor.domain):
logging.verbose(f'Rejected actor for not being in the whitelist: {request.actor.id}')
return Response.new_error(403, 'access denied', 'json')
## reject if actor is banned
if request.config.is_banned(request.actor.domain):
logging.verbose(f'Ignored request from banned actor: {request.actor.id}')
return Response.new_error(403, 'access denied', 'json')
## reject if the signature is invalid
try:
@ -138,7 +139,7 @@ async def inbox(request, s):
return Response.new_error(401, str(e), 'json')
## reject if activity type isn't 'Follow' and the actor isn't following
if request.message.type != 'Follow' and (not request.instance or not request.instance.joined):
if request.message.type != 'Follow' and not database.get_inbox(request.actor.domain):
logging.verbose(f'Rejected actor for trying to post while not following: {request.actor.id}')
return Response.new_error(401, 'access denied', 'json')
@ -169,16 +170,16 @@ async def webfinger(request):
@register_route('GET', '/nodeinfo/{version:\d.\d\.json}')
async def nodeinfo(request, s):
async def nodeinfo(request):
niversion = request.match_info['version'][:3]
data = dict(
name = 'activityrelay',
version = version,
protocols = ['activitypub'],
open_regs = not s.get_config('whitelist'),
open_regs = not request.config.whitelist_enabled,
users = 1,
metadata = {'peers': s.get_hostnames()}
metadata = {'peers': request.database.hostnames}
)
if niversion == '2.1':

View file

@ -3,4 +3,3 @@ aputils@https://git.barkshark.xyz/barkshark/aputils/archive/0.1.3.tar.gz
cachetools>=5.2.0
click>=8.1.2
pyyaml>=6.0
tinysql[all]@https://git.barkshark.xyz/barkshark/tinysql/archive/0.1.0.tar.gz