Compare commits

...

19 commits

Author SHA1 Message Date
Izalia Mae b73e43a9ff Merge branch 'master' into sqldatabase 2022-12-29 07:28:14 -05:00
Izalia Mae 76c678b215 fix tinysql optional deps 2022-12-26 05:57:45 -05:00
Izalia Mae 4acdfdbfc1 update docs 2022-12-20 08:00:18 -05:00
Izalia Mae 261dce50ab let setup command configure the database 2022-12-20 07:59:58 -05:00
Izalia Mae ed25fcab35 remove print call 2022-12-20 06:17:20 -05:00
Izalia Mae 8eb60cb0f4 split database into sub-module 2022-12-20 06:09:27 -05:00
Izalia Mae be556163c9 only set signal handler on server start and stop 2022-12-20 06:08:17 -05:00
Izalia Mae 4979d598f1 call app.setup first 2022-12-20 06:05:06 -05:00
Izalia Mae 04ae6a8851 remove appdirs dep and add option to set sqlite database path 2022-12-20 06:01:20 -05:00
Izalia Mae e3c4377db6 fix NameError in Connection.delete_instance 2022-12-14 09:03:57 -05:00
Izalia Mae 1d8de63d95 fix AttributeError when fetching an instance by name 2022-12-14 08:39:56 -05:00
Izalia Mae 0322fa567b set log level from database config 2022-12-14 05:31:45 -05:00
Izalia Mae 426adf1117 fix tinysql url 2022-12-14 05:14:17 -05:00
Izalia Mae 3907620f24 add missing imports for tinysql 2022-12-14 03:28:50 -05:00
Izalia Mae 1028256065 add cli commands to approve/deny follow requests 2022-12-13 20:46:03 -05:00
Izalia Mae 556ac420e6 a few extra comments 2022-12-13 19:44:08 -05:00
Izalia Mae 2c287d301f prevent (un)follows from users 2022-12-13 10:14:27 -05:00
Izalia Mae ed066d94af add more info to actor endpoint 2022-12-13 10:09:12 -05:00
Izalia Mae c41cd6e015 first draft 2022-12-13 08:27:09 -05:00
19 changed files with 1418 additions and 821 deletions


@@ -3,9 +3,9 @@
There are a number of commands to manage your relay's database and config. You can add `--help` to
any category or command to get help on that specific option (ex. `activityrelay inbox --help`).
Note: Unless otherwise specified, it is recommended to run any commands while the relay is shut down.
A config file can be specified by adding `--config [path/to/config.yaml]`.
Note 2: `activityrelay` is only available via pip or pipx if `~/.local/bin` is in `$PATH`. If it
Note: `activityrelay` is only available via pip or pipx if `~/.local/bin` is in `$PATH`. If it
isn't, use `python3 -m relay` if installed via pip or `~/.local/bin/activityrelay` if installed
via pipx
@@ -24,26 +24,35 @@ Run the setup wizard to configure your relay.
activityrelay setup
## Convert
Convert an old `relay.yaml` and `relay.jsonld` to the newer formats.
activityrelay convert [--old-config relay.yaml]
## Config
Manage the relay config
Manage the relay config.
activityrelay config
### List
List the current config key/value pairs
List the current config key/value pairs.
activityrelay config list
### Set
Set a value for a config option
Set a value for a config option.
activityrelay config set <key> <value>
Note: The relay must be restarted when setting `log_level`, `workers`, `push_limit`, or `http_timeout`.
## Inbox
@@ -92,6 +101,32 @@ not exist anymore, use the `inbox remove` command instead.
Note: The relay must be running for this command to work.
## Request
Manage instance follow requests.
### List
List all instances asking to follow the relay.
activityrelay request list
### Approve
Allow an instance to join the relay.
activityrelay request approve <domain>
### Deny
Disallow an instance from joining the relay.
activityrelay request deny <domain>
## Whitelist
Manage the whitelisted domains.
@@ -120,7 +155,7 @@ Remove a domain from the whitelist.
### Import
Add all current inboxes to the whitelist
Add all current inboxes to the whitelist.
activityrelay whitelist import
@@ -132,7 +167,7 @@ Manage the instance ban list.
### List
List the currently banned instances
List the currently banned instances.
activityrelay instance list


@@ -2,14 +2,6 @@
## General
### DB
The path to the database. It contains the relay actor private key and all subscribed
instances. If the path is not absolute, it is relative to the working directory.
db: relay.jsonld
### Listener
The address and port the relay will listen on. If the reverse proxy (nginx, apache, caddy, etc)
@@ -19,46 +11,6 @@ is running on the same host, it is recommended to change `listen` to `localhost`
port: 8080
### Note
A small blurb to describe your relay instance. This will show up on the relay's home page.
note: "Make a note about your instance here."
### Post Limit
The maximum number of messages to send out at once. For each incoming message, a message will be
sent out to every subscribed instance minus the instance which sent the message. This limit
is to prevent too many outgoing connections from being made, so adjust if necessary.
Note: If the `workers` option is set to anything above 0, this limit will be per worker.
push_limit: 512
### Push Workers
The relay can be configured to use threads to push messages out. For smaller relays, this isn't
necessary, but bigger ones (>100 instances) will want to set this to the number of available cpu
threads.
workers: 0
### JSON GET cache limit
JSON objects (actors, nodeinfo, etc) will get cached when fetched. This will set the max number of
objects to keep in the cache.
json_cache: 1024
## AP
Various ActivityPub-related settings
### Host
The domain your relay will use to identify itself.
@@ -66,40 +18,123 @@ The domain your relay will use to identify itself.
host: relay.example.com
### Whitelist Enabled
## Database
If set to `true`, only instances in the whitelist can follow the relay. Any subscribed instances
not in the whitelist will be removed from the inbox list on startup.
### Type
whitelist_enabled: false
The type of SQL database to use. Options:
* sqlite (default)
* postgresql
* mysql
type: sqlite
### Whitelist
### Minimum Connections
A list of domains of instances which are allowed to subscribe to your relay.
The minimum number of database connections to keep open (does nothing at the moment)
whitelist:
- bad-instance.example.com
- another-bad-instance.example.com
min_connections: 0
### Blocked Instances
### Maximum Connections
A list of instances which are unable to follow the instance. If a subscribed instance is added to
the block list, it will be removed from the inbox list on startup.
The maximum number of database connections to open (does nothing at the moment)
blocked_instances:
- bad-instance.example.com
- another-bad-instance.example.com
max_connections: 10
### Blocked Software
## Sqlite
A list of ActivityPub software which cannot follow your relay. This list is empty by default, but
setting this to the below list will block all other relays and prevent relay chains
### Database
blocked_software:
- activityrelay
- aoderelay
- social.seattle.wa.us-relay
- unciarelay
The path to the database file.
database: relay.sqlite3
If the path is relative, it will be relative to the directory the config file is located in. For
instance, if the config is located at `/home/izalia/.config/activityrelay/config.yaml`, the
following:
relay.sqlite3
will resolve to:
/home/izalia/.config/activityrelay/relay.sqlite3
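This resolution can be pictured with pathlib; a minimal sketch using the example paths above (it mirrors the `Config.dbconfig` logic later in this changeset and is illustrative rather than code taken from the relay):

from pathlib import Path

config_path = Path('/home/izalia/.config/activityrelay/config.yaml')
sqlite_path = Path('relay.sqlite3')

# relative database paths are re-anchored next to the config file
if not sqlite_path.is_absolute():
    sqlite_path = config_path.with_name(str(sqlite_path)).resolve()

print(sqlite_path)  # /home/izalia/.config/activityrelay/relay.sqlite3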
## PostgreSQL
### Database
Name of the database to use.
database: activityrelay
### Hostname
The address to use when connecting to the database. A value of `null` will use the default of
`/var/run/postgresql`
hostname: null
### Port
The port to use when connecting to the database. A value of `null` will use the default of `5432`
port: null
### Username
The user to use when connecting to the database. A value of `null` will use the current system
username.
username: null
### Password
The password for the database user.
password: null
## MySQL
### Database
Name of the database to use.
database: activityrelay
### Hostname
The address to use when connecting to the database. A value of `null` will use the default of
`/var/run/mysqld/mysqld.sock`
### Port
The port to use when connecting to the database. A value of `null` will use the default of `3306`
port: null
### Username
The user to use when connecting to the database. A value of `null` will use the current system
username.
username: null
### Password
The password for the database user.
password: null


@@ -5,6 +5,19 @@ proxy, and setup the relay to run via a supervisor. Example configs for caddy, n
in `installation/`
## Pre-build Executables
All-in-one executables can be downloaded from `https://git.pleroma.social/pleroma/relay/-/releases`
under the `Other` section of `Assets`. They don't require any extra setup and can be placed
anywhere. Run the setup wizard
./activityrelay setup
and start it up when done
./activityrelay run
## Pipx
Pipx uses pip and a custom venv implementation to automatically install modules into a Python


@@ -14,7 +14,15 @@ a = Analysis(
'aputils.errors',
'aputils.misc',
'aputils.objects',
'aputils.signer'
'aputils.signer',
'tinysql.base',
'tinysql.database',
'tinysql.error',
'tinysql.mysql',
'tinysql.postgresql',
'tinysql.sqlite',
'tinysql.statement'
],
hookspath=[],
hooksconfig={},


@@ -1,43 +1,32 @@
# this is the path that the object graph will get dumped to (in JSON-LD format),
# you probably shouldn't change it, but you can if you want.
db: relay.jsonld
general:
# Address the relay will listen on. Set to "0.0.0.0" for any address
listen: 0.0.0.0
# TCP port the relay will listen on
port: 3621
# Domain the relay will advertise itself as
host: relay.example.com
# Listener
listen: 0.0.0.0
port: 8080
database:
# SQL backend to use. Available options: "sqlite", "postgresql", "mysql".
type: sqlite
# Minimum number of database connections to keep open
min_connections: 0
# Maximum number of database connections to open
max_connections: 10
# Note
note: "Make a note about your instance here."
sqlite:
database: relay.sqlite3
# Number of worker threads to start. If 0, use asyncio futures instead of threads.
workers: 0
postgres:
database: activityrelay
hostname: null
port: null
username: null
password: null
# Maximum number of inbox posts to do at once
# If workers is set to 1 or above, this is the max for each worker
push_limit: 512
# The amount of json objects to cache from GET requests
json_cache: 1024
ap:
# This is used for generating activitypub messages, as well as instructions for
# linking AP identities. It should be an SSL-enabled domain reachable by https.
host: 'relay.example.com'
blocked_instances:
- 'bad-instance.example.com'
- 'another-bad-instance.example.com'
whitelist_enabled: false
whitelist:
- 'good-instance.example.com'
- 'another.good-instance.example.com'
# uncomment the lines below to prevent certain activitypub software from posting
# to the relay (all known relays by default). this uses the software name in nodeinfo
#blocked_software:
#- 'activityrelay'
#- 'aoderelay'
#- 'social.seattle.wa.us-relay'
#- 'unciarelay'
mysql:
database: activityrelay
hostname: null
port: null
username: null
password: null
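The new layout groups options by category (general, database, sqlite, postgres, mysql). When the relay reads the file, each option is flattened into a `category_name` key with a default fallback, as `Config.load` does later in this diff. A minimal sketch of that flattening, assuming PyYAML and a trimmed-down defaults table:

import yaml

# trimmed illustration of the DEFAULTS table in relay/config.py
DEFAULTS = {
    'general_listen': '0.0.0.0',
    'general_port': 8080,
    'database_type': 'sqlite',
    'sqlite_database': 'relay.sqlite3'
}

with open('config.yaml') as fd:
    config = yaml.safe_load(fd) or {}

flat = {}
for key, default in DEFAULTS.items():
    category, name = key.split('_', 1)
    flat[key] = config.get(category, {}).get(name, default)

print(flat['general_port'])  # 8080 unless overridden under "general:"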


@@ -1,4 +1,5 @@
import asyncio
import inspect
import logging
import os
import queue
@@ -7,49 +8,41 @@ import threading
import traceback
from aiohttp import web
from aputils import Signer
from datetime import datetime, timedelta
from .config import RelayConfig
from .database import RelayDatabase
from .config import Config
from .database import Database
from .http_client import HttpClient
from .logger import set_level
from .misc import DotDict, check_open_port, set_app
from .views import routes
class Application(web.Application):
def __init__(self, cfgpath):
web.Application.__init__(self)
self['starttime'] = None
self['running'] = False
self['config'] = RelayConfig(cfgpath)
if not self['config'].load():
self['config'].save()
if self.config.is_docker:
self.config.update({
'db': '/data/relay.jsonld',
'listen': '0.0.0.0',
'port': 8080
})
self['workers'] = []
self['last_worker'] = 0
web.Application.__init__(self,
middlewares = [
server_middleware
]
)
set_app(self)
self['database'] = RelayDatabase(self['config'])
self['database'].load()
self['config'] = Config(cfgpath)
self['database'] = Database(**self.config.dbconfig)
self['client'] = HttpClient()
self['client'] = HttpClient(
database = self.database,
limit = self.config.push_limit,
timeout = self.config.timeout,
cache_size = self.config.json_cache
)
self['starttime'] = None
self['signer'] = None
self['running'] = False
self['workers'] = []
self['last_worker'] = 0
self.set_signal_handler()
self.database.create()
with self.database.session as s:
set_level(s.get_config('log_level'))
@property
@@ -67,18 +60,32 @@ class Application(web.Application):
return self['database']
@property
def signer(self):
if not self['signer']:
with self.database.session as s:
privkey = s.get_config('privkey')
if not privkey:
self['signer'] = Signer.new(self.config.keyid)
s.put_config('privkey', self['signer'].export())
else:
self['signer'] = Signer(privkey, self.config.keyid)
return self['signer']
@property
def uptime(self):
if not self['starttime']:
return timedelta(seconds=0)
uptime = datetime.now() - self['starttime']
return timedelta(seconds=uptime.seconds)
return datetime.now() - self['starttime']
def push_message(self, inbox, message):
if self.config.workers <= 0:
if len(self['workers']) <= 0:
return asyncio.ensure_future(self.client.post(inbox, message))
worker = self['workers'][self['last_worker']]
@@ -90,10 +97,10 @@
self['last_worker'] = 0
def set_signal_handler(self):
def set_signal_handler(self, enable=True):
for sig in {'SIGHUP', 'SIGINT', 'SIGQUIT', 'SIGTERM'}:
try:
signal.signal(getattr(signal, sig), self.stop)
signal.signal(getattr(signal, sig), self.stop if enable else signal.SIG_DFL)
# some signals don't exist in windows, so skip them
except AttributeError:
@@ -109,21 +116,30 @@
logging.info(f'Starting webserver at {self.config.host} ({self.config.listen}:{self.config.port})')
asyncio.run(self.handle_run())
self.database.disconnect()
def stop(self, *_):
self['running'] = False
def setup(self):
self.client.setup()
async def handle_run(self):
self.set_signal_handler(True)
self['running'] = True
if self.config.workers > 0:
for i in range(self.config.workers):
worker = PushWorker(self)
worker.start()
with self.database.session as s:
workers = s.get_config('workers')
self['workers'].append(worker)
if workers > 0:
for i in range(workers):
worker = PushWorker(self)
worker.start()
self['workers'].append(worker)
runner = web.AppRunner(self, access_log_format='%{X-Forwarded-For}i "%r" %s %b "%{User-Agent}i"')
await runner.setup()
@@ -145,6 +161,7 @@ class Application(web.Application):
self['starttime'] = None
self['running'] = False
self['workers'].clear()
self.set_signal_handler(False)
class PushWorker(threading.Thread):
@@ -155,12 +172,8 @@ class PushWorker(threading.Thread):
def run(self):
self.client = HttpClient(
database = self.app.database,
limit = self.app.config.push_limit,
timeout = self.app.config.timeout,
cache_size = self.app.config.json_cache
)
self.client = HttpClient()
self.client.setup()
asyncio.run(self.handle_queue())
@@ -183,6 +196,24 @@ class PushWorker(threading.Thread):
await self.client.close()
@web.middleware
async def server_middleware(request, handler):
if len(inspect.signature(handler).parameters) == 1:
response = await handler(request)
else:
with request.database.session as s:
response = await handler(request, s)
## make sure there's some sort of response
if response == None:
logging.error(f'No response for handler: {handler}')
response = Response.new_error(500, 'No response')
response.headers['Server'] = 'ActivityRelay'
return response
## Can't sub-class web.Request, so let's just add some properties
def request_actor(self):
try: return self['actor']
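A practical note on the new `server_middleware` above: a view that declares a single parameter is called with just the request, while one that declares two also receives an open database session. A rough sketch of both handler shapes (the names and response bodies are illustrative, not part of this changeset):

from aiohttp import web

# called as handler(request); no database session is opened
async def home(request):
    return web.Response(text='this is an activitypub relay')

# called as handler(request, s); the middleware opens request.database.session
# and passes the connection in, so the view can use the new helpers directly
async def hostname_count(request, s):
    return web.Response(text=str(len(s.get_hostnames())))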


@@ -1,58 +1,135 @@
import json
import os
import sys
import yaml
from functools import cached_property
from pathlib import Path
from urllib.parse import urlparse
from platform import system
from .misc import DotDict, boolean
from .misc import AppBase, DotDict
RELAY_SOFTWARE = [
'activityrelay', # https://git.pleroma.social/pleroma/relay
'aoderelay', # https://git.asonix.dog/asonix/relay
'feditools-relay' # https://git.ptzo.gdn/feditools/relay
]
DEFAULTS = {
'general_listen': '0.0.0.0',
'general_port': 8080,
'general_host': 'relay.example.com',
'database_type': 'sqlite',
'database_min_connections': 0,
'database_max_connections': 10,
'sqlite_database': Path('relay.sqlite3'),
'postgres_database': 'activityrelay',
'postgres_hostname': None,
'postgres_port': None,
'postgres_username': None,
'postgres_password': None,
'mysql_database': 'activityrelay',
'mysql_hostname': None,
'mysql_port': None,
'mysql_username': None,
'mysql_password': None
}
APKEYS = [
'host',
'whitelist_enabled',
'blocked_software',
'blocked_instances',
'whitelist'
CATEGORY_NAMES = [
'general',
'database',
'sqlite',
'postgres',
'mysql'
]
class RelayConfig(DotDict):
def __init__(self, path):
DotDict.__init__(self, {})
def get_config_dir():
cwd = Path.cwd().joinpath('config.yaml')
plat = system()
if cwd.exists():
return cwd
elif plat == 'Linux':
cfgpath = Path('~/.config/activityrelay/config.yaml').expanduser()
if cfgpath.exists():
return cfgpath
etcpath = Path('/etc/activityrelay/config.yaml')
if etcpath.exists() and os.getuid() == etcpath.stat().st_uid:
return etcpath
elif plat == 'Windows':
cfgpath = Path('~/AppData/Roaming/activityrelay/config.yaml').expanduser()
if cfgpath.exists():
return cfgpath
elif plat == 'Darwin':
cfgpath = Path('~/Library/Application Support/activityrelay/config.yaml').expanduser()
return cwd
class Config(AppBase, dict):
def __init__(self, path=None):
DotDict.__init__(self, DEFAULTS)
if self.is_docker:
path = '/data/config.yaml'
path = Path('/data/config.yaml')
self._path = Path(path).expanduser()
self.reset()
elif not path:
path = get_config_dir()
else:
path = Path(path).expanduser()
self._path = path
self.load()
def __setitem__(self, key, value):
if key in ['blocked_instances', 'blocked_software', 'whitelist']:
assert isinstance(value, (list, set, tuple))
if key in {'database', 'hostname', 'port', 'username', 'password'}:
key = f'{self.dbtype}_{key}'
elif key in ['port', 'workers', 'json_cache', 'timeout']:
if not isinstance(value, int):
value = int(value)
if (self.is_docker and key in {'general_host', 'general_port'}) or value == '__DEFAULT__':
value = DEFAULTS[key]
elif key == 'whitelist_enabled':
if not isinstance(value, bool):
value = boolean(value)
elif key in {'general_port', 'database_min_connections', 'database_max_connections'}:
value = int(value)
super().__setitem__(key, value)
elif key == 'sqlite_database':
if not isinstance(value, Path):
value = Path(value)
dict.__setitem__(self, key, value)
@property
def db(self):
return Path(self['db']).expanduser().resolve()
def dbconfig(self):
config = {
'type': self['database_type'],
'min_conn': self['database_min_connections'],
'max_conn': self['database_max_connections']
}
if self.dbtype == 'sqlite':
if not self['sqlite_database'].is_absolute():
config['database'] = self.path.with_name(str(self['sqlite_database'])).resolve()
else:
config['database'] = self['sqlite_database'].resolve()
else:
for key, value in self.items():
cat, name = key.split('_', 1)
if self.dbtype == cat:
config[name] = value
return config
@cached_property
def is_docker(self):
return bool(os.getenv('DOCKER_RUNNING'))
@property
@@ -60,6 +137,29 @@ class RelayConfig(DotDict):
return self._path
## General config
@property
def host(self):
return self['general_host']
@property
def listen(self):
return self['general_listen']
@property
def port(self):
return self['general_port']
## Database config
@property
def dbtype(self):
return self['database_type']
## AP URLs
@property
def actor(self):
return f'https://{self.host}/actor'
@@ -75,117 +175,12 @@ class RelayConfig(DotDict):
return f'{self.actor}#main-key'
@cached_property
def is_docker(self):
return bool(os.environ.get('DOCKER_RUNNING'))
def reset(self):
self.clear()
self.update({
'db': str(self._path.parent.joinpath(f'{self._path.stem}.jsonld')),
'listen': '0.0.0.0',
'port': 8080,
'note': 'Make a note about your instance here.',
'push_limit': 512,
'json_cache': 1024,
'timeout': 10,
'workers': 0,
'host': 'relay.example.com',
'whitelist_enabled': False,
'blocked_software': [],
'blocked_instances': [],
'whitelist': []
})
def ban_instance(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
if self.is_banned(instance):
return False
self.blocked_instances.append(instance)
return True
def unban_instance(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
try:
self.blocked_instances.remove(instance)
return True
except:
return False
def ban_software(self, software):
if self.is_banned_software(software):
return False
self.blocked_software.append(software)
return True
def unban_software(self, software):
try:
self.blocked_software.remove(software)
return True
except:
return False
def add_whitelist(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
if self.is_whitelisted(instance):
return False
self.whitelist.append(instance)
return True
def del_whitelist(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
try:
self.whitelist.remove(instance)
return True
except:
return False
def is_banned(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
return instance in self.blocked_instances
def is_banned_software(self, software):
if not software:
return False
return software.lower() in self.blocked_software
def is_whitelisted(self, instance):
if instance.startswith('http'):
instance = urlparse(instance).hostname
return instance in self.whitelist
self.update(DEFAULTS)
def load(self):
self.reset()
options = {}
try:
@@ -201,45 +196,21 @@ class RelayConfig(DotDict):
except FileNotFoundError:
return False
if not config:
return False
for key, value in config.items():
if key in ['ap']:
for k, v in value.items():
if k not in self:
continue
self[k] = v
continue
elif key not in self:
continue
self[key] = value
if self.host.endswith('example.com'):
return False
return True
for key, value in DEFAULTS.items():
cat, name = key.split('_', 1)
self[key] = config.get(cat, {}).get(name, DEFAULTS[key])
def save(self):
config = {
# just turning config.db into a string is good enough for now
'db': str(self.db),
'listen': self.listen,
'port': self.port,
'note': self.note,
'push_limit': self.push_limit,
'workers': self.workers,
'json_cache': self.json_cache,
'timeout': self.timeout,
'ap': {key: self[key] for key in APKEYS}
}
config = {key: {} for key in CATEGORY_NAMES}
with open(self._path, 'w') as fd:
for key, value in self.items():
cat, name = key.split('_', 1)
if isinstance(value, Path):
value = str(value)
config[cat][name] = value
with open(self.path, 'w') as fd:
yaml.dump(config, fd, sort_keys=False)
return config
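Taken together, the defaults above mean a freshly generated config at `~/.config/activityrelay/config.yaml` should produce roughly this `dbconfig`, which `Application` passes straight to `Database(**...)` (a sketch of the expected values, not output captured from a run):

from pathlib import Path

# approximate Config.dbconfig for an untouched sqlite setup
dbconfig = {
    'type': 'sqlite',
    'min_conn': 0,
    'max_conn': 10,
    # the relative default 'relay.sqlite3' resolves next to the config file
    'database': Path('~/.config/activityrelay/relay.sqlite3').expanduser()
}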


@@ -1,197 +0,0 @@
import aputils
import asyncio
import json
import logging
import traceback
from urllib.parse import urlparse
class RelayDatabase(dict):
def __init__(self, config):
dict.__init__(self, {
'relay-list': {},
'private-key': None,
'follow-requests': {},
'version': 1
})
self.config = config
self.signer = None
@property
def hostnames(self):
return tuple(self['relay-list'].keys())
@property
def inboxes(self):
return tuple(data['inbox'] for data in self['relay-list'].values())
def load(self):
new_db = True
try:
with self.config.db.open() as fd:
data = json.load(fd)
self['version'] = data.get('version', None)
self['private-key'] = data.get('private-key')
if self['version'] == None:
self['version'] = 1
if 'actorKeys' in data:
self['private-key'] = data['actorKeys']['privateKey']
for item in data.get('relay-list', []):
domain = urlparse(item).hostname
self['relay-list'][domain] = {
'domain': domain,
'inbox': item,
'followid': None
}
else:
self['relay-list'] = data.get('relay-list', {})
for domain, instance in self['relay-list'].items():
if self.config.is_banned(domain) or (self.config.whitelist_enabled and not self.config.is_whitelisted(domain)):
self.del_inbox(domain)
continue
if not instance.get('domain'):
instance['domain'] = domain
new_db = False
except FileNotFoundError:
pass
except json.decoder.JSONDecodeError as e:
if self.config.db.stat().st_size > 0:
raise e from None
if not self['private-key']:
logging.info("No actor keys present, generating 4096-bit RSA keypair.")
self.signer = aputils.Signer.new(self.config.keyid, size=4096)
self['private-key'] = self.signer.export()
else:
self.signer = aputils.Signer(self['private-key'], self.config.keyid)
self.save()
return not new_db
def save(self):
with self.config.db.open('w') as fd:
json.dump(self, fd, indent=4)
def get_inbox(self, domain, fail=False):
if domain.startswith('http'):
domain = urlparse(domain).hostname
inbox = self['relay-list'].get(domain)
if inbox:
return inbox
if fail:
raise KeyError(domain)
def add_inbox(self, inbox, followid=None, software=None):
assert inbox.startswith('https'), 'Inbox must be a url'
domain = urlparse(inbox).hostname
instance = self.get_inbox(domain)
if instance:
if followid:
instance['followid'] = followid
if software:
instance['software'] = software
return instance
self['relay-list'][domain] = {
'domain': domain,
'inbox': inbox,
'followid': followid,
'software': software
}
logging.verbose(f'Added inbox to database: {inbox}')
return self['relay-list'][domain]
def del_inbox(self, domain, followid=None, fail=False):
data = self.get_inbox(domain, fail=False)
if not data:
if fail:
raise KeyError(domain)
return False
if not data['followid'] or not followid or data['followid'] == followid:
del self['relay-list'][data['domain']]
logging.verbose(f'Removed inbox from database: {data["inbox"]}')
return True
if fail:
raise ValueError('Follow IDs do not match')
logging.debug(f'Follow ID does not match: db = {data["followid"]}, object = {followid}')
return False
def get_request(self, domain, fail=True):
if domain.startswith('http'):
domain = urlparse(domain).hostname
try:
return self['follow-requests'][domain]
except KeyError as e:
if fail:
raise e
def add_request(self, actor, inbox, followid):
domain = urlparse(inbox).hostname
try:
request = self.get_request(domain)
request['followid'] = followid
except KeyError:
pass
self['follow-requests'][domain] = {
'actor': actor,
'inbox': inbox,
'followid': followid
}
def del_request(self, domain):
if domain.startswith('http'):
domain = urlparse(inbox).hostname
del self['follow-requests'][domain]
def distill_inboxes(self, message):
src_domains = {
message.domain,
urlparse(message.objectid).netloc
}
for domain, instance in self['relay-list'].items():
if domain not in src_domains:
yield instance['inbox']


@@ -0,0 +1,17 @@
import tinysql
from .base import DEFAULT_CONFIG, RELAY_SOFTWARE, TABLES
from .connection import Connection
from .rows import ROWS
class Database(tinysql.Database):
def __init__(self, **config):
tinysql.Database.__init__(self, **config,
connection_class = Connection,
row_classes = ROWS
)
def create(self):
self.create_database(TABLES)
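A hedged usage sketch of this wrapper, mirroring how `Application` wires it up elsewhere in the changeset (the keyword arguments match `Config.dbconfig`; tinysql behaviour beyond what the diff shows is assumed):

from relay.database import Database

# build the database from a dbconfig-style dict and create the tables
db = Database(type='sqlite', database='relay.sqlite3', min_conn=0, max_conn=10)
db.create()

# sessions expose the Connection helpers added in this changeset
with db.session as s:
    print(s.get_config('log_level'))  # falls back to DEFAULT_CONFIG ('INFO')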

relay/database/base.py (new file, 67 lines)

@@ -0,0 +1,67 @@
from tinysql import Column, Table
DEFAULT_CONFIG = {
'description': ('str', 'Make a note about your relay here'),
'http_timeout': ('int', 10),
'json_cache': ('int', 1024),
'log_level': ('str', 'INFO'),
'name': ('str', 'ActivityRelay'),
'privkey': ('str', ''),
'push_limit': ('int', 512),
'require_approval': ('bool', False),
'version': ('int', 20221211),
'whitelist': ('bool', False),
'workers': ('int', 8)
}
RELAY_SOFTWARE = [
'activity-relay', # https://github.com/yukimochi/Activity-Relay
'activityrelay', # https://git.pleroma.social/pleroma/relay
'aoderelay', # https://git.asonix.dog/asonix/relay
'feditools-relay' # https://git.ptzo.gdn/feditools/relay
]
TABLES = [
Table('config',
Column('key', 'text', unique=True, nullable=False, primary_key=True),
Column('value', 'text')
),
Table('instances',
Column('id', 'serial'),
Column('domain', 'text', unique=True, nullable=False),
Column('actor', 'text'),
Column('inbox', 'text', nullable=False),
Column('followid', 'text'),
Column('software', 'text'),
Column('note', 'text'),
Column('joined', 'datetime', nullable=False),
Column('updated', 'datetime')
),
Table('whitelist',
Column('id', 'serial'),
Column('domain', 'text', unique=True),
Column('created', 'datetime', nullable=False)
),
Table('bans',
Column('id', 'serial'),
Column('name', 'text', unique=True),
Column('note', 'text'),
Column('type', 'text', nullable=False),
Column('created', 'datetime', nullable=False)
),
Table('users',
Column('id', 'serial'),
Column('handle', 'text', unique=True, nullable=False),
Column('domain', 'text', nullable=False),
Column('api_token', 'text'),
Column('created', 'datetime', nullable=False),
Column('updated', 'datetime')
),
Table('tokens',
Column('id', 'text', unique=True, nullable=False, primary_key=True),
Column('userid', 'integer', nullable=False),
Column('created', 'datetime', nullable=False),
Column('updated', 'datetime')
)
]
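Each DEFAULT_CONFIG entry above is a `(type name, default value)` pair; the default is what `Connection.get_config` falls back to when no row exists. A tiny sketch of reading one out:

from relay.database.base import DEFAULT_CONFIG

# each entry is a (type name, default value) tuple
type_name, default = DEFAULT_CONFIG['push_limit']
print(type_name, default)  # int 512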


@@ -0,0 +1,239 @@
import tinysql
from datetime import datetime
from urllib.parse import urlparse
from .base import DEFAULT_CONFIG
from ..misc import DotDict
class Connection(tinysql.ConnectionMixin):
## Misc methods
def accept_request(self, domain):
row = self.get_request(domain)
if not row:
raise KeyError(domain)
data = {'joined': datetime.now()}
self.update('instances', data, id=row.id)
def distill_inboxes(self, message):
src_domains = {
message.domain,
urlparse(message.objectid).netloc
}
for instance in self.get_instances():
if instance.domain not in src_domains:
yield instance.inbox
## Delete methods
def delete_ban(self, type, name):
row = self.get_ban(type, name)
if not row:
raise KeyError(name)
self.delete('bans', id=row.id)
def delete_instance(self, domain):
row = self.get_instance(domain)
if not row:
raise KeyError(domain)
self.delete('instances', id=row.id)
def delete_whitelist(self, domain):
row = self.get_whitelist_domain(domain)
if not row:
raise KeyError(domain)
self.delete('whitelist', id=row.id)
## Get methods
def get_ban(self, type, name):
if type not in {'software', 'domain'}:
raise ValueError('Ban type must be "software" or "domain"')
return self.select('bans', name=name, type=type).one()
def get_bans(self, type):
if type not in {'software', 'domain'}:
raise ValueError('Ban type must be "software" or "domain"')
return self.select('bans', type=type).all()
def get_config(self, key):
if key not in DEFAULT_CONFIG:
raise KeyError(key)
row = self.select('config', key=key).one()
if not row:
return DEFAULT_CONFIG[key][1]
return row.value
def get_config_all(self):
rows = self.select('config').all()
config = DotDict({row.key: row.value for row in rows})
for key, data in DEFAULT_CONFIG.items():
if key not in config:
config[key] = data[1]
return config
def get_hostnames(self):
return tuple(row.domain for row in self.get_instances())
def get_instance(self, data):
if data.startswith('http') and '#' in data:
data = data.split('#', 1)[0]
query = 'SELECT * FROM instances WHERE domain = :data OR actor = :data OR inbox = :data'
row = self.execute(query, dict(data=data), table='instances').one()