Custom Python Code¶
You can extend pyrocore with your own code, implementing custom features by adding fields, writing your own command line scripts, or creating pyrotorque jobs.
You will need a solid grasp of Python for this.
Defining Custom Fields¶
Introduction¶
As mentioned in the Configuration Guide, the `config.py` script can be used to add custom logic to your setup. The most common use for this file is adding custom fields.

To add user-defined fields, put code describing them into your `~/.pyroscope/config.py` file. You can then use your custom fields just like any built-in one, e.g. issue a command like `rtcontrol --from-view incomplete \* -qco partial_done,name` (see the examples below). They are also listed when you call `rtcontrol --help-fields`.
Basic Custom Field Code¶
The following is the framework you need to add before putting in your field definitions:
```python
def _custom_fields():
    """ Yield custom field definitions.
    """
    # Import some commonly needed modules
    import os
    from pyrocore.torrent import engine, matching
    from pyrocore.util import fmt

    # PUT CUSTOM FIELD CODE HERE

# Register our factory with the system
custom_field_factories.append(_custom_fields)
```
In place of the `# PUT CUSTOM FIELD CODE HERE` comment you can add any combination of the examples below, or your own code.
Be sure to do so at the correct indent level; the example snippets are left-aligned and need to be indented by 4 spaces.
Custom Field Examples¶
Adding rTorrent fields not supported by default¶
```python
# Add rTorrent attributes not available by default
def get_tracker_field(obj, name, aggregator=sum):
    "Get an aggregated tracker field."
    return aggregator(obj._engine._rpc.t.multicall(obj._fields["hash"], 0, "t.%s=" % name)[0])

yield engine.OnDemandField(int, "peers_connected", "number of connected peers", matcher=matching.FloatFilter)
yield engine.DynamicField(int, "downloaders", "number of completed downloads", matcher=matching.FloatFilter,
    accessor=lambda o: get_tracker_field(o, "get_scrape_downloaded"))
yield engine.DynamicField(int, "seeds", "number of seeds", matcher=matching.FloatFilter,
    accessor=lambda o: get_tracker_field(o, "get_scrape_complete"))
yield engine.DynamicField(int, "leeches", "number of leeches", matcher=matching.FloatFilter,
    accessor=lambda o: get_tracker_field(o, "get_scrape_incomplete"))
yield engine.DynamicField(engine.untyped, "lastscraped", "time of last scrape", matcher=matching.TimeFilter,
    accessor=lambda o: get_tracker_field(o, "get_scrape_time_last", max),
    formatter=lambda dt: fmt.human_duration(float(dt), precision=2, short=True))

# Add peer attributes not available by default
def get_peer_data(obj, name, aggregator=None):
    "Get some peer data via a multicall."
    aggregator = aggregator or (lambda _: _)
    result = obj._engine._rpc.p.multicall(obj._fields["hash"], 0, "p.%s=" % name)
    return aggregator([i[0] for i in result])

yield engine.DynamicField(set, "peers_ip", "list of IP addresses for connected peers",
    matcher=matching.TaggedAsFilter, formatter=", ".join,
    accessor=lambda o: set(get_peer_data(o, "address")))
```
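The aggregation pattern in `get_peer_data` can be tried with plain data, independent of any XML-RPC connection; the nested lists below mimic a multicall result (one single-element row per peer, with made-up sample values):

```python
def aggregate_multicall(rows, aggregator=None):
    """Flatten a multicall result (one single-element row per item) and aggregate it."""
    aggregator = aggregator or (lambda values: values)
    return aggregator([row[0] for row in rows])

# Three connected peers, as a "p.address=" multicall might report them
rows = [["10.0.0.1"], ["10.0.0.2"], ["10.0.0.1"]]
print(aggregate_multicall(rows, set))  # unique peer addresses
```

Passing `set` as the aggregator collapses duplicate addresses, just like the `peers_ip` field above does.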
Checking that certain files are present¶
```python
# Add file checkers
def has_nfo(obj):
    "Check for .NFO file."
    pathname = obj.path
    if pathname and os.path.isdir(pathname):
        return any(i.lower().endswith(".nfo") for i in os.listdir(pathname))
    else:
        return False if pathname else None

def has_thumb(obj):
    "Check for folder.jpg file."
    pathname = obj.path
    if pathname and os.path.isdir(pathname):
        return any(i.lower() == "folder.jpg" for i in os.listdir(pathname))
    else:
        return False if pathname else None

yield engine.DynamicField(engine.untyped, "has_nfo", "does download have a .NFO file?",
    matcher=matching.BoolFilter, accessor=has_nfo,
    formatter=lambda val: "NFO" if val else "!DTA" if val is None else "----")
yield engine.DynamicField(engine.untyped, "has_thumb", "does download have a folder.jpg file?",
    matcher=matching.BoolFilter, accessor=has_thumb,
    formatter=lambda val: "THMB" if val else "!DTA" if val is None else "----")
```
Calculating information about partial downloads¶
Note that the `partial_done` value can be a little lower than it actually should be, when chunks shared by different files are not yet complete; but it will eventually reach 100 when all selected chunks are downloaded in full.
```python
# Fields for partial downloads
def partial_info(obj, name):
    "Helper for partial download info"
    try:
        return obj._fields[name]
    except KeyError:
        f_attr = ["get_completed_chunks", "get_size_chunks", "get_range_first", "get_range_second"]
        chunk_size = obj.fetch("chunk_size")
        prev_chunk = -1
        size, completed, chunks = 0, 0, 0
        for f in obj._get_files(f_attr):
            if f.prio:  # selected?
                shared = int(f.range_first == prev_chunk)
                size += f.size
                completed += f.completed_chunks - shared
                chunks += f.size_chunks - shared
                prev_chunk = f.range_second - 1

        obj._fields["partial_size"] = size
        obj._fields["partial_missing"] = (chunks - completed) * chunk_size
        obj._fields["partial_done"] = 100.0 * completed / chunks if chunks else 0.0

        return obj._fields[name]

yield engine.DynamicField(int, "partial_size", "bytes selected for download",
    matcher=matching.ByteSizeFilter,
    accessor=lambda o: partial_info(o, "partial_size"))
yield engine.DynamicField(int, "partial_missing", "bytes missing from selected chunks",
    matcher=matching.ByteSizeFilter,
    accessor=lambda o: partial_info(o, "partial_missing"))
yield engine.DynamicField(float, "partial_done", "percent complete of selected chunks",
    matcher=matching.FloatFilter,
    accessor=lambda o: partial_info(o, "partial_done"))
```
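The shared-chunk accounting can be checked with plain numbers. The sketch below uses a simplified stand-in for the rTorrent file items (a made-up `File` record, not the real API) and shows why a chunk spanning two selected files is counted only once:

```python
from collections import namedtuple

# Simplified stand-in for the file items rTorrent reports per download
File = namedtuple("File", "prio size completed_chunks size_chunks range_first range_second")

def partial_done(files):
    """Percent complete over the selected (prio > 0) files, de-duplicating shared chunks."""
    prev_chunk = -1
    completed = chunks = 0
    for f in files:
        if f.prio:  # selected?
            shared = int(f.range_first == prev_chunk)  # boundary chunk already counted?
            completed += f.completed_chunks - shared
            chunks += f.size_chunks - shared
            prev_chunk = f.range_second - 1
    return 100.0 * completed / chunks if chunks else 0.0

# Two selected files sharing one boundary chunk (chunk 9 ends file A and starts file B)
files = [
    File(prio=1, size=10_000, completed_chunks=10, size_chunks=10, range_first=0, range_second=10),
    File(prio=1, size=5_000, completed_chunks=3, size_chunks=6, range_first=9, range_second=15),
]
print(round(partial_done(files), 1))  # 80.0: 12 of 15 distinct chunks done
```

Without the `shared` correction, chunk 9 would be counted twice and the percentage would be skewed.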
Extract TV data from item name¶
This defines the `tv_series` and `tv_episode` fields, which are non-empty when the item name follows the "usual" naming conventions. Try it using something like `rtcontrol loaded=-2w traits=tv -co tv_series,tv_episode,name`.
```python
# Map name field to TV series name, if applicable, else an empty string
from pyrocore.util import traits

def tv_mapper(obj, name, templ):
    "Helper for TV name mapping"
    try:
        return obj._fields[name]
    except KeyError:
        itemname = obj.name
        result = ""

        kind, info = traits.name_trait(itemname, add_info=True)
        if kind == "tv":
            try:
                info["show"] = ' '.join([i.capitalize() for i in info["show"].replace('.', ' ').replace('_', ' ').split()])
                result = templ % info
            except KeyError:
                # Name doesn't provide all the fields the template needs
                pass

        obj._fields[name] = result
        return result

yield engine.DynamicField(fmt.to_unicode, "tv_series", "series name of a TV item",
    matcher=matching.PatternFilter, accessor=lambda o: tv_mapper(o, "tv_series", "%(show)s"))
yield engine.DynamicField(fmt.to_unicode, "tv_episode", "series name and episode number of a TV item",
    matcher=matching.PatternFilter, accessor=lambda o: tv_mapper(o, "tv_episode", "%(show)s.S%(season)sE%(episode)s"))
```
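The naming convention this relies on can be sketched with a small regular expression; this is a simplified stand-in for `traits.name_trait`, not the actual pyrocore parser:

```python
import re

# Simplified pattern for "Show.Name.S01E02..." style item names (illustration only)
TV_RE = re.compile(r"^(?P<show>.+?)\.S(?P<season>\d{2})E(?P<episode>\d{2})\b", re.IGNORECASE)

def tv_fields(itemname):
    """Return (tv_series, tv_episode) for a TV-style name, else two empty strings."""
    match = TV_RE.match(itemname)
    if not match:
        return "", ""
    info = match.groupdict()
    # Prettify the show name the same way the config.py example does
    info["show"] = " ".join(part.capitalize()
                            for part in info["show"].replace(".", " ").replace("_", " ").split())
    return info["show"], "%(show)s.S%(season)sE%(episode)s" % info

print(tv_fields("some.show.S01E02.720p.HDTV"))  # ('Some Show', 'Some Show.S01E02')
```

Names that do not match the pattern simply yield empty strings, mirroring the empty `result` default in `tv_mapper`.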
Only start items that you have disk space for¶
This works together with the rTorrent Queue Manager, so that only items that pass a disk space check are actually started.
The first step is to add a custom field that checks whether an item has room on the target device. As with the other examples, place this in your `config.py` (read the first two sections, before the "Examples" one).
```python
# Disk space check
def has_room(obj):
    "Check disk space."
    pathname = obj.path
    if pathname and os.path.exists(pathname):
        stats = os.statvfs(pathname)
        return (stats.f_bavail * stats.f_frsize
                - int(diskspace_threshold_mb) * 1024**2
                > obj.size * (1.0 - obj.done / 100.0))
    else:
        return None

yield engine.DynamicField(engine.untyped, "has_room", "check whether the download will fit on its target device",
    matcher=matching.BoolFilter, accessor=has_room,
    formatter=lambda val: "OK" if val else "??" if val is None else "NO")
globals().setdefault("diskspace_threshold_mb", "500")
```
Note that you can set the threshold of space to keep free (in MiB) in your configuration; the default is 500 MiB. You should keep your `close_low_diskspace` schedule for rTorrent as a fallback, and set `diskspace_threshold_mb` higher than the limit given there, so that normally it never triggers.
And now, all you need to do is add `has_room=y` to your `job.queue.startable` conditions. Done.
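The formula in `has_room` reserves `diskspace_threshold_mb` and compares the remaining free space against the bytes the item still needs. A standalone sketch of that arithmetic (the helper name and the sample numbers are made up for illustration):

```python
import os

def fits_on_device(free_bytes, threshold_mb, item_size, percent_done):
    """True if the still-missing part of an item fits below the free-space reserve."""
    reserve = int(threshold_mb) * 1024**2                    # space to keep free, in bytes
    still_needed = item_size * (1.0 - percent_done / 100.0)  # bytes left to download
    return free_bytes - reserve > still_needed

# On a real mount point, the free-space figure comes from os.statvfs:
stats = os.statvfs("/")
free_now = stats.f_bavail * stats.f_frsize

# A 4 GiB item that is 75% done still needs 1 GiB; with the default 500 MiB reserve:
print(fits_on_device(10 * 1024**3, 500, 4 * 1024**3, 75.0))  # True: 9.5 GiB headroom > 1 GiB needed
print(fits_on_device(1 * 1024**3, 500, 4 * 1024**3, 75.0))   # False: only 0.5 GiB over the reserve
```

Using `f_bavail` (blocks available to unprivileged users) rather than `f_bfree` matches what rTorrent itself can actually write into.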
Adding Custom Template Helpers¶
In templating contexts, there is an empty `c` namespace (think custom or config), just like `h` for helpers.
You can populate that namespace with your own helpers as you need them, from simple string transformations to calling external programs or web interfaces.
The following example illustrates the concept, and belongs into `~/.pyroscope/config.py`.
```python
def _hostname(ip):
    """Helper to e.g. look up peer IPs."""
    import socket
    return socket.gethostbyaddr(ip)[0] if ip else ip

custom_template_helpers.hostname = _hostname
```
This demonstrates calling that helper from a custom field; a real use-case would be to resolve peer IPs and the like.
```
$ rtcontrol -qo '{{d.fetch("custom_ip")}} → {{d.fetch("custom_ip") | c.hostname}}' // -/1
8.8.8.8 → google-public-dns-a.google.com
```
Writing Your Own Scripts¶
Introduction¶
The pyrocore Python package contains powerful helper classes that make remote access to rTorrent child's play (see the API Documentation).
And your tools get the same look and feel as the built-in PyroScope commands, as long as you use the provided base class `pyrocore.scripts.base.ScriptBaseWithConfig`.
See for yourself:
```python
#! /usr/bin/env python-pyrocore
# -*- coding: utf-8 -*-

# Enter the magic kingdom
from pyrocore import config
from pyrocore.scripts import base


class UserScript(base.ScriptBaseWithConfig):
    """
        Just some script you wrote.
    """

    # argument description for the usage information
    ARGS_HELP = "<arg_1>... <arg_n>"

    # set your own version
    VERSION = '1.0'

    # (optionally) define your licensing
    COPYRIGHT = u'Copyright (c) …'

    def add_options(self):
        """ Add program options.
        """
        super(UserScript, self).add_options()

        # basic options
        ##self.add_bool_option("-n", "--dry-run",
        ##    help="don't do anything, just tell what would happen")

    def mainloop(self):
        """ The main loop.
        """
        # Grab your magic wand
        proxy = config.engine.open()

        # Wave it
        torrents = list(config.engine.items())

        # Abracadabra
        print("You have loaded %d torrents tracked by %d trackers." % (
            len(torrents),
            len(set(i.alias for i in torrents)),
        ))

        self.LOG.info("XMLRPC stats: %s" % proxy)


if __name__ == "__main__":
    base.ScriptBase.setup()
    UserScript().run()
```
Another full example is the dynamic seed throttle script.

Note

If you wondered about the first line referring to a `python-pyrocore` command: that is an alias the installation scripts create for the Python interpreter of the pyrocore virtualenv. This way, your script will always use the correct environment that actually offers the right packages.
For simple calls, you can also use the `rtxmlrpc` command on a shell prompt; see Using 'rtxmlrpc' for that. For a reference of the rTorrent XMLRPC interface, see rTorrent XMLRPC. Another common way to add your own extensions is Defining Custom Fields, usable by `rtcontrol` just like built-in ones.
Interactive use in a Python shell¶
You can also access rTorrent interactively, like this:
```
>>> from pyrocore import connect
>>> rt = connect()
>>> len(set(i.tracker for i in rt.items()))
2
>>> rt.engine_software
'rTorrent 0.9.2/0.13.2'
>>> rt.uptime
1325.6771779060364
>>> proxy = rt.open()
>>> len(proxy.system.listMethods())
1033
```
Using pyrocore as a library in other projects¶
The example in the first section is an easy way to create user-defined scripts. If you want to use pyrocore's features in another runtime environment, you just have to load the configuration manually (which `pyrocore.scripts.base.ScriptBaseWithConfig` otherwise does for you).
```python
# Details depend on the system you want to extend, of course
from some_system import plugin

from pyrocore import error
from pyrocore.util import load_config


def my_rtorrent_plugin():
    """ Initialize plugin.
    """
    try:
        load_config.ConfigLoader().load()
    except error.LoggableError as exc:
        # Handle accordingly...
        raise
    else:
        # Do some other stuff...
        pass

plugin.register(my_rtorrent_plugin)
```
Code snippets¶
Note
The following snippets are meant to be placed and executed within the `mainloop` of the script skeleton found in the Introduction.
Accessing the files in a download item¶
To get all the files for several items at once, we combine `system.multicall` and `f.multicall` into one big efficient mess.
```python
from pprint import pprint, pformat

# The attributes we want to fetch
methods = [
    "f.get_path",
    "f.get_size_bytes",
    "f.get_last_touched",
    "f.get_priority",
    "f.is_created",
    "f.is_open",
]

# Build the multicall argument
f_calls = [method + '=' for method in methods]
calls = [{"methodName": "f.multicall", "params": [infohash, 0] + f_calls}
         for infohash in self.args]

# Make the calls
multicall = proxy.system.multicall
result = multicall(calls)

# Print the results
for infohash, (files,) in zip(self.args, result):
    print(("~~~ %s [%d file(s)] " % (infohash, len(files))).ljust(78, '~'))
    pprint(files)

self.LOG.info("Multicall stats: %s" % multicall)
```
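The batching idea itself is plain data shuffling: `system.multicall` accepts a list of `{methodName, params}` records and executes them in one round trip. Building that payload can be sketched standalone (the infohashes here are made up for illustration):

```python
# Per-file attributes to fetch; a trailing '=' marks them as multicall commands
methods = ["f.get_path", "f.get_size_bytes", "f.get_priority"]
f_calls = [method + "=" for method in methods]

# One f.multicall record per download item (fake infohashes, for illustration only)
infohashes = ["HASH" + str(i) * 36 for i in (1, 2)]
calls = [
    {"methodName": "f.multicall", "params": [infohash, 0] + f_calls}
    for infohash in infohashes
]

print(len(calls), calls[0]["methodName"], calls[0]["params"][2:])
```

Each record's `params` starts with the item's infohash and a `0` (meaning "all files"), followed by the per-file commands to evaluate.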
Writing Custom Jobs¶
First off, you really need to know a good amount of Python to be able to do this. But if you do, you can easily add your own background processing, more versatile and more efficient than calling `rtcontrol` in a cron job.
The description here is terse, and mostly just tells you where to look for code examples, and the basics of how a job implementation interacts with the core system.
Note
While some effort will be spent on keeping the API backwards compatible, there is no guarantee of a stable API. Follow the commit log and changelogs of releases to get notified when you need to adapt your code.
Jobs are created during pyrotorque startup and registered with the scheduler.
Configuration is taken from the `[TORQUE]` section of `torque.ini`, and any `job.«job-name».«param-name»` setting contributes to a job named `job-name`.
The `handler`, `schedule`, and `active` settings are used by the core; the rest is passed to the `handler` class for customization and depends on the job type.
To locate the job implementation, `handler` contains a `module.path:ClassName` coordinate of its class.
So `job.foo.handler = my.code:FooJob` registers `FooJob` under the name `foo`.
This means a job can be scheduled several times, given the right configuration and a job implementation designed for it.
The given module must be importable of course, i.e. `pip install` it into your pyrocore virtualenv.
The `schedule` setting defines the call frequency of the job's `run` method, and `active` allows you to easily disable a job without removing its configuration, which is used to provide all the default jobs and their settings.
A job with `active = False` is simply ignored and not added to the scheduler on startup.
The simplest job is the `EngineStats` one.
Click on the link and then on `[source]` to see its source code.
Some noteworthy facts:
- The initializer gets passed a `config` parameter, holding all the settings from `torque.ini` for a particular job instance, with the `job.«name»` prefix removed.
- `pyrocore.config` is imported as `config_ini`, to not clash with the `config` dict passed into jobs.
- Create a `LOG` attribute as shown, for your logging needs.
- To interact with rTorrent, open a proxy connection in `run`.
- The `InfluxDB` job shows how to access config parameters, e.g. `self.config.dbname`.
- Raise `UserError` in the initializer to report configuration mishaps and prevent `pyrotorque` from starting.
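The interaction described above can be sketched as a minimal job class. This is an illustration only, not the actual pyrotorque base API: the class name and the dict-style config handling are assumptions, and the rTorrent proxy connection is merely indicated in a comment:

```python
import logging

class MyCustomJob(object):
    """Minimal pyrotorque-style job sketch: config in, periodic run() calls out."""

    def __init__(self, config=None):
        # 'config' holds the job.«name».* settings with the prefix removed
        self.config = config or {}
        self.LOG = logging.getLogger(__name__)
        self.LOG.debug("%s created with config %r", type(self).__name__, self.config)

    def run(self):
        # Called by the scheduler at the frequency given by the 'schedule' setting.
        # A real job would open its rTorrent proxy connection here, e.g.:
        #   proxy = config_ini.engine.open()
        self.LOG.info("run() called, dry_run=%r", self.config.get("dry_run"))

# Roughly what 'job.mycustom.handler = my.code:MyCustomJob' leads to on startup
job = MyCustomJob(config={"dry_run": True, "schedule": "hour=*"})
job.run()
```

The `handler`, `schedule`, and `active` keys themselves stay with the core; only the remaining parameters reach the job instance.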
More complex jobs that you can look at are the `pyrocore.torrent.watch.TreeWatch` and `pyrocore.torrent.queue.QueueManager` ones.