pipes Package

Pipes and plumbing. Plumbing instances are sequences of pipes. Each pipe is called in order to load, select, transform, sign or output SAML metadata.

exception pyff.pipes.PipeException

Bases: pyff.utils.PyffException

class pyff.pipes.PipelineCallback(entry_point, req)

Bases: object

A delayed pipeline callback used as a post-processor for parse_metadata.

class pyff.pipes.PluginsRegistry

Bases: dict

The plugin registry uses pkg_resources.iter_entry_points to list all EntryPoints in the group ‘pyff.pipe’. All pipe entry_points must have the following prototype:

def the_something_func(req, *opts):
    pass

Referencing this function as an entry_point using something = module:the_something_func in setup.py allows the function to be referenced as ‘something’ in a pipeline.
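
For example, a plugin project might declare such an entry point in its setup.py roughly as follows (the project and module names below are illustrative, not part of pyff):

# setup.py (sketch) for a hypothetical plugin shipping the_something_func
from setuptools import setup

setup(
    name="pyff-something-plugin",
    py_modules=["something_module"],
    entry_points={
        "pyff.pipe": [
            # exposes the function as the pipe named 'something'
            "something = something_module:the_something_func",
        ],
    },
)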

class pyff.pipes.Plumbing(pipeline, pid)

Bases: object

A plumbing instance represents a basic processing chain for SAML metadata. A simple, yet reasonably complete example:

- load:
    - /var/metadata/registry
    - http://md.example.com
- select:
    - "#md:EntityDescriptor[md:IDPSSODescriptor]"
- xslt:
    stylesheet: tidy.xsl
- fork:
    - finalize:
        Name: http://example.com/metadata.xml
        cacheDuration: PT1H
        validUntil: PT1D
    - sign:
        key: signer.key
        cert: signer.crt
    - publish: /var/metadata/public/metadata.xml

Running this plumbing would bake all metadata found in /var/metadata/registry and at http://md.example.com into an EntitiesDescriptor element with @Name http://example.com/metadata.xml, @cacheDuration set to 1 hour and @validUntil 1 day from the time the ‘finalize’ command was run. The tree would be transformed using the “tidy” stylesheet and would then be signed (using signer.key) and finally published in /var/metadata/public/metadata.xml.

class Request(pl, md, t, name=None, args=None, state=None)

Bases: object

Represents a single request. When processing a set of pipelines a single request is used. Any part of the pipeline may modify any of the fields.

process(pl)

The inner request pipeline processor.

Parameters:pl – The plumbing to run this request through

Plumbing.id

Plumbing.pid

Plumbing.process(md, state=None, t=None)

The main entrypoint for processing a request pipeline. Calls the inner processor.

Parameters:
  • md – The current metadata repository
  • state – The active request state
  • t – The active working document
Returns:

The result of applying the processing pipeline to t.

pyff.pipes.load_pipe(d)

Return a triple (callable, name, args) for the pipe specified by the object d.

Parameters:d – The following alternatives for d are allowed (illustrated in the sketch below):
  • d is a string (or unicode), in which case the pipe named d is called with None as args.
  • d is a dict of the form {name: args} (i.e. one key), in which case the pipe named name is called with args.
  • d is an iterable (e.g. a tuple or list), in which case d[0] is treated as the pipe name and d[1:] becomes the args.
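
For illustration, the three forms could be exercised directly (a sketch assuming the built-in ‘select’ pipe is registered; the selector string is reused from the example above):

from pyff.pipes import load_pipe

# String form: the pipe named "select" is returned with None as args.
load_pipe("select")

# One-key dict form: the pipe named "select" is returned with the given args.
load_pipe({"select": ["#md:EntityDescriptor[md:IDPSSODescriptor]"]})

# Iterable form: d[0] is the pipe name, d[1:] become the args.
load_pipe(["select", "#md:EntityDescriptor[md:IDPSSODescriptor]"])

Each call returns a (callable, name, args) triple.
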
pyff.pipes.pipe(*args, **kwargs)

Register the decorated function in the pyff pipe registry.

Parameters:name – Optional name; if None, the function name is used.
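
A minimal sketch of a custom pipe registered with the decorator (the pipe name ‘noop’ and its behaviour are illustrative only; the attributes used follow the Request class above):

from pyff.pipes import pipe

@pipe(name="noop")
def _noop(req, *opts):
    # A pipe receives the current Request: req.t is the active working
    # document and req.args holds any arguments given in the pipeline.
    return req.t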

pyff.pipes.plumbing(fn)

Create a new plumbing instance by parsing YAML from the file named by fn.

Parameters:fn – A filename containing the pipeline.
Returns:A plumbing object

This uses the resource framework to locate the YAML file, which means that pipelines can be shipped as plugins.
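
Putting the pieces together, a minimal usage sketch could look like this (the pipeline filename mdx.yaml is an assumption, and MDRepository is imported from the separate pyff.mdrepo module):

from pyff.mdrepo import MDRepository
from pyff.pipes import plumbing

md = MDRepository()                # the metadata repository passed as md
pl = plumbing("mdx.yaml")          # parse the YAML pipeline via the resource framework
result = pl.process(md, state={})  # run the pipeline; returns the resulting document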
