petl - Extract, Transform and Load (Tables of Data)

petl is a tentative Python package for extracting, transforming and loading tables of data.

E.g., given the following data in a file at 'example.csv' in the current working directory:

foo,bar,baz
a,1,3.4
b,2,7.4
c,6,2.2
d,9,8.1

...the interactive session below illustrates some simple uses of this module:

>>> from petl import *
>>> table1 = fromcsv('example.csv')
>>> look(table1)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | '1'   | '3.4' |
+-------+-------+-------+
| 'b'   | '2'   | '7.4' |
+-------+-------+-------+
| 'c'   | '6'   | '2.2' |
+-------+-------+-------+
| 'd'   | '9'   | '8.1' |
+-------+-------+-------+

>>> table2 = convert(table1, 'foo', 'upper')
>>> table3 = convert(table2, 'bar', int)
>>> table4 = convert(table3, 'baz', float)
>>> table5 = extend(table4, 'quux', expr('{bar} * {baz}'))
>>> table6 = cut(table5, 'foo', 'quux')
>>> table7 = selectgt(table6, 'quux', 10)
>>> table8 = sort(table7, 'quux')
>>> look(table8)
+-------+--------------------+
| 'foo' | 'quux'             |
+=======+====================+
| 'C'   | 13.200000000000001 |
+-------+--------------------+
| 'B'   | 14.8               |
+-------+--------------------+
| 'D'   | 72.89999999999999  |
+-------+--------------------+

>>> tocsv(table8, 'output.csv')

Installation

This package is available from the Python Package Index (PyPI). On Linux distributions you should be able to do easy_install petl or pip install petl. On Windows or Mac, download the source distribution manually, extract it, and run python setup.py install.

Table containers and row iterators

This package defines the following convention for objects acting as containers of tabular data and supporting row-oriented iteration over the data.

A table container (or simply a table) is any object which satisfies the following:

  1. implements the __iter__ method
  2. __iter__ returns a row iterator (see below)
  3. all row iterators returned by __iter__ are independent, i.e., consuming items from one iterator will not affect any other iterators

A row iterator is an iterator which satisfies the following:

  1. each item returned by the iterator is either a list or a tuple
  2. the first item returned by the iterator is a header row comprising a list or tuple of fields
  3. each subsequent item returned by the iterator is a data row comprising a list or tuple of data values
  4. a field is typically a string (str or unicode) but may be an object of any type as long as it implements __str__, is hashable (i.e., can be used as a dictionary key) and is pickleable
  5. a data value is any pickleable object

So, for example, the list of lists shown below is a table:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]

Note that, under this convention, an object returned by the csv.reader() function from the standard Python csv module is a row iterator and not a table, because it can only be iterated over once, e.g.:

>>> from StringIO import StringIO
>>> import csv
>>> csvdata = """foo,bar
... a,1
... b,2
... """
>>> rowiterator = csv.reader(StringIO(csvdata))
>>> for row in rowiterator:
...     print row
...
['foo', 'bar']
['a', '1']
['b', '2']
>>> for row in rowiterator:
...     print row
...
>>> # can only iterate once

However, it is straightforward to define functions that support the above convention for table objects and provide access to data from CSV or other types of file or data source, see e.g. the fromcsv() function in this package.
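
For example, here is a minimal sketch of such a table class (CSVView is a hypothetical name, not part of petl), which opens a fresh reader on each call to __iter__ so that all row iterators are independent:

>>> import csv
>>> with open('demo.csv', 'w') as f:
...     f.write('foo,bar\na,1\nb,2\n')
...
>>> class CSVView(object):
...     def __init__(self, filename):
...         self.filename = filename
...     def __iter__(self):
...         # a fresh reader per call, so row iterators are independent
...         return csv.reader(open(self.filename, 'rb'))
...
>>> tbl = CSVView('demo.csv')
>>> [row for row in tbl]
[['foo', 'bar'], ['a', '1'], ['b', '2']]
>>> [row for row in tbl]
[['foo', 'bar'], ['a', '1'], ['b', '2']]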

The main reason for requiring that table objects support independent row iterators (point 3) is that data from a table may need to be iterated over several times. E.g., when using petl in interactive mode to build up a sequence of data transformation steps, the user might want to examine outputs from several intermediate steps.

Note that this convention does not place any restrictions on the lengths of header and data rows. A table may return a header row and/or data rows of varying lengths.

Caching / memoization

This package tries to make efficient use of memory by using iterators and lazy evaluation where possible. However, some transformations cannot be done without building data structures, either in memory or on disk.

An example is the sort() function, which will either sort a table entirely in memory, or will sort the table in memory in chunks, writing chunks to disk and performing a final merge sort on the chunks. Which strategy is used will depend on the arguments passed into the sort() function when it is called.

In either case, the sorting can take some time, and if the sorted data will be used more than once, it is obviously undesirable to throw away the sorted data and start again from scratch each time. It is better to cache (a.k.a. memoize) the sorted data, if possible, so it can be re-used. However, if a table upstream of the sort is mutable, there needs to be some way of discovering whether data have been changed since the last sort was performed, and hence whether the cached data are still fresh or not.

There are also cases where, even though data are generated in a purely iterative fashion, some programs may still want to cache some or all of the data. E.g., where data are calculated dynamically and are relatively expensive to compute, or require scanning many rows, or where data are being retrieved via a network and there is latency. Again, there needs to be some way of finding out whether cached data are fresh or not.

To support caching, it is recommended (but not required) that table container objects also implement the cachetag method. This method should return an integer that changes whenever the table's fields or data change (i.e., whenever a row iterator would yield a different sequence of items). All petl table objects implement this method, where applicable.

Note in particular that the functions fromcsv(), frompickle() and fromsqlite3() return tables that implement the cachetag method, and that the implementation of cachetag in these cases depends on a checksum function operating on the underlying data file. By default, statsum() is used as the checksum function, which is cheap to compute but crude because it relies only on file name, size and time of modification, and on some systems this will not reveal changes within the same second that preserve file size. If you need a finer level of granularity, use either adler32sum() or crc32sum() instead. These can be passed in as arguments to fromcsv(), frompickle() and fromsqlite3(), or can be set globally, e.g.:

>>> import petl.io
>>> petl.io.defaultsumfun = petl.io.adler32sum

Care must be taken to ensure correct implementation of the cachetag method where a table is generating data dynamically from another table or tables. In these cases, the state of the upstream tables must be considered when generating a cachetag value. One strategy is to construct cachetag values by hashing the internal configuration attributes of a table, along with the cachetag values from any tables immediately upstream.
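
E.g., a minimal sketch of this strategy (the class and attribute names here are hypothetical):

>>> class ConvertView(object):
...     def __init__(self, source, field, conversion):
...         self.source = source
...         self.field = field
...         self.conversion = conversion
...     def cachetag(self):
...         # changes if this table's configuration changes, or if the
...         # fields or data of the upstream table change
...         return hash((self.source.cachetag(), self.field, self.conversion))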

In some cases, it may be difficult to determine whether data have changed, e.g., where data are being retrieved from a database. In these cases it is suggested that table objects use a user-configurable time to live (TTL) to generate cachetag values. E.g., where a database table is updated once a day at most, a TTL of a few hours would enable data to be cached during an interactive session, which might improve usability for someone exploring the data or developing a transformation script.
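
E.g., a minimal sketch of a TTL-based cachetag (again, the names are hypothetical):

>>> import time
>>> class DatabaseView(object):
...     def __init__(self, connection, query, ttl=3600):
...         self.connection = connection
...         self.query = query
...         self.ttl = ttl
...     def cachetag(self):
...         # the returned value changes at most once every ttl seconds
...         return hash((self.query, int(time.time() // self.ttl)))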

Utility functions

petl.header(table)

Return the header row for the given table. E.g.:

>>> from petl import header
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> header(table)
['foo', 'bar']
petl.data(table, *sliceargs)

Return an iterator over the data rows for the given table. E.g.:

>>> from petl import data
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> it = data(table)
>>> it.next()
['a', 1]
>>> it.next()
['b', 2]

Changed in version 0.3.

Positional arguments can be used to slice the data rows. The sliceargs are passed to itertools.islice().
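
E.g. (a sketch, following the itertools.islice() convention of either a stop argument, or start, stop and step):

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 3]]
>>> list(data(table, 2))
[['a', 1], ['b', 2]]
>>> list(data(table, 1, 3))
[['b', 2], ['c', 3]]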

petl.dataslice(table, *args)

Deprecated since version 0.3.

Use data() instead; it supports slice arguments.

petl.records(table, missing=None)

Return an iterator over the data in the table, yielding each row as a dictionary of values indexed by field name. E.g.:

>>> from petl import records
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> it = records(table)
>>> it.next()
{'foo': 'a', 'bar': 1}
>>> it.next()
{'foo': 'b', 'bar': 2}
>>> it.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

Short rows are padded, e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b']]
>>> it = records(table)
>>> it.next()
{'foo': 'a', 'bar': 1}
>>> it.next()
{'foo': 'b', 'bar': None}
>>> it.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
petl.fieldnames(table)

Return the string values of all fields for the given table. If the fields are strings, then this function is equivalent to header(), i.e.:

>>> from petl import header, fieldnames
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> header(table)
['foo', 'bar']
>>> fieldnames(table)
['foo', 'bar']
>>> header(table) == fieldnames(table)
True

Allows for custom field objects, e.g.:

>>> class CustomField(object):
...     def __init__(self, id, description):
...         self.id = id
...         self.description = description
...     def __str__(self):
...         return self.id
...     def __repr__(self):
...         return 'CustomField(%r, %r)' % (self.id, self.description)
... 
>>> table = [[CustomField('foo', 'Get some foo.'), CustomField('bar', 'A lot of bar.')], 
...          ['a', 1], 
...          ['b', 2]]
>>> header(table)
[CustomField('foo', 'Get some foo.'), CustomField('bar', 'A lot of bar.')]
>>> fieldnames(table)    
['foo', 'bar']
petl.rowcount(table)

Count the number of data rows in a table. E.g.:

>>> from petl import rowcount
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> rowcount(table)
2
petl.look(table, *sliceargs)

Format a portion of the table as text for inspection in an interactive session. E.g.:

>>> from petl import look
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> look(table)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+

Any irregularities in the length of header and/or data rows will appear as blank cells, e.g.:

>>> table = [['foo', 'bar'], ['a'], ['b', 2, True]]
>>> look(table)
+-------+-------+------+
| 'foo' | 'bar' |      |
+=======+=======+======+
| 'a'   |       |      |
+-------+-------+------+
| 'b'   | 2     | True |
+-------+-------+------+

Changed in version 0.3.

Positional arguments can be used to slice the data rows. The sliceargs are passed to itertools.islice().

petl.lookall(table)

Format the entire table as text for inspection in an interactive session.

N.B., this will load the entire table into memory.

petl.see(table, *sliceargs)

Format a portion of a table as text in a column-oriented layout for inspection in an interactive session. E.g.:

>>> from petl import see
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2]]
>>> see(table)
'foo': 'a', 'b'
'bar': 1, 2

Useful for tables with a large number of fields.

Changed in version 0.3.

Positional arguments can be used to slice the data rows. The sliceargs are passed to itertools.islice().

petl.values(table, field, *sliceargs)

Return an iterator over values in a given field or fields. E.g.:

>>> from petl import values
>>> table = [['foo', 'bar'], ['a', True], ['b'], ['b', True], ['c', False]]
>>> foo = values(table, 'foo')
>>> foo.next()
'a'
>>> foo.next()
'b'
>>> foo.next()
'b'
>>> foo.next()
'c'
>>> foo.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

The field argument can be a single field name or index (starting from zero) or a tuple of field names and/or indexes.

If rows are uneven, any missing values are skipped, e.g.:

>>> table = [['foo', 'bar'], ['a', True], ['b'], ['b', True], ['c', False]]
>>> bar = values(table, 'bar')
>>> bar.next()
True
>>> bar.next()
True
>>> bar.next()
False
>>> bar.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

More than one field can be selected, e.g.:

>>> table = [['foo', 'bar', 'baz'],
...          [1, 'a', True],
...          [2, 'bb', True],
...          [3, 'd', False]]
>>> foobaz = values(table, ('foo', 'baz'))
>>> foobaz.next()
(1, True)
>>> foobaz.next()
(2, True)
>>> foobaz.next()
(3, False)
>>> foobaz.next()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

Changed in version 0.3.

Positional arguments can be used to slice the data rows. The sliceargs are passed to itertools.islice().

petl.valueset(table, field)

Deprecated since version 0.3.

Use set(values(table, *fields)) instead; see also values().

petl.valuecount(table, field, value)

Count the number of occurrences of value under the given field. Returns the absolute count and relative frequency as a pair. E.g.:

>>> from petl import valuecount
>>> table = (('foo', 'bar'), ('a', 1), ('b', 2), ('b', 7))
>>> n, f = valuecount(table, 'foo', 'b')
>>> n
2
>>> f
0.6666666666666666

The field argument can be a single field name or index (starting from zero) or a tuple of field names and/or indexes.

petl.valuecounter(table, field)

Find distinct values for the given field and count the number of occurrences. Returns a dict mapping values to counts. E.g.:

>>> from petl import valuecounter
>>> table = [['foo', 'bar'], ['a', True], ['b'], ['b', True], ['c', False]]
>>> c = valuecounter(table, 'foo')
>>> c['a']
1
>>> c['b']
2
>>> c['c']
1
>>> c
Counter({'b': 2, 'a': 1, 'c': 1})

The field argument can be a single field name or index (starting from zero) or a tuple of field names and/or indexes.

petl.valuecounts(table, field)

Find distinct values for the given field and count the number and relative frequency of occurrences. Returns a table mapping values to counts, with most common values first. E.g.:

>>> from petl import valuecounts
>>> table = [['foo', 'bar'], ['a', True], ['b'], ['b', True], ['c', False]]
>>> valuecounts(table, 'foo')
[('value', 'count', 'frequency'), ('b', 2, 0.5), ('a', 1, 0.25), ('c', 1, 0.25)]

The field argument can be a single field name or index (starting from zero) or a tuple of field names and/or indexes.

Can be combined with look, e.g.:

>>> from petl import look    
>>> look(valuecounts(table, 'foo'))
+---------+---------+-------------+
| 'value' | 'count' | 'frequency' |
+=========+=========+=============+
| 'b'     | 2       | 0.5         |
+---------+---------+-------------+
| 'a'     | 1       | 0.25        |
+---------+---------+-------------+
| 'c'     | 1       | 0.25        |
+---------+---------+-------------+

>>> look(valuecounts(table, 'bar'))
+---------+---------+--------------------+
| 'value' | 'count' | 'frequency'        |
+=========+=========+====================+
| True    | 2       | 0.6666666666666666 |
+---------+---------+--------------------+
| False   | 1       | 0.3333333333333333 |
+---------+---------+--------------------+
petl.unique(table, field)

Return True if there are no duplicate values for the given field(s), otherwise False. E.g.:

>>> from petl import unique
>>> table = [['foo', 'bar'], ['a', 1], ['b'], ['b', 2], ['c', 3, True]]
>>> unique(table, 'foo')
False
>>> unique(table, 'bar')
True

The field argument can be a single field name or index (starting from zero) or a tuple of field names and/or indexes.

petl.limits(table, field)

Find minimum and maximum values under the given field. E.g.:

>>> from petl import limits
>>> t1 = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> minv, maxv = limits(t1, 'bar')
>>> minv
1
>>> maxv
3

The field argument can be a field name or index (starting from zero).

petl.stats(table, field)

Calculate basic descriptive statistics on a given field. E.g.:

>>> from petl import stats
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, 2],
...          ['B', '2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', 'xyz', 9.0],
...          ['E', None]]
>>> stats(table, 'bar')    
{'count': 3, 'errors': 2, 'min': 1.0, 'max': 3.0, 'sum': 6.0, 'mean': 2.0}

The field argument can be a field name or index (starting from zero).

petl.lenstats(table, field)

Convenience function to report statistics on value lengths under the given field. E.g.:

>>> from petl import lenstats    
>>> table1 = [['foo', 'bar'],
...           [1, 'a'],
...           [2, 'aaa'],
...           [3, 'aa'],
...           [4, 'aaa'],
...           [5, 'aaaaaaaaaaa']]
>>> lenstats(table1, 'bar')
{'count': 5, 'errors': 0, 'min': 1.0, 'max': 11.0, 'sum': 20.0, 'mean': 4.0}
petl.rowlengths(table)

Report on row lengths found in the table. E.g.:

>>> from petl import look, rowlengths
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, 2],
...          ['B', '2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', 'xyz', 9.0],
...          ['E', None],
...          ['F', 9]]
>>> look(rowlengths(table))
+----------+---------+
| 'length' | 'count' |
+==========+=========+
| 3        | 3       |
+----------+---------+
| 2        | 2       |
+----------+---------+
| 4        | 1       |
+----------+---------+
petl.typecounter(table, field)

Count the number of values found for each Python type. E.g.:

>>> from petl import typecounter
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, 2],
...          ['B', u'2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', u'xyz', 9.0],
...          ['E', 42]]
>>> typecounter(table, 'foo')
Counter({'str': 4, 'unicode': 1})
>>> typecounter(table, 'bar')
Counter({'unicode': 3, 'int': 2})
>>> typecounter(table, 'baz')
Counter({'int': 1, 'float': 1, 'unicode': 1, 'str': 1})

The field argument can be a field name or index (starting from zero).

petl.typecounts(table, field)

Count the number of values found for each Python type and return a table mapping class names to counts. E.g.:

>>> from petl import look, typecounts
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, 2],
...          ['B', u'2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', u'xyz', 9.0],
...          ['E', 42]]
>>> look(typecounts(table, 'foo'))
+-----------+---------+
| 'type'    | 'count' |
+===========+=========+
| 'str'     | 4       |
+-----------+---------+
| 'unicode' | 1       |
+-----------+---------+

>>> look(typecounts(table, 'bar'))
+-----------+---------+
| 'type'    | 'count' |
+===========+=========+
| 'unicode' | 3       |
+-----------+---------+
| 'int'     | 2       |
+-----------+---------+

>>> look(typecounts(table, 'baz'))
+-----------+---------+
| 'type'    | 'count' |
+===========+=========+
| 'int'     | 1       |
+-----------+---------+
| 'float'   | 1       |
+-----------+---------+
| 'unicode' | 1       |
+-----------+---------+
| 'str'     | 1       |
+-----------+---------+

The field argument can be a field name or index (starting from zero).

petl.typeset(table, field)

Return a set containing all Python types found for values in the given field. E.g.:

>>> from petl import typeset
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, '2'],
...          ['B', u'2', '3.4'],
...          [u'B', u'3', '7.8', True],
...          ['D', u'xyz', 9.0],
...          ['E', 42]]
>>> typeset(table, 'foo') 
set([<type 'str'>, <type 'unicode'>])
>>> typeset(table, 'bar') 
set([<type 'int'>, <type 'unicode'>])
>>> typeset(table, 'baz') 
set([<type 'float'>, <type 'str'>])

The field argument can be a field name or index (starting from zero).

petl.parsecounter(table, field, parsers={'int': <type 'int'>, 'float': <type 'float'>})

Count the number of str or unicode values under the given field that can be parsed as ints, floats or via custom parser functions. Return a pair of Counter objects, the first mapping parser names to the number of strings successfully parsed, the second mapping parser names to the number of errors. E.g.:

>>> from petl import parsecounter
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 'aaa', 2],
...          ['B', u'2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', '3.7', 9.0],
...          ['E', 42]]
>>> counter, errors = parsecounter(table, 'bar')
>>> counter
Counter({'float': 3, 'int': 2})
>>> errors
Counter({'int': 2, 'float': 1})

The field argument can be a field name or index (starting from zero).

petl.parsecounts(table, field, parsers={'int': <type 'int'>, 'float': <type 'float'>})

Count the number of str or unicode values that can be parsed as ints, floats or via custom parser functions. Return a table mapping parser names to the number of values successfully parsed and the number of errors. E.g.:

>>> from petl import look, parsecounts
>>> table = [['foo', 'bar', 'baz'],
...          ['A', 'aaa', 2],
...          ['B', u'2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', '3.7', 9.0],
...          ['E', 42]]
>>> look(parsecounts(table, 'bar'))
+---------+---------+----------+
| 'type'  | 'count' | 'errors' |
+=========+=========+==========+
| 'float' | 3       | 1        |
+---------+---------+----------+
| 'int'   | 2       | 2        |
+---------+---------+----------+

The field argument can be a field name or index (starting from zero).

petl.dateparser(format)

Return a function to parse strings as datetime.date objects using a given format. E.g.:

>>> from petl import dateparser
>>> isodate = dateparser('%Y-%m-%d')
>>> isodate('2002-12-25')
datetime.date(2002, 12, 25)
>>> isodate('2002-02-30')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1032, in parser
    return parser
  File "/usr/lib/python2.7/_strptime.py", line 440, in _strptime
    datetime_date(year, 1, 1).toordinal() + 1
ValueError: day is out of range for month

Can be used with parsecounts(), e.g.:

>>> from petl import look, parsecounts, dateparser
>>> table = [['when', 'who'],
...          ['2002-12-25', 'Alex'],
...          ['2004-09-12', 'Gloria'],
...          ['2002-13-25', 'Marty'],
...          ['2002-02-30', 'Melman']]
>>> parsers={'date': dateparser('%Y-%m-%d')}
>>> look(parsecounts(table, 'when', parsers))
+--------+---------+----------+
| 'type' | 'count' | 'errors' |
+========+=========+==========+
| 'date' | 2       | 2        |
+--------+---------+----------+
petl.timeparser(format)

Return a function to parse strings as datetime.time objects using a given format. E.g.:

>>> from petl import timeparser
>>> isotime = timeparser('%H:%M:%S')
>>> isotime('00:00:00')
datetime.time(0, 0)
>>> isotime('13:00:00')
datetime.time(13, 0)
>>> isotime('12:00:99')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1046, in parser
    
  File "/usr/lib/python2.7/_strptime.py", line 328, in _strptime
    data_string[found.end():])
ValueError: unconverted data remains: 9
>>> isotime('25:00:00')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1046, in parser
    
  File "/usr/lib/python2.7/_strptime.py", line 325, in _strptime
    (data_string, format))
ValueError: time data '25:00:00' does not match format '%H:%M:%S'

Can be used with parsecounts(), e.g.:

>>> from petl import look, parsecounts, timeparser
>>> table = [['when', 'who'],
...          ['00:00:00', 'Alex'],
...          ['12:02:45', 'Gloria'],
...          ['25:01:01', 'Marty'],
...          ['09:70:00', 'Melman']]
>>> parsers={'time': timeparser('%H:%M:%S')}
>>> look(parsecounts(table, 'when', parsers))
+--------+---------+----------+
| 'type' | 'count' | 'errors' |
+========+=========+==========+
| 'time' | 2       | 2        |
+--------+---------+----------+
petl.datetimeparser(format)

Return a function to parse strings as datetime.datetime objects using a given format. E.g.:

>>> from petl import datetimeparser
>>> isodatetime = datetimeparser('%Y-%m-%dT%H:%M:%S')
>>> isodatetime('2002-12-25T00:00:00')
datetime.datetime(2002, 12, 25, 0, 0)
>>> isodatetime('2002-12-25T00:00:99')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1018, in parser
    return datetime.strptime(value.strip(), format)
  File "/usr/lib/python2.7/_strptime.py", line 328, in _strptime
    data_string[found.end():])
ValueError: unconverted data remains: 9

Can be used with parsecounts(), e.g.:

>>> from petl import look, parsecounts, datetimeparser
>>> table = [['when', 'who'],
...          ['2002-12-25T00:00:00', 'Alex'],
...          ['2004-09-12T01:10:11', 'Gloria'],
...          ['2002-13-25T00:00:00', 'Marty'],
...          ['2002-02-30T07:09:00', 'Melman']]
>>> parsers={'datetime': datetimeparser('%Y-%m-%dT%H:%M:%S')}
>>> look(parsecounts(table, 'when', parsers))
+------------+---------+----------+
| 'type'     | 'count' | 'errors' |
+============+=========+==========+
| 'datetime' | 2       | 2        |
+------------+---------+----------+
petl.boolparser(true_strings=['true', 't', 'yes', 'y', '1'], false_strings=['false', 'f', 'no', 'n', '0'], case_sensitive=False)

Return a function to parse strings as bool objects using a given set of string representations for True and False. E.g.:

>>> from petl import boolparser    
>>> mybool = boolparser(true_strings=['yes', 'y'], false_strings=['no', 'n'])
>>> mybool('y')
True
>>> mybool('Y')
True
>>> mybool('yes')
True
>>> mybool('No')
False
>>> mybool('nO')
False
>>> mybool('true')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1175, in parser
    raise ValueError('value is not one of recognised boolean strings: %r' % value)
ValueError: value is not one of recognised boolean strings: 'true'
>>> mybool('foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 1175, in parser
    raise ValueError('value is not one of recognised boolean strings: %r' % value)
ValueError: value is not one of recognised boolean strings: 'foo'

Can be used with parsecounts(), e.g.:

>>> from petl import look, parsecounts, boolparser
>>> table = [['who', 'vote'],
...          ['Alex', 'yes'],
...          ['Gloria', 'N'],
...          ['Marty', 'hmmm'],
...          ['Melman', 'nope']]
>>> mybool = boolparser(true_strings=['yes', 'y'], false_strings=['no', 'n'])
>>> parsers = {'bool': mybool}
>>> look(parsecounts(table, 'vote', parsers))
+--------+---------+----------+
| 'type' | 'count' | 'errors' |
+========+=========+==========+
| 'bool' | 2       | 2        |
+--------+---------+----------+
petl.lookup(table, keyspec, valuespec=None, dictionary=None)

Load a dictionary with data from the given table. E.g.:

>>> from petl import lookup
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = lookup(table, 'foo', 'bar')
>>> lkp['a']
[1]
>>> lkp['b']
[2, 3]

If no valuespec argument is given, defaults to the whole row (as a tuple), e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = lookup(table, 'foo')
>>> lkp['a']
[('a', 1)]
>>> lkp['b']
[('b', 2), ('b', 3)]

Compound keys are supported, e.g.:

>>> t2 = [['foo', 'bar', 'baz'],
...       ['a', 1, True],
...       ['b', 2, False],
...       ['b', 3, True],
...       ['b', 3, False]]
>>> lkp = lookup(t2, ('foo', 'bar'), 'baz')
>>> lkp[('a', 1)]
[True]
>>> lkp[('b', 2)]
[False]
>>> lkp[('b', 3)]
[True, False]

Data can be loaded into an existing dictionary-like object, including persistent dictionaries created via the shelve module, e.g.:

>>> import shelve
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = shelve.open('mylookup.dat')
>>> lkp = lookup(table, 'foo', 'bar', lkp)
>>> lkp.close()
>>> exit()
$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> lkp = shelve.open('mylookup.dat')
>>> lkp['a']
[1]
>>> lkp['b']
[2, 3]
petl.lookupone(table, keyspec, valuespec=None, dictionary=None, strict=True)

Load a dictionary with data from the given table, assuming there is at most one value for each key. E.g.:

>>> from petl import lookupone
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 2]]
>>> lkp = lookupone(table, 'foo', 'bar')
>>> lkp['a']
1
>>> lkp['b']
2
>>> lkp['c']
2

If the specified key is not unique, will raise DuplicateKeyError, e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = lookupone(table, 'foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 451, in lookupone
petl.util.DuplicateKeyError

Unique checks can be overridden by providing strict=False, in which case the last value wins, e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = lookupone(table, 'foo', 'bar', strict=False)
>>> lkp['a']
1
>>> lkp['b']
3

Compound keys are supported, e.g.:

>>> t2 = [['foo', 'bar', 'baz'],
...       ['a', 1, True],
...       ['b', 2, False],
...       ['b', 3, True]]
>>> lkp = lookupone(t2, ('foo', 'bar'), 'baz')
>>> lkp[('a', 1)]
True
>>> lkp[('b', 2)]
False
>>> lkp[('b', 3)]
True

Data can be loaded into an existing dictionary-like object, including persistent dictionaries created via the shelve module, e.g.:

>>> from petl import lookupone
>>> import shelve
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 2]]
>>> lkp = shelve.open('mylookupone.dat')
>>> lkp = lookupone(table, 'foo', 'bar', dictionary=lkp)
>>> lkp.close()
>>> exit()
$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> lkp = shelve.open('mylookupone.dat')
>>> lkp['a']
1
>>> lkp['b']
2
>>> lkp['c']
2
petl.recordlookup(table, keyspec, dictionary=None)

Load a dictionary with data from the given table, mapping to records. E.g.:

>>> from petl import recordlookup
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = recordlookup(table, 'foo')
>>> lkp['a']
[{'foo': 'a', 'bar': 1}]
>>> lkp['b']
[{'foo': 'b', 'bar': 2}, {'foo': 'b', 'bar': 3}]

Compound keys are supported, e.g.:

>>> t2 = [['foo', 'bar', 'baz'],
...       ['a', 1, True],
...       ['b', 2, False],
...       ['b', 3, True],
...       ['b', 3, False]]
>>> lkp = recordlookup(t2, ('foo', 'bar'))
>>> lkp[('a', 1)]
[{'baz': True, 'foo': 'a', 'bar': 1}]
>>> lkp[('b', 2)]
[{'baz': False, 'foo': 'b', 'bar': 2}]
>>> lkp[('b', 3)]
[{'baz': True, 'foo': 'b', 'bar': 3}, {'baz': False, 'foo': 'b', 'bar': 3}]

Data can be loaded into an existing dictionary-like object, including persistent dictionaries created via the shelve module, e.g.:

>>> import shelve
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = shelve.open('myrecordlookup.dat')
>>> lkp = recordlookup(table, 'foo', dictionary=lkp)
>>> lkp.close()
>>> exit()
$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> lkp = shelve.open('myrecordlookup.dat')
>>> lkp['a']
[{'foo': 'a', 'bar': 1}]
>>> lkp['b']
[{'foo': 'b', 'bar': 2}, {'foo': 'b', 'bar': 3}]
petl.recordlookupone(table, keyspec, dictionary=None, strict=True)

Load a dictionary with data from the given table, mapping to records, assuming there is at most one record for each key. E.g.:

>>> from petl import recordlookupone
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 2]]
>>> lkp = recordlookupone(table, 'foo')
>>> lkp['a']
{'foo': 'a', 'bar': 1}
>>> lkp['b']
{'foo': 'b', 'bar': 2}
>>> lkp['c']
{'foo': 'c', 'bar': 2}

If the specified key is not unique, will raise DuplicateKeyError, e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = recordlookupone(table, 'foo')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "petl/util.py", line 451, in lookupone
petl.util.DuplicateKeyError

Unique checks can be overridden by providing strict=False, in which case the last record wins, e.g.:

>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['b', 3]]
>>> lkp = recordlookupone(table, 'foo', strict=False)
>>> lkp['a']
{'foo': 'a', 'bar': 1}
>>> lkp['b']
{'foo': 'b', 'bar': 3}

Compound keys are supported, e.g.:

>>> t2 = [['foo', 'bar', 'baz'],
...       ['a', 1, True],
...       ['b', 2, False],
...       ['b', 3, True]]
>>> lkp = recordlookupone(t2, ('foo', 'bar'), strict=False)
>>> lkp[('a', 1)]
{'baz': True, 'foo': 'a', 'bar': 1}
>>> lkp[('b', 2)]
{'baz': False, 'foo': 'b', 'bar': 2}
>>> lkp[('b', 3)]
{'baz': True, 'foo': 'b', 'bar': 3}

Data can be loaded into an existing dictionary-like object, including persistent dictionaries created via the shelve module, e.g.:

>>> import shelve
>>> lkp = shelve.open('myrecordlookupone.dat')
>>> table = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 2]]
>>> lkp = recordlookupone(table, 'foo', dictionary=lkp)
>>> lkp.close()
>>> exit()
$ python
Python 2.7.1+ (r271:86832, Apr 11 2011, 18:05:24) 
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import shelve
>>> lkp = shelve.open('myrecordlookupone.dat')
>>> lkp['a']
{'foo': 'a', 'bar': 1}
>>> lkp['b']
{'foo': 'b', 'bar': 2}
>>> lkp['c']
{'foo': 'c', 'bar': 2}
petl.statsum(filename)

Compute a crude checksum of the file by hashing the file’s absolute path name, the file size, and the file’s time of last modification. N.B., on some systems this will give a 1s resolution, i.e., any changes to a file within the same second that preserve the file size will not change the result.

petl.adler32sum(filename)

Compute the Adler-32 checksum of the file at the given location. Returns the checksum as an integer; use hex(result) to view it as hexadecimal.

petl.crc32sum(filename)

Compute the CRC32 checksum of the file at the given location. Returns the checksum as an integer; use hex(result) to view it as hexadecimal.
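
E.g., a sketch showing how a content change that preserves file size is detected by adler32sum() (the checksum values themselves will vary, so only a comparison is shown):

>>> from petl import adler32sum
>>> with open('check.txt', 'w') as f:
...     f.write('foo,bar\na,1\n')
...
>>> before = adler32sum('check.txt')
>>> with open('check.txt', 'w') as f:
...     f.write('foo,bar\na,2\n')
...
>>> before == adler32sum('check.txt')
False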

petl.expr(s)

Construct a function operating on a record (i.e., a dictionary representation of a data row, indexed by field name).

The expression string is converted into a lambda function by prepending the string with 'lambda rec: ', then replacing anything enclosed in curly braces (e.g., "{foo}") with a lookup on the record (e.g., "rec['foo']"), then finally calling eval().

So, e.g., the expression string "{foo} * {bar}" is converted to the function lambda rec: rec['foo'] * rec['bar']
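
E.g., a minimal illustration of the conversion described above, calling the constructed function directly on a record:

>>> from petl import expr
>>> f = expr('{foo} * {bar}')
>>> f({'foo': 2, 'bar': 3})
6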

petl.strjoin(s)

Return a function to join sequences using s as the separator.
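
E.g. (a sketch of the expected behaviour):

>>> from petl import strjoin
>>> dashjoin = strjoin('-')
>>> dashjoin(['a', 'b', 'c'])
'a-b-c'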

Extracting (reading) tables from files and databases

petl.fromcsv(filename, checksumfun=None, **kwargs)

Wrapper for the standard csv.reader() function. Returns a table providing access to the data in the given delimited file. The filename argument is the path of the delimited file; all other keyword arguments are passed to csv.reader(). E.g.:

>>> import csv
>>> # set up a CSV file to demonstrate with
... with open('test.csv', 'wb') as f:
...     writer = csv.writer(f, delimiter='\t')
...     writer.writerow(['foo', 'bar'])
...     writer.writerow(['a', 1])
...     writer.writerow(['b', 2])
...     writer.writerow(['c', 2])
...
>>> # now demonstrate the use of petl.fromcsv
... from petl import fromcsv, look
>>> testcsv = fromcsv('test.csv', delimiter='\t')
>>> look(testcsv)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '1'   |
+-------+-------+
| 'b'   | '2'   |
+-------+-------+
| 'c'   | '2'   |
+-------+-------+

Note that all data values are strings, and any intended numeric values will need to be converted; see also convert().

The returned table object implements the cachetag() method. If the checksumfun argument is not given, the default checksum function (whatever petl.io.defaultsumfun is currently set to) will be used to calculate cachetag values.

petl.frompickle(filename, checksumfun=None)

Returns a table providing access to the data pickled in the given file. The rows in the table should have been pickled to the file one at a time. E.g.:

>>> import pickle
>>> # set up a file to demonstrate with
... with open('test.dat', 'wb') as f:
...     pickle.dump(['foo', 'bar'], f)
...     pickle.dump(['a', 1], f)
...     pickle.dump(['b', 2], f)
...     pickle.dump(['c', 2.5], f)
...
>>> # now demonstrate the use of petl.frompickle
... from petl import frompickle, look
>>> testdat = frompickle('test.dat')
>>> look(testdat)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 2.5   |
+-------+-------+

The returned table object implements the cachetag() method. If the checksumfun argument is not given, the default checksum function (whatever petl.io.defaultsumfun is currently set to) will be used to calculate cachetag values.

petl.fromsqlite3(filename, query, checksumfun=None)

Provides access to data from an sqlite3 database file via a given query. E.g.:

>>> import sqlite3
>>> from petl import look, fromsqlite3    
>>> # initial data
>>> data = [['a', 1],
...         ['b', 2],
...         ['c', 2.0]]
>>> connection = sqlite3.connect('test.db')
>>> c = connection.cursor()
>>> c.execute('create table foobar (foo, bar)')
<sqlite3.Cursor object at 0x2240b90>
>>> for row in data:
...     c.execute('insert into foobar values (?, ?)', row)
... 
<sqlite3.Cursor object at 0x2240b90>
<sqlite3.Cursor object at 0x2240b90>
<sqlite3.Cursor object at 0x2240b90>
>>> connection.commit()
>>> c.close()
>>> # demonstrate the petl.fromsqlite3 function
... foobar = fromsqlite3('test.db', 'select * from foobar')
>>> look(foobar)    
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| u'a'  | 1     |
+-------+-------+
| u'b'  | 2     |
+-------+-------+
| u'c'  | 2.0   |
+-------+-------+

The returned table object implements the cachetag() method. If the checksumfun argument is not given, the default checksum function (whatever petl.io.defaultsumfun is currently set to) will be used to calculate cachetag values.

petl.fromdb(connection, query)

Provides access to data from any DB-API 2.0 connection via a given query. E.g., using sqlite3:

>>> import sqlite3
>>> from petl import look, fromdb
>>> connection = sqlite3.connect('test.db')
>>> table = fromdb(connection, 'select * from foobar')
>>> look(table)

E.g., using psycopg2 (assuming you’ve installed it first):

>>> import psycopg2
>>> from petl import look, fromdb
>>> connection = psycopg2.connect("dbname=test user=postgres")
>>> table = fromdb(connection, 'select * from test')
>>> look(table)

E.g., using MySQLdb (assuming you’ve installed it first):

>>> import MySQLdb
>>> from petl import look, fromdb
>>> connection = MySQLdb.connect(passwd="moonpie", db="thangs")
>>> table = fromdb(connection, 'select * from test')
>>> look(table)

The returned table object does not implement the cachetag() method.

petl.fromtext(filename, header=['lines'], checksumfun=None)

Construct a table from lines in the given text file. E.g.:

>>> # example data
... with open('test.txt', 'w') as f:
...     f.write('a\t1\n')
...     f.write('b\t2\n')
...     f.write('c\t3\n')
... 
>>> from petl import fromtext, look
>>> table1 = fromtext('test.txt')
>>> look(table1)
+--------------+
| 'lines'      |
+==============+
| 'a\t1\n'     |
+--------------+
| 'b\t2\n'     |
+--------------+
| 'c\t3\n'     |
+--------------+

Provides a starting point for custom handling of text files. E.g., using capture():

>>> from petl import capture
>>> table2 = capture(table1, 'lines', '(.*)\\t(.*)$', ['foo', 'bar'])
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '1'   |
+-------+-------+
| 'b'   | '2'   |
+-------+-------+
| 'c'   | '3'   |
+-------+-------+

Transforming tables

petl.rename(table, spec={})

Replace one or more fields in the table’s header row. E.g.:

>>> from petl import look, rename
>>> tbl1 = [['sex', 'age'],
...         ['M', 12],
...         ['F', 34],
...         ['-', 56]]
>>> tbl2 = rename(tbl1, {'sex': 'gender', 'age': 'age_years'})
>>> look(tbl2)
+----------+-------------+
| 'gender' | 'age_years' |
+==========+=============+
| 'M'      | 12          |
+----------+-------------+
| 'F'      | 34          |
+----------+-------------+
| '-'      | 56          |
+----------+-------------+

The returned table object can also be used to modify the field mapping, using the suffix notation, e.g.:

>>> tbl1 = [['sex', 'age'],
...         ['M', 12],
...         ['F', 34],
...         ['-', 56]]
>>> tbl2 = rename(tbl1)
>>> look(tbl2)
+-------+-------+
| 'sex' | 'age' |
+=======+=======+
| 'M'   | 12    |
+-------+-------+
| 'F'   | 34    |
+-------+-------+
| '-'   | 56    |
+-------+-------+

>>> tbl2['sex'] = 'gender'
>>> look(tbl2)
+----------+-------+
| 'gender' | 'age' |
+==========+=======+
| 'M'      | 12    |
+----------+-------+
| 'F'      | 34    |
+----------+-------+
| '-'      | 56    |
+----------+-------+
petl.setheader(table, fields)

Replace the header row in the given table with the given fields. E.g.:

>>> from petl import setheader, look
>>> table1 = [['foo', 'bar'],
...           ['a', 1],
...           ['b', 2]]
>>> table2 = setheader(table1, ['foofoo', 'barbar'])
>>> look(table2)
+----------+----------+
| 'foofoo' | 'barbar' |
+==========+==========+
| 'a'      | 1        |
+----------+----------+
| 'b'      | 2        |
+----------+----------+
petl.extendheader(table, fields)

Extend the header row in the given table with additional fields. E.g.:

>>> from petl import extendheader, look
>>> table1 = [['foo'],
...           ['a', 1, True],
...           ['b', 2, False]]
>>> table2 = extendheader(table1, ['bar', 'baz'])
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | 1     | True  |
+-------+-------+-------+
| 'b'   | 2     | False |
+-------+-------+-------+
petl.pushheader(table, fields)

Push rows down and prepend a header row. E.g.:

>>> from petl import pushheader, look    
>>> table1 = [['a', 1],
...           ['b', 2]]
>>> table2 = pushheader(table1, ['foo', 'bar'])
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+

Useful, e.g., where data are from a CSV file that has not included a header row.

petl.skip(table, n)

Skip n rows (including the header row). E.g.:

>>> from petl import skip, look
>>> table1 = [['#aaa', 'bbb', 'ccc'],
...           ['#mmm'],
...           ['foo', 'bar'],
...           ['a', 1],
...           ['b', 2]]
>>> table2 = skip(table1, 2)
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
petl.rowslice(table, *sliceargs)

Choose a subsequence of data rows. E.g.:

>>> from petl import rowslice, look
>>> table1 = [['foo', 'bar'],
...           ['a', 1],
...           ['b', 2],
...           ['c', 5],
...           ['d', 7],
...           ['f', 42]]
>>> table2 = rowslice(table1, 2)
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+

>>> table3 = rowslice(table1, 1, 4)
>>> look(table3)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 5     |
+-------+-------+
| 'd'   | 7     |
+-------+-------+

>>> table4 = rowslice(table1, 0, 5, 2)
>>> look(table4)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'c'   | 5     |
+-------+-------+
| 'f'   | 42    |
+-------+-------+

Changed in version 0.3.

Positional arguments can be used to slice the data rows. The sliceargs are passed to itertools.islice().

petl.head(table, n)

Choose the first n data rows. E.g.:

>>> from petl import head, look    
>>> table1 = [['foo', 'bar'],
...           ['a', 1],
...           ['b', 2],
...           ['c', 5],
...           ['d', 7],
...           ['f', 42],
...           ['f', 3],
...           ['h', 90],
...           ['k', 12],
...           ['l', 77],
...           ['q', 2]]
>>> table2 = head(table1, 4)
>>> look(table2)    
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 5     |
+-------+-------+
| 'd'   | 7     |
+-------+-------+

Syntactic sugar: equivalent to rowslice(table, n).

petl.tail(table, n)

Choose the last n data rows. E.g.:

>>> from petl import tail, look    
>>> table1 = [['foo', 'bar'],
...           ['a', 1],
...           ['b', 2],
...           ['c', 5],
...           ['d', 7],
...           ['f', 42],
...           ['f', 3],
...           ['h', 90],
...           ['k', 12],
...           ['l', 77],
...           ['q', 2]]
>>> table2 = tail(table1, 4)
>>> look(table2)    
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'h'   | 90    |
+-------+-------+
| 'k'   | 12    |
+-------+-------+
| 'l'   | 77    |
+-------+-------+
| 'q'   | 2     |
+-------+-------+
petl.cut(table, *args, **kwargs)

Choose and/or re-order columns. E.g.:

>>> from petl import look, cut    
>>> table1 = [['foo', 'bar', 'baz'],
...           ['A', 1, 2.7],
...           ['B', 2, 3.4],
...           ['B', 3, 7.8],
...           ['D', 42, 9.0],
...           ['E', 12]]
>>> table2 = cut(table1, 'foo', 'baz')
>>> look(table2)
+-------+-------+
| 'foo' | 'baz' |
+=======+=======+
| 'A'   | 2.7   |
+-------+-------+
| 'B'   | 3.4   |
+-------+-------+
| 'B'   | 7.8   |
+-------+-------+
| 'D'   | 9.0   |
+-------+-------+
| 'E'   | None  |
+-------+-------+

Note that any short rows will be padded with None values (or whatever is provided via the missing keyword argument).

Fields can also be specified by index, starting from zero. E.g.:

>>> table3 = cut(table1, 0, 2)
>>> look(table3)
+-------+-------+
| 'foo' | 'baz' |
+=======+=======+
| 'A'   | 2.7   |
+-------+-------+
| 'B'   | 3.4   |
+-------+-------+
| 'B'   | 7.8   |
+-------+-------+
| 'D'   | 9.0   |
+-------+-------+
| 'E'   | None  |
+-------+-------+

Field names and indices can be mixed, e.g.:

>>> table4 = cut(table1, 'bar', 0)
>>> look(table4)
+-------+-------+
| 'bar' | 'foo' |
+=======+=======+
| 1     | 'A'   |
+-------+-------+
| 2     | 'B'   |
+-------+-------+
| 3     | 'B'   |
+-------+-------+
| 42    | 'D'   |
+-------+-------+
| 12    | 'E'   |
+-------+-------+

Use the standard range() function to select a range of fields, e.g.:

>>> table5 = cut(table1, *range(0, 2))
>>> look(table5)    
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'A'   | 1     |
+-------+-------+
| 'B'   | 2     |
+-------+-------+
| 'B'   | 3     |
+-------+-------+
| 'D'   | 42    |
+-------+-------+
| 'E'   | 12    |
+-------+-------+

See also cutout().

petl.cutout(table, *args, **kwargs)

Remove fields. E.g.:

>>> from petl import cutout, look
>>> table1 = (('foo', 'bar', 'baz'),
...           ('A', 1, 2),
...           ('B', '2', '3.4'),
...           (u'B', u'3', u'7.8', True),
...           ('D', 'xyz', 9.0),
...           ('E', None))
>>> table2 = cutout(table1, 'bar')
>>> look(table2)
+-------+--------+
| 'foo' | 'baz'  |
+=======+========+
| 'A'   | 2      |
+-------+--------+
| 'B'   | '3.4'  |
+-------+--------+
| u'B'  | u'7.8' |
+-------+--------+
| 'D'   | 9.0    |
+-------+--------+
| 'E'   | None   |
+-------+--------+

See also cut().

New in version 0.3.

petl.select(table, *args, **kwargs)

Select rows meeting a condition. The second positional argument can be a function accepting a record (i.e., a dictionary representation of a row), e.g.:

>>> from petl import select, look     
>>> table1 = [['foo', 'bar', 'baz'],
...           ['a', 4, 9.3],
...           ['a', 2, 88.2],
...           ['b', 1, 23.3],
...           ['c', 8, 42.0],
...           ['d', 7, 100.9],
...           ['c', 2]]
>>> table2 = select(table1, lambda rec: rec['foo'] == 'a' and rec['baz'] > 88.1)
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | 2     | 88.2  |
+-------+-------+-------+

The second positional argument can also be an expression string, which will be converted to a function using expr(), e.g.:

>>> table3 = select(table1, "{foo} == 'a' and {baz} > 88.1")
>>> look(table3)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | 2     | 88.2  |
+-------+-------+-------+

The condition can also be applied to a single field, e.g.:

>>> table4 = select(table1, 'foo', lambda v: v == 'a')
>>> look(table4)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | 4     | 9.3   |
+-------+-------+-------+
| 'a'   | 2     | 88.2  |
+-------+-------+-------+
petl.selectop(table, field, value, op)

Select rows where the function op, applied to the value in the given field and the given value, returns True.
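
E.g., a sketch using operator.gt (assuming op is applied as op(value_in_field, value)):

>>> import operator
>>> from petl import selectop, look
>>> table1 = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 3]]
>>> table2 = selectop(table1, 'bar', 2, operator.gt)
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'c'   | 3     |
+-------+-------+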

petl.selecteq(table, field, value)

Select rows where the given field equals the given value.

petl.selectne(table, field, value)

Select rows where the given field does not equal the given value.

petl.selectlt(table, field, value)

Select rows where the given field is less than the given value.

petl.selectle(table, field, value)

Select rows where the given field is less than or equal to the given value.

petl.selectgt(table, field, value)

Select rows where the given field is greater than the given value.

petl.selectge(table, field, value)

Select rows where the given field is greater than or equal to the given value.

petl.selectrangeopen(table, field, minv, maxv)

Select rows where the given field is greater than or equal to minv and less than or equal to maxv.

petl.selectrangeopenleft(table, field, minv, maxv)

Select rows where the given field is greater than or equal to minv and less than maxv.

petl.selectrangeopenright(table, field, minv, maxv)

Select rows where the given field is greater than minv and less than or equal to maxv.

petl.selectrangeclosed(table, field, minv, maxv)

Select rows where the given field is greater than minv and less than maxv.

petl.selectin(table, field, value)

Select rows where the given field is a member of the given value.
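
E.g. (a sketch, using a set as the containing value):

>>> from petl import selectin, look
>>> table1 = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 3]]
>>> table2 = selectin(table1, 'foo', set(['a', 'c']))
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'c'   | 3     |
+-------+-------+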

petl.selectnotin(table, field, value)

Select rows where the given field is not a member of the given value.

petl.selectis(table, field, value)

Select rows where the given field is the given value.

petl.selectisnot(table, field, value)

Select rows where the given field is not the given value.

petl.selectisinstance(table, field, value)

Select rows where the given field is an instance of the given type.

petl.selectre(table, field, pattern, flags=0)

Select rows where a regular expression search using the given pattern on the given field returns a match. E.g.:

>>> from petl import selectre, look    
>>> table1 = (('foo', 'bar', 'baz'),
...           ('aa', 4, 9.3),
...           ('aaa', 2, 88.2),
...           ('b', 1, 23.3),
...           ('ccc', 8, 42.0),
...           ('bb', 7, 100.9),
...           ('c', 2))
>>> table2 = selectre(table1, 'foo', '[ab]{2}')
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'aa'  | 4     | 9.3   |
+-------+-------+-------+
| 'aaa' | 2     | 88.2  |
+-------+-------+-------+
| 'bb'  | 7     | 100.9 |
+-------+-------+-------+

See also re.search().

petl.rowselect(table, where)

Select rows matching a condition. The where argument should be a function accepting a row (list or tuple) as argument and returning True or False.

petl.recordselect(table, where, missing=None)

Select rows matching a condition. The where argument should be a function accepting a record (row as dictionary of values indexed by field name) as argument and returning True or False.

petl.rowlenselect(table, n)

Select rows of length n.

petl.fieldselect(table, field, where)

Select rows matching a condition. The where argument should be a function accepting a single data value as argument and returning True or False.
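
E.g. (a sketch):

>>> from petl import fieldselect, look
>>> table1 = [['foo', 'bar'], ['a', 1], ['b', 2], ['c', 3]]
>>> table2 = fieldselect(table1, 'bar', lambda v: v % 2 == 1)
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'c'   | 3     |
+-------+-------+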

petl.facet(table, field)

Return a dictionary mapping field values to tables. E.g.:

>>> from petl import facet, look
>>> table1 = [['foo', 'bar', 'baz'],
...           ['a', 4, 9.3],
...           ['a', 2, 88.2],
...           ['b', 1, 23.3],
...           ['c', 8, 42.0],
...           ['d', 7, 100.9],
...           ['c', 2]]
>>> foo = facet(table1, 'foo')
>>> foo.keys()
['a', 'c', 'b', 'd']
>>> look(foo['a'])
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'a'   | 4     | 9.3   |
+-------+-------+-------+
| 'a'   | 2     | 88.2  |
+-------+-------+-------+

>>> look(foo['c'])
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'c'   | 8     | 42.0  |
+-------+-------+-------+
| 'c'   | 2     |       |
+-------+-------+-------+
petl.rangefacet(table, field, width, minv=None, maxv=None, presorted=False, buffersize=None)

Return a dictionary mapping ranges to tables. E.g.:

>>> from petl import rangefacet, look
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4],
...           ['d', 3]]
>>> rf = rangefacet(table1, 'bar', 2)
>>> rf.keys()
[(1, 3), (3, 5), (5, 7), (7, 9)]
>>> look(rf[(1, 3)])
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'b'   | 2     |
+-------+-------+
| 'b'   | 1     |
+-------+-------+

>>> look(rf[(7, 9)])
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 7     |
+-------+-------+
| 'b'   | 9     |
+-------+-------+

Note that the last bin includes both edges.

petl.convert(table, field, *args, **kwargs)

Transform values under the given field via an arbitrary function or method invocation. E.g., using the built-in float() function:

>>> from petl import convert, look
>>> table1 = [['foo', 'bar'],
...           ['A', '2.4'],
...           ['B', '5.7'],
...           ['C', '1.2'],
...           ['D', '8.3']]
>>> table2 = convert(table1, 'bar', float)
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'A'   | 2.4   |
+-------+-------+
| 'B'   | 5.7   |
+-------+-------+
| 'C'   | 1.2   |
+-------+-------+
| 'D'   | 8.3   |
+-------+-------+

E.g., using a lambda function:

>>> table3 = convert(table2, 'bar', lambda v: v**2)
>>> look(table3)    
+-------+-------------------+
| 'foo' | 'bar'             |
+=======+===================+
| 'A'   | 5.76              |
+-------+-------------------+
| 'B'   | 32.49             |
+-------+-------------------+
| 'C'   | 1.44              |
+-------+-------------------+
| 'D'   | 68.89000000000001 |
+-------+-------------------+

A method of the data value can also be invoked by passing the method name. E.g.:

>>> table4 = convert(table1, 'foo', 'lower')
>>> look(table4)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '2.4' |
+-------+-------+
| 'b'   | '5.7' |
+-------+-------+
| 'c'   | '1.2' |
+-------+-------+
| 'd'   | '8.3' |
+-------+-------+

Arguments to the method invocation can also be given, e.g.:

>>> table5 = convert(table4, 'foo', 'replace', 'a', 'aa')
>>> look(table5)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'aa'  | '2.4' |
+-------+-------+
| 'b'   | '5.7' |
+-------+-------+
| 'c'   | '1.2' |
+-------+-------+
| 'd'   | '8.3' |
+-------+-------+

Useful for, among other things, string manipulation; see also the methods on the str type.

Values can also be translated via a dictionary, e.g.:

>>> table6 = [['gender', 'age'],
...           ['M', 12],
...           ['F', 34],
...           ['-', 56]]
>>> table7 = convert(table6, 'gender', {'M': 'male', 'F': 'female'})
>>> look(table7)
+----------+-------+
| 'gender' | 'age' |
+==========+=======+
| 'male'   | 12    |
+----------+-------+
| 'female' | 34    |
+----------+-------+
| '-'      | 56    |
+----------+-------+

Note that the field argument can be a list or tuple of fields, in which case the conversion will be applied to all of the fields given.
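
E.g., a sketch applying the same conversion to two fields at once:

>>> table8 = [['foo', 'bar'], ['A', 'X'], ['B', 'Y']]
>>> table9 = convert(table8, ('foo', 'bar'), 'lower')
>>> look(table9)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 'x'   |
+-------+-------+
| 'b'   | 'y'   |
+-------+-------+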

petl.fieldconvert(table, converters=None, failonerror=False, errorvalue=None)

Transform values in one or more fields via functions or method invocations. E.g.:

>>> from petl import fieldconvert, look    
>>> table1 = [['foo', 'bar'],
...           ['1', '2.4'],
...           ['3', '7.9'],
...           ['7', '2'],
...           ['8.3', '42.0'],
...           ['2', 'abc']]
>>> table2 = fieldconvert(table1, {'foo': int, 'bar': float})
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 1     | 2.4   |
+-------+-------+
| 3     | 7.9   |
+-------+-------+
| 7     | 2.0   |
+-------+-------+
| None  | 42.0  |
+-------+-------+
| 2     | None  |
+-------+-------+

Converters can also be added or updated using the suffix notation on the returned table object. E.g.:

>>> table3 = [['foo', 'bar', 'baz'],
...           ['1', '2.4', 14],
...           ['3', '7.9', 47],
...           ['7', '2', 11],
...           ['8.3', '42.0', 33],
...           ['2', 'abc', 'xyz']]
>>> table4 = fieldconvert(table3)
>>> look(table4)
+-------+--------+-------+
| 'foo' | 'bar'  | 'baz' |
+=======+========+=======+
| '1'   | '2.4'  | 14    |
+-------+--------+-------+
| '3'   | '7.9'  | 47    |
+-------+--------+-------+
| '7'   | '2'    | 11    |
+-------+--------+-------+
| '8.3' | '42.0' | 33    |
+-------+--------+-------+
| '2'   | 'abc'  | 'xyz' |
+-------+--------+-------+

>>> table4['foo'] = int
>>> table4['bar'] = float
>>> table4['baz'] = lambda v: v**2
>>> look(table4)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 1     | 2.4   | 196   |
+-------+-------+-------+
| 3     | 7.9   | 2209  |
+-------+-------+-------+
| 7     | 2.0   | 121   |
+-------+-------+-------+
| None  | 42.0  | 1089  |
+-------+-------+-------+
| 2     | None  | None  |
+-------+-------+-------+

Converters can be functions, method names, or method names with arguments. E.g.:

>>> table5 = [['foo', 'bar', 'baz'],
...           ['2', 'A', 'x'],
...           ['5', 'B', 'y'],
...           ['1', 'C', 'y'],
...           ['8.3', 'D', 'z']]
>>> table6 = fieldconvert(table5)
>>> table6['foo'] = int
>>> table6['bar'] = 'lower'
>>> table6['baz'] = 'replace', 'y', 'yy'
>>> look(table6)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 2     | 'a'   | 'x'   |
+-------+-------+-------+
| 5     | 'b'   | 'yy'  |
+-------+-------+-------+
| 1     | 'c'   | 'yy'  |
+-------+-------+-------+
| None  | 'd'   | 'z'   |
+-------+-------+-------+

Converters can also be dictionaries, which will be used to translate values under the specified field.
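
E.g., a minimal sketch using a dictionary converter to translate values (illustrative):

>>> table7 = [['gender', 'age'],
...           ['M', 12],
...           ['F', 34]]
>>> table8 = fieldconvert(table7, {'gender': {'M': 'male', 'F': 'female'}})
>>> look(table8)
+----------+-------+
| 'gender' | 'age' |
+==========+=======+
| 'male'   | 12    |
+----------+-------+
| 'female' | 34    |
+----------+-------+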

petl.extend(table, field, value, failonerror=False, errorvalue=None)

Extend a table with a fixed value or calculated field. E.g., using a fixed value:

>>> from petl import extend, look
>>> table1 = [['foo', 'bar'],
...           ['M', 12],
...           ['F', 34],
...           ['-', 56]]
>>> table2 = extend(table1, 'baz', 42)
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'M'   | 12    | 42    |
+-------+-------+-------+
| 'F'   | 34    | 42    |
+-------+-------+-------+
| '-'   | 56    | 42    |
+-------+-------+-------+

E.g., calculating the value:

>>> table2 = extend(table1, 'baz', lambda rec: rec['bar'] * 2)
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'M'   | 12    | 24    |
+-------+-------+-------+
| 'F'   | 34    | 68    |
+-------+-------+-------+
| '-'   | 56    | 112   |
+-------+-------+-------+

When using a calculated value, the function should accept a record, i.e., a dictionary representation of the row, with values indexed by field name.

An expression string can also be used via expr(), e.g.:

>>> table3 = extend(table1, 'baz', expr('{bar} * 2'))
>>> look(table3)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'M'   | 12    | 24    |
+-------+-------+-------+
| 'F'   | 34    | 68    |
+-------+-------+-------+
| '-'   | 56    | 112   |
+-------+-------+-------+

petl.capture(table, field, pattern, newfields=None, include_original=False, flags=0)

Extend the table with one or more new fields with values captured from an existing field searched via a regular expression. E.g.:

>>> from petl import capture, look
>>> table1 = [['id', 'variable', 'value'],
...           ['1', 'A1', '12'],
...           ['2', 'A2', '15'],
...           ['3', 'B1', '18'],
...           ['4', 'C12', '19']]
>>> table2 = capture(table1, 'variable', r'(\w)(\d+)', ['treat', 'time'])
>>> look(table2)
+------+---------+---------+--------+
| 'id' | 'value' | 'treat' | 'time' |
+======+=========+=========+========+
| '1'  | '12'    | 'A'     | '1'    |
+------+---------+---------+--------+
| '2'  | '15'    | 'A'     | '2'    |
+------+---------+---------+--------+
| '3'  | '18'    | 'B'     | '1'    |
+------+---------+---------+--------+
| '4'  | '19'    | 'C'     | '12'   |
+------+---------+---------+--------+

See also re.search().

By default the field on which the capture is performed is omitted. It can be included using the include_original argument, e.g.:

>>> table3 = capture(table1, 'variable', r'(\w)(\d+)', ['treat', 'time'], include_original=True)
>>> look(table3)
+------+------------+---------+---------+--------+
| 'id' | 'variable' | 'value' | 'treat' | 'time' |
+======+============+=========+=========+========+
| '1'  | 'A1'       | '12'    | 'A'     | '1'    |
+------+------------+---------+---------+--------+
| '2'  | 'A2'       | '15'    | 'A'     | '2'    |
+------+------------+---------+---------+--------+
| '3'  | 'B1'       | '18'    | 'B'     | '1'    |
+------+------------+---------+---------+--------+
| '4'  | 'C12'      | '19'    | 'C'     | '12'   |
+------+------------+---------+---------+--------+

petl.split(table, field, pattern, newfields=None, include_original=False, maxsplit=0, flags=0)

Extend the table with one or more new fields with values generated by splitting an existing value around occurrences of a regular expression. E.g.:

>>> from petl import split, look
>>> table1 = [['id', 'variable', 'value'],
...           ['1', 'parad1', '12'],
...           ['2', 'parad2', '15'],
...           ['3', 'tempd1', '18'],
...           ['4', 'tempd2', '19']]
>>> table2 = split(table1, 'variable', 'd', ['variable', 'day'])
>>> look(table2)
+------+---------+------------+-------+
| 'id' | 'value' | 'variable' | 'day' |
+======+=========+============+=======+
| '1'  | '12'    | 'para'     | '1'   |
+------+---------+------------+-------+
| '2'  | '15'    | 'para'     | '2'   |
+------+---------+------------+-------+
| '3'  | '18'    | 'temp'     | '1'   |
+------+---------+------------+-------+
| '4'  | '19'    | 'temp'     | '2'   |
+------+---------+------------+-------+

See also re.split().

petl.unpack(table, field, newfields=None, maxunpack=None, include_original=False)

Unpack data values that are lists or tuples. E.g.:

>>> from petl import unpack, look    
>>> table1 = [['foo', 'bar'],
...           [1, ['a', 'b']],
...           [2, ['c', 'd']],
...           [3, ['e', 'f']]]
>>> table2 = unpack(table1, 'bar', ['baz', 'quux'])
>>> look(table2)
+-------+-------+--------+
| 'foo' | 'baz' | 'quux' |
+=======+=======+========+
| 1     | 'a'   | 'b'    |
+-------+-------+--------+
| 2     | 'c'   | 'd'    |
+-------+-------+--------+
| 3     | 'e'   | 'f'    |
+-------+-------+--------+
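
By analogy with capture(), the original field can presumably be retained via the include_original argument; a sketch (illustrative):

>>> table3 = unpack(table1, 'bar', ['baz', 'quux'], include_original=True)
>>> look(table3)
+-------+------------+-------+--------+
| 'foo' | 'bar'      | 'baz' | 'quux' |
+=======+============+=======+========+
| 1     | ['a', 'b'] | 'a'   | 'b'    |
+-------+------------+-------+--------+
| 2     | ['c', 'd'] | 'c'   | 'd'    |
+-------+------------+-------+--------+
| 3     | ['e', 'f'] | 'e'   | 'f'    |
+-------+------------+-------+--------+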

petl.fieldmap(table, mappings=None, failonerror=False, errorvalue=None)

Transform a table, mapping fields arbitrarily between input and output. E.g.:

>>> from petl import fieldmap, look
>>> from collections import OrderedDict
>>> table1 = [['id', 'sex', 'age', 'height', 'weight'],
...           [1, 'male', 16, 1.45, 62.0],
...           [2, 'female', 19, 1.34, 55.4],
...           [3, 'female', 17, 1.78, 74.4],
...           [4, 'male', 21, 1.33, 45.2],
...           [5, '-', 25, 1.65, 51.9]]
>>> mappings = OrderedDict()
>>> # rename a field
... mappings['subject_id'] = 'id'
>>> # translate a field
... mappings['gender'] = 'sex', {'male': 'M', 'female': 'F'}
>>> # apply a calculation to a field
... mappings['age_months'] = 'age', lambda v: v * 12
>>> # apply a calculation to a combination of fields
... mappings['bmi'] = lambda rec: rec['weight'] / rec['height']**2 
>>> # transform and inspect the output
... table2 = fieldmap(table1, mappings)
>>> look(table2)
+--------------+----------+--------------+--------------------+
| 'subject_id' | 'gender' | 'age_months' | 'bmi'              |
+==============+==========+==============+====================+
| 1            | 'M'      | 192          | 29.48870392390012  |
+--------------+----------+--------------+--------------------+
| 2            | 'F'      | 228          | 30.8531967030519   |
+--------------+----------+--------------+--------------------+
| 3            | 'F'      | 204          | 23.481883600555488 |
+--------------+----------+--------------+--------------------+
| 4            | 'M'      | 252          | 25.55260331279326  |
+--------------+----------+--------------+--------------------+
| 5            | '-'      | 300          | 19.0633608815427   |
+--------------+----------+--------------+--------------------+

Field mappings can also be added and/or updated after the table is created via the suffix notation. E.g.:

>>> table3 = fieldmap(table1)
>>> table3['subject_id'] = 'id'
>>> table3['gender'] = 'sex', {'male': 'M', 'female': 'F'}
>>> table3['age_months'] = 'age', lambda v: v * 12
>>> # use an expression string this time
... table3['bmi'] = '{weight} / {height}**2'
>>> look(table3)
+--------------+----------+--------------+--------------------+
| 'subject_id' | 'gender' | 'age_months' | 'bmi'              |
+==============+==========+==============+====================+
| 1            | 'M'      | 192          | 29.48870392390012  |
+--------------+----------+--------------+--------------------+
| 2            | 'F'      | 228          | 30.8531967030519   |
+--------------+----------+--------------+--------------------+
| 3            | 'F'      | 204          | 23.481883600555488 |
+--------------+----------+--------------+--------------------+
| 4            | 'M'      | 252          | 25.55260331279326  |
+--------------+----------+--------------+--------------------+
| 5            | '-'      | 300          | 19.0633608815427   |
+--------------+----------+--------------+--------------------+

Note also that the mapping value can be an expression string, which will be converted to a lambda function via expr().

petl.rowmap(table, rowmapper, fields, failonerror=False)

Transform rows via an arbitrary function. E.g.:

>>> from petl import rowmap, look
>>> table1 = [['id', 'sex', 'age', 'height', 'weight'],
...           [1, 'male', 16, 1.45, 62.0],
...           [2, 'female', 19, 1.34, 55.4],
...           [3, 'female', 17, 1.78, 74.4],
...           [4, 'male', 21, 1.33, 45.2],
...           [5, '-', 25, 1.65, 51.9]]
>>> def rowmapper(row):
...     transmf = {'male': 'M', 'female': 'F'}
...     return [row[0],
...             transmf[row[1]] if row[1] in transmf else row[1],
...             row[2] * 12,
...             row[4] / row[3] ** 2]
... 
>>> table2 = rowmap(table1, rowmapper, fields=['subject_id', 'gender', 'age_months', 'bmi'])  
>>> look(table2)    
+--------------+----------+--------------+--------------------+
| 'subject_id' | 'gender' | 'age_months' | 'bmi'              |
+==============+==========+==============+====================+
| 1            | 'M'      | 192          | 29.48870392390012  |
+--------------+----------+--------------+--------------------+
| 2            | 'F'      | 228          | 30.8531967030519   |
+--------------+----------+--------------+--------------------+
| 3            | 'F'      | 204          | 23.481883600555488 |
+--------------+----------+--------------+--------------------+
| 4            | 'M'      | 252          | 25.55260331279326  |
+--------------+----------+--------------+--------------------+
| 5            | '-'      | 300          | 19.0633608815427   |
+--------------+----------+--------------+--------------------+

The rowmapper function should accept a row (list or tuple) and return a single row (list or tuple).

petl.recordmap(table, recmapper, fields, failonerror=False)

Transform records via an arbitrary function. E.g.:

>>> from petl import recordmap, look
>>> table1 = [['id', 'sex', 'age', 'height', 'weight'],
...           [1, 'male', 16, 1.45, 62.0],
...           [2, 'female', 19, 1.34, 55.4],
...           [3, 'female', 17, 1.78, 74.4],
...           [4, 'male', 21, 1.33, 45.2],
...           [5, '-', 25, 1.65, 51.9]]
>>> def recmapper(rec):
...     transmf = {'male': 'M', 'female': 'F'}
...     return [rec['id'],
...             transmf[rec['sex']] if rec['sex'] in transmf else rec['sex'],
...             rec['age'] * 12,
...             rec['weight'] / rec['height'] ** 2]
... 
>>> table2 = recordmap(table1, recmapper, fields=['subject_id', 'gender', 'age_months', 'bmi'])  
>>> look(table2)
+--------------+----------+--------------+--------------------+
| 'subject_id' | 'gender' | 'age_months' | 'bmi'              |
+==============+==========+==============+====================+
| 1            | 'M'      | 192          | 29.48870392390012  |
+--------------+----------+--------------+--------------------+
| 2            | 'F'      | 228          | 30.8531967030519   |
+--------------+----------+--------------+--------------------+
| 3            | 'F'      | 204          | 23.481883600555488 |
+--------------+----------+--------------+--------------------+
| 4            | 'M'      | 252          | 25.55260331279326  |
+--------------+----------+--------------+--------------------+
| 5            | '-'      | 300          | 19.0633608815427   |
+--------------+----------+--------------+--------------------+

The recmapper function should accept a record (dictionary of data values indexed by field) and return a single row (list or tuple).

petl.rowmapmany(table, rowgenerator, fields, failonerror=False)

Map each input row to any number of output rows via an arbitrary function. E.g.:

>>> from petl import rowmapmany, look    
>>> table1 = [['id', 'sex', 'age', 'height', 'weight'],
...           [1, 'male', 16, 1.45, 62.0],
...           [2, 'female', 19, 1.34, 55.4],
...           [3, '-', 17, 1.78, 74.4],
...           [4, 'male', 21, 1.33]]
>>> def rowgenerator(row):
...     transmf = {'male': 'M', 'female': 'F'}
...     yield [row[0], 'gender', transmf[row[1]] if row[1] in transmf else row[1]]
...     yield [row[0], 'age_months', row[2] * 12]
...     yield [row[0], 'bmi', row[4] / row[3] ** 2]
... 
>>> table2 = rowmapmany(table1, rowgenerator, fields=['subject_id', 'variable', 'value'])  
>>> look(table2)
+--------------+--------------+--------------------+
| 'subject_id' | 'variable'   | 'value'            |
+==============+==============+====================+
| 1            | 'gender'     | 'M'                |
+--------------+--------------+--------------------+
| 1            | 'age_months' | 192                |
+--------------+--------------+--------------------+
| 1            | 'bmi'        | 29.48870392390012  |
+--------------+--------------+--------------------+
| 2            | 'gender'     | 'F'                |
+--------------+--------------+--------------------+
| 2            | 'age_months' | 228                |
+--------------+--------------+--------------------+
| 2            | 'bmi'        | 30.8531967030519   |
+--------------+--------------+--------------------+
| 3            | 'gender'     | '-'                |
+--------------+--------------+--------------------+
| 3            | 'age_months' | 204                |
+--------------+--------------+--------------------+
| 3            | 'bmi'        | 23.481883600555488 |
+--------------+--------------+--------------------+
| 4            | 'gender'     | 'M'                |
+--------------+--------------+--------------------+

The rowgenerator function should accept a row (list or tuple) and yield zero or more rows (lists or tuples).

See also the melt() function.

petl.recordmapmany(table, rowgenerator, fields, failonerror=False)

Map each input row (as a record) to any number of output rows via an arbitrary function. E.g.:

>>> from petl import recordmapmany, look    
>>> table1 = [['id', 'sex', 'age', 'height', 'weight'],
...           [1, 'male', 16, 1.45, 62.0],
...           [2, 'female', 19, 1.34, 55.4],
...           [3, '-', 17, 1.78, 74.4],
...           [4, 'male', 21, 1.33]]
>>> def rowgenerator(rec):
...     transmf = {'male': 'M', 'female': 'F'}
...     yield [rec['id'], 'gender', transmf[rec['sex']] if rec['sex'] in transmf else rec['sex']]
...     yield [rec['id'], 'age_months', rec['age'] * 12]
...     yield [rec['id'], 'bmi', rec['weight'] / rec['height'] ** 2]
... 
>>> table2 = recordmapmany(table1, rowgenerator, fields=['subject_id', 'variable', 'value'])  
>>> look(table2)
+--------------+--------------+--------------------+
| 'subject_id' | 'variable'   | 'value'            |
+==============+==============+====================+
| 1            | 'gender'     | 'M'                |
+--------------+--------------+--------------------+
| 1            | 'age_months' | 192                |
+--------------+--------------+--------------------+
| 1            | 'bmi'        | 29.48870392390012  |
+--------------+--------------+--------------------+
| 2            | 'gender'     | 'F'                |
+--------------+--------------+--------------------+
| 2            | 'age_months' | 228                |
+--------------+--------------+--------------------+
| 2            | 'bmi'        | 30.8531967030519   |
+--------------+--------------+--------------------+
| 3            | 'gender'     | '-'                |
+--------------+--------------+--------------------+
| 3            | 'age_months' | 204                |
+--------------+--------------+--------------------+
| 3            | 'bmi'        | 23.481883600555488 |
+--------------+--------------+--------------------+
| 4            | 'gender'     | 'M'                |
+--------------+--------------+--------------------+

The rowgenerator function should accept a record (dictionary of data values indexed by field) and yield zero or more rows (lists or tuples).

See also the melt() function.

petl.cat(*tables, **kwargs)

Concatenate data from two or more tables. Note that the tables do not need to share exactly the same fields; any missing fields will be padded with None (or whatever is provided via the missing keyword argument). E.g.:

>>> from petl import look, cat    
>>> table1 = [['foo', 'bar'],
...           [1, 'A'],
...           [2, 'B']]
>>> table2 = [['bar', 'baz'],
...           ['C', True],
...           ['D', False]]
>>> table3 = cat(table1, table2)
>>> look(table3)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 1     | 'A'   | None  |
+-------+-------+-------+
| 2     | 'B'   | None  |
+-------+-------+-------+
| None  | 'C'   | True  |
+-------+-------+-------+
| None  | 'D'   | False |
+-------+-------+-------+
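
E.g., a sketch overriding the pad value via the missing keyword argument (illustrative):

>>> look(cat(table1, table2, missing='?'))
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 1     | 'A'   | '?'   |
+-------+-------+-------+
| 2     | 'B'   | '?'   |
+-------+-------+-------+
| '?'   | 'C'   | True  |
+-------+-------+-------+
| '?'   | 'D'   | False |
+-------+-------+-------+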

This function can also be used to square up a table with uneven rows, e.g.:

>>> table = [['foo', 'bar', 'baz'],
...          ['A', 1, 2],
...          ['B', '2', '3.4'],
...          [u'B', u'3', u'7.8', True],
...          ['D', 'xyz', 9.0],
...          ['E', None]]
>>> look(cat(table))
+-------+-------+--------+
| 'foo' | 'bar' | 'baz'  |
+=======+=======+========+
| 'A'   | 1     | 2      |
+-------+-------+--------+
| 'B'   | '2'   | '3.4'  |
+-------+-------+--------+
| u'B'  | u'3'  | u'7.8' |
+-------+-------+--------+
| 'D'   | 'xyz' | 9.0    |
+-------+-------+--------+
| 'E'   | None  | None   |
+-------+-------+--------+

petl.duplicates(table, key, presorted=False, buffersize=None)

Select rows with duplicate values under a given key. E.g.:

>>> from petl import duplicates, look    
>>> table1 = [['foo', 'bar', 'baz'],
...           ['A', 1, 2.0],
...           ['B', 2, 3.4],
...           ['D', 6, 9.3],
...           ['B', 3, 7.8],
...           ['B', 2, 12.3],
...           ['E', None, 1.3],
...           ['D', 4, 14.5]]
>>> table2 = duplicates(table1, 'foo')
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'B'   | 2     | 3.4   |
+-------+-------+-------+
| 'B'   | 3     | 7.8   |
+-------+-------+-------+
| 'B'   | 2     | 12.3  |
+-------+-------+-------+
| 'D'   | 6     | 9.3   |
+-------+-------+-------+
| 'D'   | 4     | 14.5  |
+-------+-------+-------+

Compound keys are supported, e.g.:

>>> table3 = duplicates(table1, key=['foo', 'bar'])
>>> look(table3)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'B'   | 2     | 3.4   |
+-------+-------+-------+
| 'B'   | 2     | 12.3  |
+-------+-------+-------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.conflicts(table, key, missing=None, presorted=False, buffersize=None)

Select rows with the same key value but differing in some other field. E.g.:

>>> from petl import conflicts, look    
>>> table1 = [['foo', 'bar', 'baz'],
...           ['A', 1, 2.7],
...           ['B', 2, None],
...           ['D', 3, 9.4],
...           ['B', None, 7.8],
...           ['E', None],
...           ['D', 3, 12.3],
...           ['A', 2, None]]
>>> table2 = conflicts(table1, 'foo')
>>> look(table2)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'A'   | 1     | 2.7   |
+-------+-------+-------+
| 'A'   | 2     | None  |
+-------+-------+-------+
| 'D'   | 3     | 9.4   |
+-------+-------+-------+
| 'D'   | 3     | 12.3  |
+-------+-------+-------+

Missing values are not considered conflicts. By default, None is treated as the missing value; this can be changed via the missing keyword argument.
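
E.g., a sketch where '-' is given as the missing value, so a '-' does not conflict with any other value (illustrative):

>>> table3 = [['foo', 'bar'],
...           ['A', 1],
...           ['A', '-'],
...           ['B', 2],
...           ['B', 3]]
>>> look(conflicts(table3, 'foo', missing='-'))
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'B'   | 2     |
+-------+-------+
| 'B'   | 3     |
+-------+-------+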

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.sort(table, key=None, reverse=False, buffersize=None)

Sort the table. E.g.:

>>> from petl import sort, look
>>> table1 = [['foo', 'bar'],
...           ['C', 2],
...           ['A', 9],
...           ['A', 6],
...           ['F', 1],
...           ['D', 10]]
>>> table2 = sort(table1, 'foo')
>>> look(table2)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'A'   | 9     |
+-------+-------+
| 'A'   | 6     |
+-------+-------+
| 'C'   | 2     |
+-------+-------+
| 'D'   | 10    |
+-------+-------+
| 'F'   | 1     |
+-------+-------+

Sorting by compound key is supported, e.g.:

>>> table3 = sort(table1, key=['foo', 'bar'])
>>> look(table3)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'A'   | 6     |
+-------+-------+
| 'A'   | 9     |
+-------+-------+
| 'C'   | 2     |
+-------+-------+
| 'D'   | 10    |
+-------+-------+
| 'F'   | 1     |
+-------+-------+

Field names or indices (from zero) can be used to specify the key.
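
E.g., a sketch using a field index as the key (illustrative):

>>> table5 = sort(table1, key=1)
>>> look(table5)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'F'   | 1     |
+-------+-------+
| 'C'   | 2     |
+-------+-------+
| 'A'   | 6     |
+-------+-------+
| 'A'   | 9     |
+-------+-------+
| 'D'   | 10    |
+-------+-------+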

If no key is specified, the default is a lexical sort, e.g.:

>>> table4 = sort(table1)
>>> look(table4)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'A'   | 6     |
+-------+-------+
| 'A'   | 9     |
+-------+-------+
| 'C'   | 2     |
+-------+-------+
| 'D'   | 10    |
+-------+-------+
| 'F'   | 1     |
+-------+-------+

The buffersize argument should be an int or None.

If the number of rows in the table is less than buffersize, the table will be sorted in memory. Otherwise, the table is sorted in chunks of no more than buffersize rows, each chunk is written to a temporary file, and then a merge sort is performed on the temporary files.
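
E.g., a sketch forcing an external (file-backed) sort by giving a deliberately small buffersize (value illustrative only; the result is the same as an in-memory sort):

>>> table6 = sort(table1, 'foo', buffersize=2)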

If buffersize is None, the value of petl.transform.defaultbuffersize will be used. By default this is set to 100000 rows, but can be changed, e.g.:

>>> import petl.transform
>>> petl.transform.defaultbuffersize = 500000

If petl.transform.defaultbuffersize is set to None, this forces all sorting to be done entirely in memory.

petl.join(left, right, key=None, presorted=False, buffersize=None)

Perform an equi-join on the given tables. E.g.:

>>> from petl import join, look    
>>> table1 = [['id', 'colour'],
...           [1, 'blue'],
...           [2, 'red'],
...           [3, 'purple']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square'],
...           [4, 'ellipse']]
>>> table3 = join(table1, table2, key='id')
>>> look(table3)
+------+----------+----------+
| 'id' | 'colour' | 'shape'  |
+======+==========+==========+
| 1    | 'blue'   | 'circle' |
+------+----------+----------+
| 3    | 'purple' | 'square' |
+------+----------+----------+

If no key is given, a natural join is tried, e.g.:

>>> table4 = join(table1, table2)
>>> look(table4)
+------+----------+----------+
| 'id' | 'colour' | 'shape'  |
+======+==========+==========+
| 1    | 'blue'   | 'circle' |
+------+----------+----------+
| 3    | 'purple' | 'square' |
+------+----------+----------+

Note behaviour if the key is not unique in either or both tables:

>>> table5 = [['id', 'colour'],
...           [1, 'blue'],
...           [1, 'red'],
...           [2, 'purple']]
>>> table6 = [['id', 'shape'],
...           [1, 'circle'],
...           [1, 'square'],
...           [2, 'ellipse']]
>>> table7 = join(table5, table6, key='id')
>>> look(table7)
+------+----------+-----------+
| 'id' | 'colour' | 'shape'   |
+======+==========+===========+
| 1    | 'blue'   | 'circle'  |
+------+----------+-----------+
| 1    | 'blue'   | 'square'  |
+------+----------+-----------+
| 1    | 'red'    | 'circle'  |
+------+----------+-----------+
| 1    | 'red'    | 'square'  |
+------+----------+-----------+
| 2    | 'purple' | 'ellipse' |
+------+----------+-----------+

Compound keys are supported, e.g.:

>>> table8 = [['id', 'time', 'height'],
...           [1, 1, 12.3],
...           [1, 2, 34.5],
...           [2, 1, 56.7]]
>>> table9 = [['id', 'time', 'weight'],
...           [1, 2, 4.5],
...           [2, 1, 6.7],
...           [2, 2, 8.9]]
>>> table10 = join(table8, table9, key=['id', 'time'])
>>> look(table10)
+------+--------+----------+----------+
| 'id' | 'time' | 'height' | 'weight' |
+======+========+==========+==========+
| 1    | 2      | 34.5     | 4.5      |
+------+--------+----------+----------+
| 2    | 1      | 56.7     | 6.7      |
+------+--------+----------+----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.leftjoin(left, right, key=None, missing=None, presorted=False, buffersize=None)

Perform a left outer join on the given tables. E.g.:

>>> from petl import leftjoin, look
>>> table1 = [['id', 'colour'],
...           [1, 'blue'],
...           [2, 'red'],
...           [3, 'purple']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square'],
...           [4, 'ellipse']]
>>> table3 = leftjoin(table1, table2, key='id')
>>> look(table3)
+------+----------+----------+
| 'id' | 'colour' | 'shape'  |
+======+==========+==========+
| 1    | 'blue'   | 'circle' |
+------+----------+----------+
| 2    | 'red'    | None     |
+------+----------+----------+
| 3    | 'purple' | 'square' |
+------+----------+----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.rightjoin(left, right, key=None, missing=None, presorted=False, buffersize=None)

Perform a right outer join on the given tables. E.g.:

>>> from petl import rightjoin, look
>>> table1 = [['id', 'colour'],
...           [1, 'blue'],
...           [2, 'red'],
...           [3, 'purple']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square'],
...           [4, 'ellipse']]
>>> table3 = rightjoin(table1, table2, key='id')
>>> look(table3)
+------+----------+-----------+
| 'id' | 'colour' | 'shape'   |
+======+==========+===========+
| 1    | 'blue'   | 'circle'  |
+------+----------+-----------+
| 3    | 'purple' | 'square'  |
+------+----------+-----------+
| 4    | None     | 'ellipse' |
+------+----------+-----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.outerjoin(left, right, key=None, missing=None, presorted=False, buffersize=None)

Perform a full outer join on the given tables. E.g.:

>>> from petl import outerjoin, look
>>> table1 = [['id', 'colour'],
...           [1, 'blue'],
...           [2, 'red'],
...           [3, 'purple']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square'],
...           [4, 'ellipse']]
>>> table3 = outerjoin(table1, table2, key='id')
>>> look(table3)
+------+----------+-----------+
| 'id' | 'colour' | 'shape'   |
+======+==========+===========+
| 1    | 'blue'   | 'circle'  |
+------+----------+-----------+
| 2    | 'red'    | None      |
+------+----------+-----------+
| 3    | 'purple' | 'square'  |
+------+----------+-----------+
| 4    | None     | 'ellipse' |
+------+----------+-----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.crossjoin(*tables)

Form the cartesian product of the given tables. E.g.:

>>> from petl import crossjoin, look
>>> table1 = [['id', 'colour'],
...           [1, 'blue'],
...           [2, 'red']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square']]
>>> table3 = crossjoin(table1, table2)
>>> look(table3)
+------+----------+------+----------+
| 'id' | 'colour' | 'id' | 'shape'  |
+======+==========+======+==========+
| 1    | 'blue'   | 1    | 'circle' |
+------+----------+------+----------+
| 1    | 'blue'   | 3    | 'square' |
+------+----------+------+----------+
| 2    | 'red'    | 1    | 'circle' |
+------+----------+------+----------+
| 2    | 'red'    | 3    | 'square' |
+------+----------+------+----------+

petl.antijoin(left, right, key=None, presorted=False, buffersize=None)

Return rows from the left table where the key value does not occur in the right table. E.g.:

>>> from petl import antijoin, look
>>> table1 = [['id', 'colour'],
...           [0, 'black'],
...           [1, 'blue'],
...           [2, 'red'],
...           [4, 'yellow'],
...           [5, 'white']]
>>> table2 = [['id', 'shape'],
...           [1, 'circle'],
...           [3, 'square']]
>>> table3 = antijoin(table1, table2, key='id')
>>> look(table3)
+------+----------+
| 'id' | 'colour' |
+======+==========+
| 0    | 'black'  |
+------+----------+
| 2    | 'red'    |
+------+----------+
| 4    | 'yellow' |
+------+----------+
| 5    | 'white'  |
+------+----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.complement(a, b, presorted=False, buffersize=None)

Return rows in a that are not in b. E.g.:

>>> from petl import complement, look
>>> a = [['foo', 'bar', 'baz'],
...      ['A', 1, True],
...      ['C', 7, False],
...      ['B', 2, False],
...      ['C', 9, True]]
>>> b = [['x', 'y', 'z'],
...      ['B', 2, False],
...      ['A', 9, False],
...      ['B', 3, True],
...      ['C', 9, True]]
>>> aminusb = complement(a, b)
>>> look(aminusb)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'A'   | 1     | True  |
+-------+-------+-------+
| 'C'   | 7     | False |
+-------+-------+-------+

>>> bminusa = complement(b, a)
>>> look(bminusa)
+-----+-----+-------+
| 'x' | 'y' | 'z'   |
+=====+=====+=======+
| 'A' | 9   | False |
+-----+-----+-------+
| 'B' | 3   | True  |
+-----+-----+-------+

Note that the field names of each table are ignored; rows are simply compared following a lexical sort. See also the recordcomplement() function.

If presorted is True, it is assumed that the data are already sorted, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.diff(a, b, presorted=False, buffersize=None)

Find the difference between rows in two tables. Returns a pair of tables, e.g.:

>>> from petl import diff, look
>>> a = [['foo', 'bar', 'baz'],
...      ['A', 1, True],
...      ['C', 7, False],
...      ['B', 2, False],
...      ['C', 9, True]]
>>> b = [['x', 'y', 'z'],
...      ['B', 2, False],
...      ['A', 9, False],
...      ['B', 3, True],
...      ['C', 9, True]]
>>> added, subtracted = diff(a, b)
>>> # rows in b not in a
... look(added)
+-----+-----+-------+
| 'x' | 'y' | 'z'   |
+=====+=====+=======+
| 'A' | 9   | False |
+-----+-----+-------+
| 'B' | 3   | True  |
+-----+-----+-------+

>>> # rows in a not in b
... look(subtracted)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'A'   | 1     | True  |
+-------+-------+-------+
| 'C'   | 7     | False |
+-------+-------+-------+

Convenient shorthand for (complement(b, a), complement(a, b)). See also complement().

If presorted is True, it is assumed that the data are already sorted, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.recordcomplement(a, b, buffersize=None)

Find records in a that are not in b. E.g.:

>>> from petl import recordcomplement, look
>>> tablea = (('foo', 'bar', 'baz'),
...           ('A', 1, True),
...           ('C', 7, False),
...           ('B', 2, False),
...           ('C', 9, True))
>>> tableb = (('bar', 'foo', 'baz'),
...           (2, 'B', False),
...           (9, 'A', False),
...           (3, 'B', True),
...           (9, 'C', True))
>>> aminusb = recordcomplement(tablea, tableb)
>>> look(aminusb)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'A'   | 1     | True  |
+-------+-------+-------+
| 'C'   | 7     | False |
+-------+-------+-------+

>>> bminusa = recordcomplement(tableb, tablea)
>>> look(bminusa)
+-------+-------+-------+
| 'bar' | 'foo' | 'baz' |
+=======+=======+=======+
| 3     | 'B'   | True  |
+-------+-------+-------+
| 9     | 'A'   | False |
+-------+-------+-------+

Note that both tables must have the same set of fields, but that the order of the fields does not matter. See also the complement() function.

See also the discussion of the buffersize argument under the sort() function.

New in version 0.3.

petl.recorddiff(a, b, buffersize=None)

Find the difference between records in two tables. E.g.:

>>> from petl import recorddiff, look    
>>> tablea = (('foo', 'bar', 'baz'),
...           ('A', 1, True),
...           ('C', 7, False),
...           ('B', 2, False),
...           ('C', 9, True))
>>> tableb = (('bar', 'foo', 'baz'),
...           (2, 'B', False),
...           (9, 'A', False),
...           (3, 'B', True),
...           (9, 'C', True))
>>> added, subtracted = recorddiff(tablea, tableb)
>>> look(added)
+-------+-------+-------+
| 'bar' | 'foo' | 'baz' |
+=======+=======+=======+
| 3     | 'B'   | True  |
+-------+-------+-------+
| 9     | 'A'   | False |
+-------+-------+-------+

>>> look(subtracted)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'A'   | 1     | True  |
+-------+-------+-------+
| 'C'   | 7     | False |
+-------+-------+-------+

Convenient shorthand for (recordcomplement(b, a), recordcomplement(a, b)). See also recordcomplement().

See also the discussion of the buffersize argument under the sort() function.

New in version 0.3.

petl.intersection(a, b, presorted=False, buffersize=None)

Return rows in a that are also in b. E.g.:

>>> from petl import intersection, look
>>> table1 = (('foo', 'bar', 'baz'),
...           ('A', 1, True),
...           ('C', 7, False),
...           ('B', 2, False),
...           ('C', 9, True))
>>> table2 = (('x', 'y', 'z'),
...           ('B', 2, False),
...           ('A', 9, False),
...           ('B', 3, True),
...           ('C', 9, True))
>>> table3 = intersection(table1, table2)
>>> look(table3)
+-------+-------+-------+
| 'foo' | 'bar' | 'baz' |
+=======+=======+=======+
| 'B'   | 2     | False |
+-------+-------+-------+
| 'C'   | 9     | True  |
+-------+-------+-------+

If presorted is True, it is assumed that the data are already sorted, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.aggregate(table, key, aggregators=None, failonerror=False, errorvalue=None, presorted=False, buffersize=None)

Group rows under the given key then apply aggregation functions. E.g.:

>>> from petl import aggregate, look
>>> from collections import OrderedDict
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4],
...           ['d', 3],
...           ['d'],
...           ['e']]
>>> aggregators = OrderedDict()
>>> aggregators['minbar'] = 'bar', min
>>> aggregators['maxbar'] = 'bar', max
>>> aggregators['sumbar'] = 'bar', sum
>>> aggregators['listbar'] = 'bar', list
>>> table2 = aggregate(table1, 'foo', aggregators)
>>> look(table2)
+-------+----------+----------+----------+-----------+
| 'foo' | 'minbar' | 'maxbar' | 'sumbar' | 'listbar' |
+=======+==========+==========+==========+===========+
| 'a'   | 3        | 7        | 10       | [3, 7]    |
+-------+----------+----------+----------+-----------+
| 'b'   | 1        | 9        | 12       | [2, 1, 9] |
+-------+----------+----------+----------+-----------+
| 'c'   | 4        | 4        | 4        | [4]       |
+-------+----------+----------+----------+-----------+
| 'd'   | 3        | 3        | 3        | [3]       |
+-------+----------+----------+----------+-----------+
| 'e'   | None     | None     | 0        | []        |
+-------+----------+----------+----------+-----------+

Aggregation functions can also be added and/or updated using the suffix notation on the returned table object, e.g.:

>>> table3 = aggregate(table1, 'foo')
>>> table3['minbar'] = 'bar', min
>>> table3['maxbar'] = 'bar', max
>>> table3['sumbar'] = 'bar', sum
>>> table3['listbar'] = 'bar' # default aggregation is list
>>> look(table3)
+-------+----------+----------+----------+-----------+
| 'foo' | 'minbar' | 'maxbar' | 'sumbar' | 'listbar' |
+=======+==========+==========+==========+===========+
| 'a'   | 3        | 7        | 10       | [3, 7]    |
+-------+----------+----------+----------+-----------+
| 'b'   | 1        | 9        | 12       | [2, 1, 9] |
+-------+----------+----------+----------+-----------+
| 'c'   | 4        | 4        | 4        | [4]       |
+-------+----------+----------+----------+-----------+
| 'd'   | 3        | 3        | 3        | [3]       |
+-------+----------+----------+----------+-----------+
| 'e'   | None     | None     | 0        | []        |
+-------+----------+----------+----------+-----------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.rangeaggregate(table, key, width, aggregators=None, minv=None, maxv=None, failonerror=False, errorvalue=None, presorted=False, buffersize=None)

Group rows into bins then apply aggregation functions. E.g.:

>>> from petl import rangeaggregate, look
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4],
...           ['d', 3]]
>>> table2 = rangeaggregate(table1, 'bar', width=2)
>>> table2['foocount'] = 'foo', len
>>> table2['foolist'] = 'foo' # default is list
>>> look(table2)
+---------+------------+-----------------+
| 'bar'   | 'foocount' | 'foolist'       |
+=========+============+=================+
| (1, 3)  | 2          | ['b', 'b']      |
+---------+------------+-----------------+
| (3, 5)  | 3          | ['a', 'd', 'c'] |
+---------+------------+-----------------+
| (5, 7)  | 0          | []              |
+---------+------------+-----------------+
| (7, 9)  | 1          | ['a']           |
+---------+------------+-----------------+
| (9, 11) | 1          | ['b']           |
+---------+------------+-----------------+
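
The range of values to be binned can presumably be fixed explicitly via the minv and maxv arguments; a sketch (bin edges illustrative, output not shown):

>>> table3 = rangeaggregate(table1, 'bar', width=2, minv=0, maxv=10)
>>> table3['foocount'] = 'foo', len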

petl.rangecounts(table, key, width, minv=None, maxv=None, presorted=False, buffersize=None)

Group rows into bins then count the number of rows in each bin. E.g.:

>>> from petl import rangecounts, look
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4],
...           ['d', 3]]
>>> table2 = rangecounts(table1, 'bar', width=2)
>>> look(table2)
+---------+---------+
| 'range' | 'count' |
+=========+=========+
| (1, 3)  | 2       |
+---------+---------+
| (3, 5)  | 3       |
+---------+---------+
| (5, 7)  | 0       |
+---------+---------+
| (7, 9)  | 1       |
+---------+---------+
| (9, 11) | 1       |
+---------+---------+

petl.rowreduce(table, key, reducer, fields=None, presorted=False, buffersize=None)

Reduce rows grouped under the given key via an arbitrary function. E.g.:

>>> from petl import rowreduce, look    
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4]]
>>> def sumbar(key, rows):
...     return [key, sum([row[1] for row in rows])]
... 
>>> table2 = rowreduce(table1, key='foo', reducer=sumbar, fields=['foo', 'barsum'])
>>> look(table2)
+-------+----------+
| 'foo' | 'barsum' |
+=======+==========+
| 'a'   | 10       |
+-------+----------+
| 'b'   | 12       |
+-------+----------+
| 'c'   | 4        |
+-------+----------+

The reducer function should accept two arguments, a key and a sequence of rows, and return a single row.

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.recordreduce(table, key, reducer, fields=None, presorted=False, buffersize=None)

Reduce records grouped under the given key via an arbitrary function. E.g.:

>>> from petl import recordreduce, look    
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4]]
>>> def sumbar(key, records):
...     return [key, sum([rec['bar'] for rec in records])]
... 
>>> table2 = recordreduce(table1, key='foo', reducer=sumbar, fields=['foo', 'barsum'])
>>> look(table2)
+-------+----------+
| 'foo' | 'barsum' |
+=======+==========+
| 'a'   | 10       |
+-------+----------+
| 'b'   | 12       |
+-------+----------+
| 'c'   | 4        |
+-------+----------+

The reducer function should accept two arguments, a key and a sequence of records (i.e., dictionaries of values indexed by field) and return a single row.

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.rangerowreduce(table, key, width, reducer, fields=None, minv=None, maxv=None, failonerror=False, presorted=False, buffersize=None)

Reduce rows grouped into bins under the given key via an arbitrary function. E.g.:

>>> from petl import rangerowreduce, look
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4]]
>>> def redu(minv, maxv, rows):
...     return [minv, maxv, ''.join([row[0] for row in rows])]
... 
>>> table2 = rangerowreduce(table1, 'bar', 2, reducer=redu, fields=['frombar', 'tobar', 'foos'])
>>> look(table2)
+-----------+---------+--------+
| 'frombar' | 'tobar' | 'foos' |
+===========+=========+========+
| 1         | 3       | 'bb'   |
+-----------+---------+--------+
| 3         | 5       | 'ac'   |
+-----------+---------+--------+
| 5         | 7       | ''     |
+-----------+---------+--------+
| 7         | 9       | 'a'    |
+-----------+---------+--------+
| 9         | 11      | 'b'    |
+-----------+---------+--------+

petl.rangerecordreduce(table, key, width, reducer, fields=None, minv=None, maxv=None, failonerror=False, presorted=False, buffersize=None)

Reduce records grouped into bins under the given key via an arbitrary function. E.g.:

>>> from petl import rangerecordreduce, look    
>>> table1 = [['foo', 'bar'],
...           ['a', 3],
...           ['a', 7],
...           ['b', 2],
...           ['b', 1],
...           ['b', 9],
...           ['c', 4]]
>>> def redu(minv, maxv, recs):
...     return [minv, maxv, ''.join([rec['foo'] for rec in recs])]
... 
>>> table2 = rangerecordreduce(table1, 'bar', 2, reducer=redu, fields=['frombar', 'tobar', 'foos'])
>>> look(table2)
+-----------+---------+--------+
| 'frombar' | 'tobar' | 'foos' |
+===========+=========+========+
| 1         | 3       | 'bb'   |
+-----------+---------+--------+
| 3         | 5       | 'ac'   |
+-----------+---------+--------+
| 5         | 7       | ''     |
+-----------+---------+--------+
| 7         | 9       | 'a'    |
+-----------+---------+--------+
| 9         | 11      | 'b'    |
+-----------+---------+--------+

petl.mergereduce(table, key, missing=None, presorted=False, buffersize=None)

Merge rows under the given key. E.g.:

>>> from petl import mergereduce, look    
>>> table1 = [['foo', 'bar', 'baz'],
...           ['A', 1, 2.7],
...           ['B', 2, None],
...           ['D', 3, 9.4],
...           ['B', None, 7.8],
...           ['E', None],
...           ['D', 3, 12.3],
...           ['A', 2, None]]
>>> table2 = mergereduce(table1, 'foo')
>>> look(table2)
+-------+--------+-------------+
| 'foo' | 'bar'  | 'baz'       |
+=======+========+=============+
| 'A'   | (1, 2) | 2.7         |
+-------+--------+-------------+
| 'B'   | 2      | 7.8         |
+-------+--------+-------------+
| 'D'   | 3      | (9.4, 12.3) |
+-------+--------+-------------+
| 'E'   | None   |             |
+-------+--------+-------------+

Missing values are overridden by non-missing values. Conflicting values are reported as a tuple.

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

Changed in version 0.3.

Previously conflicts were reported as a list; this changed to a tuple in version 0.3.

petl.merge(*tables, **kwargs)

Convenience function to concatenate multiple tables (via cat()) then reduce rows by merging under the given key (via mergereduce()). E.g.:

>>> from petl import look, merge
>>> table1 = [['foo', 'bar', 'baz'],
...           [1, 'A', True],
...           [2, 'B', None],
...           [4, 'C', True]]
>>> table2 = [['bar', 'baz', 'quux'],
...           ['A', True, 42.0],
...           ['B', False, 79.3],
...           ['C', False, 12.4]]
>>> table3 = merge(table1, table2, key='bar')
>>> look(table3)
+-------+-------+---------------+--------+
| 'foo' | 'bar' | 'baz'         | 'quux' |
+=======+=======+===============+========+
| 1     | 'A'   | True          | 42.0   |
+-------+-------+---------------+--------+
| 2     | 'B'   | False         | 79.3   |
+-------+-------+---------------+--------+
| 4     | 'C'   | (False, True) | 12.4   |
+-------+-------+---------------+--------+

If presorted is True, it is assumed that the data are already sorted by the given key, and the buffersize argument is ignored. Otherwise, the data are sorted; see also the discussion of the buffersize argument under the sort() function.

petl.melt(table, key=None, variables=None, variablefield='variable', valuefield='value')

Reshape a table, melting fields into data. E.g.:

>>> from petl import melt, look
>>> table1 = [['id', 'gender', 'age'],
...           [1, 'F', 12],
...           [2, 'M', 17],
...           [3, 'M', 16]]
>>> table2 = melt(table1, 'id')
>>> look(table2)
+------+------------+---------+
| 'id' | 'variable' | 'value' |
+======+============+=========+
| 1    | 'gender'   | 'F'     |
+------+------------+---------+
| 1    | 'age'      | 12      |
+------+------------+---------+
| 2    | 'gender'   | 'M'     |
+------+------------+---------+
| 2    | 'age'      | 17      |
+------+------------+---------+
| 3    | 'gender'   | 'M'     |
+------+------------+---------+
| 3    | 'age'      | 16      |
+------+------------+---------+

Compound keys are supported, e.g.:

>>> table3 = [['id', 'time', 'height', 'weight'],
...           [1, 11, 66.4, 12.2],
...           [2, 16, 53.2, 17.3],
...           [3, 12, 34.5, 9.4]]
>>> table4 = melt(table3, key=['id', 'time'])
>>> look(table4)
+------+--------+------------+---------+
| 'id' | 'time' | 'variable' | 'value' |
+======+========+============+=========+
| 1    | 11     | 'height'   | 66.4    |
+------+--------+------------+---------+
| 1    | 11     | 'weight'   | 12.2    |
+------+--------+------------+---------+
| 2    | 16     | 'height'   | 53.2    |
+------+--------+------------+---------+
| 2    | 16     | 'weight'   | 17.3    |
+------+--------+------------+---------+
| 3    | 12     | 'height'   | 34.5    |
+------+--------+------------+---------+
| 3    | 12     | 'weight'   | 9.4     |
+------+--------+------------+---------+

A subset of variable fields can be selected, e.g.:

>>> table5 = melt(table3, key=['id', 'time'], variables=['height'])    
>>> look(table5)
+------+--------+------------+---------+
| 'id' | 'time' | 'variable' | 'value' |
+======+========+============+=========+
| 1    | 11     | 'height'   | 66.4    |
+------+--------+------------+---------+
| 2    | 16     | 'height'   | 53.2    |
+------+--------+------------+---------+
| 3    | 12     | 'height'   | 34.5    |
+------+--------+------------+---------+
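
The names of the variable and value fields can also be customised, e.g., a sketch (assuming the variablefield and valuefield arguments behave as for recast()):

>>> table6 = melt(table1, 'id', variablefield='key', valuefield='measure')
>>> look(table6)
+------+----------+-----------+
| 'id' | 'key'    | 'measure' |
+======+==========+===========+
| 1    | 'gender' | 'F'       |
+------+----------+-----------+
| 1    | 'age'    | 12        |
+------+----------+-----------+
| 2    | 'gender' | 'M'       |
+------+----------+-----------+
| 2    | 'age'    | 17        |
+------+----------+-----------+
| 3    | 'gender' | 'M'       |
+------+----------+-----------+
| 3    | 'age'    | 16        |
+------+----------+-----------+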

petl.recast(table, key=None, variablefield='variable', valuefield='value', samplesize=1000, reducers=None, missing=None)

Recast molten data. E.g.:

>>> from petl import recast, look
>>> table1 = [['id', 'variable', 'value'],
...           [3, 'age', 16],
...           [1, 'gender', 'F'],
...           [2, 'gender', 'M'],
...           [2, 'age', 17],
...           [1, 'age', 12],
...           [3, 'gender', 'M']]
>>> table2 = recast(table1)
>>> look(table2)
+------+-------+----------+
| 'id' | 'age' | 'gender' |
+======+=======+==========+
| 1    | 12    | 'F'      |
+------+-------+----------+
| 2    | 17    | 'M'      |
+------+-------+----------+
| 3    | 16    | 'M'      |
+------+-------+----------+

If the variable and value fields have names other than the defaults, they can be specified, e.g.:

>>> table3 = [['id', 'vars', 'vals'],
...           [3, 'age', 16],
...           [1, 'gender', 'F'],
...           [2, 'gender', 'M'],
...           [2, 'age', 17],
...           [1, 'age', 12],
...           [3, 'gender', 'M']]
>>> table4 = recast(table3, variablefield='vars', valuefield='vals')
>>> look(table4)
+------+-------+----------+
| 'id' | 'age' | 'gender' |
+======+=======+==========+
| 1    | 12    | 'F'      |
+------+-------+----------+
| 2    | 17    | 'M'      |
+------+-------+----------+
| 3    | 16    | 'M'      |
+------+-------+----------+

If there are multiple values for each key/variable pair, and no reducers are provided, then all values will be listed. E.g.:

>>> table6 = [['id', 'time', 'variable', 'value'],
...           [1, 11, 'weight', 66.4],
...           [1, 14, 'weight', 55.2],
...           [2, 12, 'weight', 53.2],
...           [2, 16, 'weight', 43.3],
...           [3, 12, 'weight', 34.5],
...           [3, 17, 'weight', 49.4]]
>>> table7 = recast(table6, key='id')
>>> look(table7)
+------+--------------+
| 'id' | 'weight'     |
+======+==============+
| 1    | [66.4, 55.2] |
+------+--------------+
| 2    | [53.2, 43.3] |
+------+--------------+
| 3    | [34.5, 49.4] |
+------+--------------+

Multiple values can be reduced via an aggregation function, e.g.:

>>> def mean(values):
...     return float(sum(values)) / len(values)
... 
>>> table8 = recast(table6, key='id', reducers={'weight': mean})
>>> look(table8)    
+------+--------------------+
| 'id' | 'weight'           |
+======+====================+
| 1    | 60.800000000000004 |
+------+--------------------+
| 2    | 48.25              |
+------+--------------------+
| 3    | 41.95              |
+------+--------------------+

Missing values are padded with whatever is provided via the missing keyword argument (None by default), e.g.:

>>> table9 = [['id', 'variable', 'value'],
...           [1, 'gender', 'F'],
...           [2, 'age', 17],
...           [1, 'age', 12],
...           [3, 'gender', 'M']]
>>> table10 = recast(table9, key='id')
>>> look(table10)
+------+-------+----------+
| 'id' | 'age' | 'gender' |
+======+=======+==========+
| 1    | 12    | 'F'      |
+------+-------+----------+
| 2    | 17    | None     |
+------+-------+----------+
| 3    | None  | 'M'      |
+------+-------+----------+

Note that the table is scanned once to discover variables, then a second time to reshape the data and recast variables as fields. How many rows are scanned in the first pass is determined by the samplesize argument.
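
E.g., a sketch raising the sample size for a table where some variables might first appear in later rows (value illustrative):

>>> table11 = recast(table9, key='id', samplesize=10000)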

petl.transpose(table)

Transpose rows into columns. E.g.:

>>> from petl import transpose, look    
>>> table1 = (('id', 'colour'),
...           (1, 'blue'),
...           (2, 'red'),
...           (3, 'purple'),
...           (5, 'yellow'),
...           (7, 'orange'))
>>> table2 = transpose(table1)
>>> look(table2)
+----------+--------+-------+----------+----------+----------+
| 'id'     | 1      | 2     | 3        | 5        | 7        |
+==========+========+=======+==========+==========+==========+
| 'colour' | 'blue' | 'red' | 'purple' | 'yellow' | 'orange' |
+----------+--------+-------+----------+----------+----------+

petl.pivot(table, f1, f2, f3, aggfun, presorted=False, buffersize=None, missing=None)

Construct a pivot table. E.g.:

>>> from petl import pivot, look
>>> table1 = (('region', 'gender', 'style', 'units'),
...           ('east', 'boy', 'tee', 12),
...           ('east', 'boy', 'golf', 14),
...           ('east', 'boy', 'fancy', 7),
...           ('east', 'girl', 'tee', 3),
...           ('east', 'girl', 'golf', 8),
...           ('east', 'girl', 'fancy', 18),
...           ('west', 'boy', 'tee', 12),
...           ('west', 'boy', 'golf', 15),
...           ('west', 'boy', 'fancy', 8),
...           ('west', 'girl', 'tee', 6),
...           ('west', 'girl', 'golf', 16),
...           ('west', 'girl', 'fancy', 1))
>>> table2 = pivot(table1, 'region', 'gender', 'units', sum)
>>> look(table2)
+----------+-------+--------+
| 'region' | 'boy' | 'girl' |
+==========+=======+========+
| 'east'   | 33    | 29     |
+----------+-------+--------+
| 'west'   | 35    | 23     |
+----------+-------+--------+

>>> table3 = pivot(table1, 'region', 'style', 'units', sum)
>>> look(table3)
+----------+---------+--------+-------+
| 'region' | 'fancy' | 'golf' | 'tee' |
+==========+=========+========+=======+
| 'east'   | 25      | 22     | 15    |
+----------+---------+--------+-------+
| 'west'   | 9       | 31     | 18    |
+----------+---------+--------+-------+

>>> table4 = pivot(table1, 'gender', 'style', 'units', sum)
>>> look(table4)
+----------+---------+--------+-------+
| 'gender' | 'fancy' | 'golf' | 'tee' |
+==========+=========+========+=======+
| 'boy'    | 15      | 29     | 24    |
+----------+---------+--------+-------+
| 'girl'   | 19      | 24     | 9     |
+----------+---------+--------+-------+
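
Where a combination of values is absent, the cell is presumably padded with whatever is given via the missing keyword argument; a sketch (illustrative):

>>> table5 = (('region', 'gender', 'style', 'units'),
...           ('east', 'boy', 'tee', 12),
...           ('west', 'girl', 'golf', 16))
>>> table6 = pivot(table5, 'region', 'gender', 'units', sum, missing=0)
>>> look(table6)
+----------+-------+--------+
| 'region' | 'boy' | 'girl' |
+==========+=======+========+
| 'east'   | 12    | 0      |
+----------+-------+--------+
| 'west'   | 0     | 16     |
+----------+-------+--------+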

See also recast().

Loading (writing) tables to files and databases

petl.tocsv(table, filename, **kwargs)

Write the table to a CSV file. E.g.:

>>> from petl import tocsv
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> tocsv(table, 'test.csv', delimiter='\t')
>>> # look what it did
... from petl import look, fromcsv
>>> look(fromcsv('test.csv', delimiter='\t'))
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '1'   |
+-------+-------+
| 'b'   | '2'   |
+-------+-------+
| 'c'   | '2'   |
+-------+-------+

Note that if a file already exists at the given location, it will be overwritten.

petl.appendcsv(table, filename, **kwargs)

Append data rows to an existing CSV file. E.g.:

>>> # look at an existing CSV file
... from petl import look, fromcsv
>>> testcsv = fromcsv('test.csv', delimiter='\t')
>>> look(testcsv)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '1'   |
+-------+-------+
| 'b'   | '2'   |
+-------+-------+
| 'c'   | '2'   |
+-------+-------+

>>> # append some data
... from petl import appendcsv 
>>> table = [['foo', 'bar'],
...          ['d', 7],
...          ['e', 42],
...          ['f', 12]]
>>> appendcsv(table, 'test.csv', delimiter='\t')
>>> # look what it did
... look(testcsv)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | '1'   |
+-------+-------+
| 'b'   | '2'   |
+-------+-------+
| 'c'   | '2'   |
+-------+-------+
| 'd'   | '7'   |
+-------+-------+
| 'e'   | '42'  |
+-------+-------+
| 'f'   | '12'  |
+-------+-------+

Note that no attempt is made to check that the fields or row lengths are consistent with the existing data; the data rows from the table are simply appended to the file. See also the cat() function.
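
Since no consistency check is performed, one defensive pattern is to project the new rows onto the fields already present in the file before appending. A sketch, assuming the header() and cut() functions provided elsewhere in this package:

>>> from petl import fromcsv, header, cut, appendcsv
>>> flds = header(fromcsv('test.csv', delimiter='\t'))
>>> appendcsv(cut(table, *flds), 'test.csv', delimiter='\t')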

petl.topickle(table, filename, protocol=-1)

Write the table to a pickle file. E.g.:

>>> from petl import topickle
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> topickle(table, 'test.dat')
>>> # look what it did
... from petl import look, frompickle
>>> look(frompickle('test.dat'))
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 2     |
+-------+-------+

Note that if a file already exists at the given location, it will be overwritten.

The pickle file format preserves type information, i.e., reading and writing is round-trippable.

petl.appendpickle(table, filename, protocol=-1)

Append data to an existing pickle file. E.g.:

>>> # inspect an existing pickle file
... from petl import look, frompickle
>>> testdat = frompickle('test.dat')
>>> look(testdat)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 2     |
+-------+-------+

>>> # append some data
... from petl import appendpickle
>>> table = [['foo', 'bar'],
...          ['d', 7],
...          ['e', 42],
...          ['f', 12]]
>>> appendpickle(table, 'test.dat')
>>> # look what it did
... look(testdat)
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| 'a'   | 1     |
+-------+-------+
| 'b'   | 2     |
+-------+-------+
| 'c'   | 2     |
+-------+-------+
| 'd'   | 7     |
+-------+-------+
| 'e'   | 42    |
+-------+-------+
| 'f'   | 12    |
+-------+-------+

Note that no attempt is made to check that the fields or row lengths are consistent with the existing data; the data rows from the table are simply appended to the file. See also the cat() function.

petl.tosqlite3(table, filename, tablename, create=True)

Load data into a table in an sqlite3 database. Note that if the database table exists, it will be truncated, i.e., all existing rows will be deleted prior to inserting the new data. E.g.:

>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import tosqlite3
>>> # by default, if the table does not already exist, it will be created
... tosqlite3(table, 'test.db', 'foobar')
>>> # look what it did
... from petl import look, fromsqlite3
>>> look(fromsqlite3('test.db', 'select * from foobar'))
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| u'a'  | 1     |
+-------+-------+
| u'b'  | 2     |
+-------+-------+
| u'c'  | 2     |
+-------+-------+

petl.appendsqlite3(table, filename, tablename)

Load data into an existing table in an sqlite3 database. Note that the new rows will be appended to the table; any existing rows will remain. E.g.:

>>> moredata = [['foo', 'bar'],
...             ['d', 7],
...             ['e', 9],
...             ['f', 1]]
>>> from petl import appendsqlite3
>>> appendsqlite3(moredata, 'test.db', 'foobar') 
>>> # look what it did
... from petl import look, fromsqlite3
>>> look(fromsqlite3('test.db', 'select * from foobar'))
+-------+-------+
| 'foo' | 'bar' |
+=======+=======+
| u'a'  | 1     |
+-------+-------+
| u'b'  | 2     |
+-------+-------+
| u'c'  | 2     |
+-------+-------+
| u'd'  | 7     |
+-------+-------+
| u'e'  | 9     |
+-------+-------+
| u'f'  | 1     |
+-------+-------+

petl.todb(table, connection, tablename, commit=True)

Load data into an existing database table via a DB-API 2.0 connection. Note that the database table will be truncated, i.e., all existing rows will be deleted prior to inserting the new data.

E.g., using sqlite3:

>>> import sqlite3
>>> connection = sqlite3.connect('test.db')
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import todb
>>> # assuming table "foobar" already exists in the database
... todb(table, connection, 'foobar')    

E.g., using psycopg2:

>>> import psycopg2 
>>> connection = psycopg2.connect("dbname=test user=postgres")
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import todb
>>> # assuming table "foobar" already exists in the database
... todb(table, connection, 'foobar')    

E.g., using MySQLdb:

>>> import MySQLdb
>>> connection = MySQLdb.connect(passwd="moonpie", db="thangs")
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import todb
>>> # assuming table "foobar" already exists in the database
... todb(table, connection, 'foobar')    
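
The commit argument can be used to defer committing to the caller; assuming commit=False simply leaves the transaction open on the connection, several loads can then be committed together. A sketch, using sqlite3:

>>> import sqlite3
>>> connection = sqlite3.connect('test.db')
>>> from petl import todb
>>> todb(table, connection, 'foobar', commit=False)
>>> # ... other work on the same connection ...
... connection.commit()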

petl.appenddb(table, connection, tablename, commit=True)

Load data into an existing database table via a DB-API 2.0 connection. Note that the new rows will be appended to the table; any existing rows will remain.

E.g., using sqlite3:

>>> import sqlite3
>>> connection = sqlite3.connect('test.db')
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import appenddb
>>> # assuming table "foobar" already exists in the database
... appenddb(table, connection, 'foobar')    

E.g., using psycopg2:

>>> import psycopg2 
>>> connection = psycopg2.connect("dbname=test user=postgres")
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import appenddb
>>> # assuming table "foobar" already exists in the database
... appenddb(table, connection, 'foobar')    

E.g., using MySQLdb:

>>> import MySQLdb
>>> connection = MySQLdb.connect(passwd="moonpie", db="thangs")
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> from petl import appenddb
>>> # assuming table "foobar" already exists in the database
... appenddb(table, connection, 'foobar')    

petl.totext(table, filename, template, prologue=None, epilogue=None)

Write the table to a text file. E.g.:

>>> from petl import totext    
>>> table = [['foo', 'bar'],
...          ['a', 1],
...          ['b', 2],
...          ['c', 2]]
>>> prologue = """{| class="wikitable"
... |-
... ! foo
... ! bar
... """
>>> template = """|-
... | {foo}
... | {bar}
... """
>>> epilogue = "|}"
>>> totext(table, 'test.txt', template, prologue, epilogue)
>>> 
>>> # see what we did
... with open('test.txt') as f:
...     print f.read()
...     
{| class="wikitable"
|-
! foo
! bar
|-
| a
| 1
|-
| b
| 2
|-
| c
| 2
|}

The template is used to format each data row via str.format(), with the data values substituted for the {fieldname} placeholders in the template.
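
The same mechanism can target any text format. A sketch producing a minimal HTML table, using only the documented arguments:

>>> template = '<tr><td>{foo}</td><td>{bar}</td></tr>\n'
>>> totext(table, 'test.html', template,
...        prologue='<table>\n', epilogue='</table>\n')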

petl.appendtext(table, filename, template, prologue=None, epilogue=None)

As totext() but the file is opened in append mode.
