langacore.kit.cache.memoization

Implements a reusable memoization decorator. It uses a finite-size cache with pickled arguments as keys to hold the outcome of a specific function call. When the decorated function is called again with the same arguments, the outcome is fetched from the cache instead of being recalculated.

The cache maintains a least-recently-used ordering of keys so that on overflow only the seemingly least important ones are evicted.
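The following is a minimal sketch of that general idea, not the library's actual implementation; the names _cached_call and MAX_SIZE are illustrative only. Arguments are pickled to form the cache key, and an ordered mapping provides least-recently-used eviction on overflow:

    # Illustrative sketch only -- not the library's implementation.
    import pickle
    from collections import OrderedDict

    MAX_SIZE = 256        # plays the role of the max_size parameter below
    _cache = OrderedDict()

    def _cached_call(func, *args, **kwargs):
        # Pickled arguments serve as the cache key.
        key = pickle.dumps((args, sorted(kwargs.items())))
        if key in _cache:
            result = _cache.pop(key)
            _cache[key] = result          # reinsert: now the most recently used
            return result
        result = func(*args, **kwargs)
        _cache[key] = result
        if len(_cache) > MAX_SIZE:
            _cache.popitem(last=False)    # evict the least recently used key
        return result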

Note

Instead of importing the whole structure, a recommended shortcut is available. Use from langacore.kit.cache import memoize.
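For example, a basic usage sketch (the fib function is hypothetical, and bare @memoize usage is assumed from the func=None default in the signature below):

    from langacore.kit.cache import memoize

    @memoize
    def fib(n):
        # Recursive calls go through the decorated wrapper, so
        # intermediate results are cached as well.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(30)   # computed once and cached
    fib(30)   # fetched from the cache until update_interval elapses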

Functions

memoize(func=None, update_interval=300, max_size=256, skip_first=False, fast_updates=True)

Memoization decorator.

Parameters:
  • update_interval – time in seconds after which the actual function will be called again
  • max_size – maximum number of distinct memoize hashes kept for the function. Can be set to 0 or None; be aware of the possibly inordinate memory usage in that case
  • skip_first – False by default; if True, the first argument to the actual function won’t be added to the memoize hash (see the usage sketch after this list)
  • fast_updates – if True (the default), an optimized LRU algorithm is used where all function invocations except every Nth (where N == sys.maxint) are much faster, but cache overflow is costly. In general, setting fast_updates to True gives a 15% performance boost when there are no cache misses (i.e. the number of distinct argument combinations for the decorated function is smaller than max_size). If cache misses exceed 50%, you might want to increase max_size; if that is not feasible, memoization with fast_updates set to False will perform faster.
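A usage sketch of these parameters (the PriceFeed class and quote method are illustrative only; parameter names follow the signature above):

    import time
    from langacore.kit.cache import memoize

    class PriceFeed(object):
        @memoize(update_interval=60, max_size=1024, skip_first=True)
        def quote(self, symbol):
            # skip_first=True keeps `self` out of the memoize hash, so all
            # PriceFeed instances share the cached result per symbol.
            time.sleep(1)             # stand-in for an expensive lookup
            return len(symbol)

    feed = PriceFeed()
    feed.quote("ACME")   # slow: computed and cached
    feed.quote("ACME")   # fast: served from the cache for the next 60 seconds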