| |
- bgread(stream, blockSizeLimit=65535, pollTime=0.03, closeStream=True)
- bgread - Start a thread which will read from the given stream in a non-blocking fashion, and automatically populate data in the returned object.
@param stream <object> - A stream on which to read. Socket, file, etc.
@param blockSizeLimit <None/int> - Number of bytes. Default 65535.
If None, the stream will be read from until there is no more available data (not closed, but you've read all that's been flushed to the stream). This is okay for smaller datasets, but this number effectively controls the amount of CPU time spent on I/O for this stream versus everything else in your application. The default of 65535 bytes is a fair amount of data.
@param pollTime <float> - Default .03 (30ms) After all available data has been read from the stream, wait this many seconds before checking again for more data.
A low number here means a high priority, i.e. more cycles will be devoted to checking and collecting the background data. Since this is a non-blocking read, this value is the "block": the time during which execution context is returned to the remainder of the application. The default of 30ms should be fine in most cases. If it's really idle data collection, you may want to try a value of 1 second.
@param closeStream <bool> - Default True. If True, the "close" method on the stream object will be called when the other side has closed and all data has been read.
NOTES --
blockSizeLimit / pollTime is your effective max throughput. Real throughput will be lower than this number, as the actual throughput is defined by:
T = (blockSizeLimit / pollTime) - DeviceReadTime(blockSizeLimit)
Using the defaults of .03 and 65535 means you'll read up to about 2 MB per second. Keep in mind that more time spent in I/O means less time spent on other tasks.
@return - The return of this function is a BackgroundReadData object. This object contains an attribute "blocks" which is a list of the non-zero-length blocks that were read from the stream. The object also contains a calculated property, "data", which is a string/bytes (depending on stream mode) of all the data currently read. The property "isFinished" will be set to True when the stream has been closed. The property "error" will be set to any exception that occurs during reading which will terminate the thread. @see BackgroundReadData for more info.
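The background-read pattern described above can be sketched with only the standard library. This is an illustrative mock-up of the mechanism bgread automates, not the library's actual implementation; the class name and internals here are hypothetical, but the `blocks`, `data`, and `isFinished` attributes mirror the documented BackgroundReadData object.

```python
# Minimal sketch of the background-read pattern (illustrative only):
# a thread reads up to blockSizeLimit bytes, sleeps pollTime between
# reads, and closes the stream once the other side has closed.
import os
import threading
import time

class SimpleBackgroundReader:
    def __init__(self, fd, blockSizeLimit=65535, pollTime=0.03):
        self.blocks = []          # non-zero-length blocks read so far
        self.isFinished = False   # set True when the other side closes
        self._fd = fd
        self._blockSizeLimit = blockSizeLimit
        self._pollTime = pollTime
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    @property
    def data(self):
        # Calculated property: all data read so far, joined together.
        return b''.join(self.blocks)

    def _run(self):
        while True:
            chunk = os.read(self._fd, self._blockSizeLimit)
            if chunk == b'':      # EOF: writer closed, all data read
                self.isFinished = True
                os.close(self._fd)   # analogous to closeStream=True
                return
            self.blocks.append(chunk)
            time.sleep(self._pollTime)

# Usage: read from a pipe in the background while doing other work.
readFd, writeFd = os.pipe()
reader = SimpleBackgroundReader(readFd)
os.write(writeFd, b'hello ')
os.write(writeFd, b'world')
os.close(writeFd)
reader._thread.join(timeout=5)
print(reader.data)   # b'hello world'
```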
- bgwrite(fileObj, data, closeWhenFinished=False, chainAfter=None, ioPrio=4)
- bgwrite - Start a background writing process
@param fileObj <stream> - A stream backed by an fd
@param data <str/bytes/list> - The data to write. If a list is given, each successive element will be written to the fileObj and flushed. If a string/bytes is provided, it will be chunked according to the #BackgroundIOPriority chosen. If you would like a different chunking than the chosen ioPrio provides, use #bgwrite_chunk function instead.
Chunking makes the data available sooner on the other side, reduces iowait on this side, and thus increases interactivity (at a penalty to throughput).
@param closeWhenFinished <bool> - If True, the given fileObj will be closed after all the data has been written. Default False.
@param chainAfter <None/BackgroundWriteProcess> - If a BackgroundWriteProcess object is provided (the return of bgwrite* functions), this data will be held for writing until the data associated with the provided object has completed writing.
Use this to queue several background writes, but retain order within the resulting stream.
@return - BackgroundWriteProcess - An object representing the state of this operation. @see BackgroundWriteProcess
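The write side follows the same pattern. Below is a minimal sketch of what a background write amounts to, assuming a list of pre-chunked blocks; it is illustrative only, not the library's implementation, and the `simple_bgwrite` name is hypothetical. The returned thread stands in for the BackgroundWriteProcess object, which the real bgwrite returns and which chainAfter uses to preserve ordering.

```python
# Minimal sketch of the background-write pattern (illustrative only):
# a thread writes each block and flushes it, optionally closing the
# stream when finished, while the caller continues other work.
import io
import threading

def simple_bgwrite(fileObj, dataBlocks, closeWhenFinished=False):
    def _run():
        for block in dataBlocks:   # each element written and flushed in turn
            fileObj.write(block)
            fileObj.flush()
        if closeWhenFinished:
            fileObj.close()
    t = threading.Thread(target=_run)
    t.start()
    return t   # caller can join() or poll, akin to the returned process object

buf = io.StringIO()
t = simple_bgwrite(buf, ['alpha ', 'beta'])
t.join()
print(buf.getvalue())   # 'alpha beta'
```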
- bgwrite_chunk(fileObj, data, chunkSize, closeWhenFinished=False, chainAfter=None, ioPrio=4)
- bgwrite_chunk - Chunk up the data into even #chunkSize blocks, and then pass them on to #bgwrite.
Use this to break up a block of data into smaller segments that can be written and flushed.
The smaller the chunks, the more interactive (recipient gets data quicker, iowait goes down for you) at cost of throughput.
bgwrite will automatically chunk according to the given ioPrio, but you can use this for finer-tuned control.
@see bgwrite
@param data <string/bytes> - The data to chunk up
@param chunkSize <integer> - The max size of each chunk.
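The chunking itself is straightforward; one plausible way to split data into even chunkSize blocks (the final block holding any remainder) looks like this. The helper name is hypothetical, and this is a sketch of the idea rather than the library's code:

```python
# Split data into blocks of at most chunkSize bytes/characters;
# the last block may be shorter if the data doesn't divide evenly.
def chunk_data(data, chunkSize):
    return [data[i:i + chunkSize] for i in range(0, len(data), chunkSize)]

print(chunk_data(b'abcdefgh', 3))   # [b'abc', b'def', b'gh']
print(chunk_data('abcd', 2))        # ['ab', 'cd']
```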
- nonblock_read(stream, limit=None, forceMode=None)
- nonblock_read - Read any data available on the given stream (file, socket, etc) without blocking and regardless of newlines.
@param stream <object> - A stream (like a file object or a socket)
@param limit <None/int> - Max number of bytes to read. If None or 0, will read as much data as is available.
@param forceMode <None/mode string> - Default None. Will be autodetected if None. If you want to explicitly force a mode, provide 'b' for binary (bytes) or 't' for text (str). This determines the return type.
@return <str or bytes depending on stream's mode> - Any data available on the stream, or None if the stream was closed on the other side and all data has already been read.
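The underlying technique for a read that neither blocks nor waits for newlines can be sketched as follows on POSIX systems: temporarily set O_NONBLOCK on the descriptor, attempt a read, and interpret the outcomes. This is an illustrative sketch of the mechanism, not the library's implementation; the function name and its exact return conventions here are assumptions for the demo.

```python
# Sketch of a non-blocking read (POSIX-only, illustrative): returns the
# available data, b'' if the stream is open but has nothing to read,
# or None once the other side has closed and all data has been read.
import fcntl
import os

def simple_nonblock_read(fd, limit=65535):
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    try:
        data = os.read(fd, limit)
    except BlockingIOError:
        return b''        # stream open, but no data available right now
    finally:
        fcntl.fcntl(fd, fcntl.F_SETFL, flags)   # restore original flags
    if data == b'':
        return None       # other side closed and all data already read
    return data

# Usage with a pipe:
readFd, writeFd = os.pipe()
os.write(writeFd, b'ready')
first = simple_nonblock_read(readFd)    # b'ready'
second = simple_nonblock_read(readFd)   # b'' (open, nothing to read)
os.close(writeFd)
third = simple_nonblock_read(readFd)    # None (closed, all data read)
print(first, second, third)
```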
|