Module parallel_manager
class pysph.parallel.parallel_manager.ParallelManager(int dim, list particles, comm, double radius_scale=2.0, int ghost_layers=2, domain=None, bool update_cell_sizes=True)
  Bases: object
  Base class for all parallel managers.
  Constructor.
  Parameters:
    - dim (int) – Dimension
    - particles (list) – list of particle arrays to be managed.
    - comm (mpi4py.MPI.COMM) – MPI communicator for parallel invocations
    - radius_scale (double, default 2.0) – Optional kernel radius scale.
    - ghost_layers (int, default 2) – Optional factor for computing bounding boxes for cells.
    - domain (DomainManager, default None) – Optional limits for the domain
  - Mh: 'double'
  - Mx: 'double'
  - My: 'double'
  - Mz: 'double'
  - cell_list: list
  - cell_map: dict
  - cell_size: 'double'
  - comm: object
  - compute_cell_size(self)
    Compute the cell size for the binning.
    The cell size is chosen as the kernel radius scale times the maximum smoothing length on the local processor. For parallel runs, we need to communicate the maximum 'h' across all processors to decide on the appropriate binning size.
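    Below is a minimal sketch of this rule using mpi4py, for illustration only (it is not the manager's internal implementation); local_h stands for a hypothetical array of smoothing lengths on the calling processor:

      import numpy as np
      from mpi4py import MPI

      def sketch_cell_size(local_h, radius_scale=2.0, comm=MPI.COMM_WORLD):
          # Largest smoothing length on this processor ...
          h_max_local = float(np.max(local_h))
          # ... reduced to the global maximum across all processors.
          h_max_global = comm.allreduce(h_max_local, op=MPI.MAX)
          return radius_scale * h_max_global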
  - compute_remote_particles(self, pa_index)
  - create_particle_lists(self, pa_index)
  - dt_recvbuf: numpy.ndarray
  - dt_sendbuf: numpy.ndarray
  - exportCellGlobalids: pyzoltan.core.carray.UIntArray
  - exportCellLocalids: pyzoltan.core.carray.UIntArray
  - exportCellProcs: pyzoltan.core.carray.IntArray
  - get_nearest_particles(self, int src_index, int dst_index, size_t d_idx, UIntArray nbrs)
    Utility function to get near-neighbors for a particle.
    Parameters:
      - src_index (int) – Index of the source particle array in the particles list
      - dst_index (int) – Index of the destination particle array in the particles list
      - d_idx (int, input) – Particle for which neighbors are sought.
      - nbrs (UIntArray, output) – Neighbors for the requested particle are stored here.
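    An illustrative call, assuming pm is a parallel manager whose cell structure is up to date (for example after a call to update()):

      from pyzoltan.core.carray import UIntArray

      nbrs = UIntArray()
      # Neighbors of particle 0 of destination array 0, searched in source array 0.
      pm.get_nearest_particles(0, 0, 0, nbrs)
      print(nbrs.length)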
  - ghost_layers: 'int'
  - importCellGlobalids: pyzoltan.core.carray.UIntArray
  - importCellLocalids: pyzoltan.core.carray.UIntArray
  - importCellProcs: pyzoltan.core.carray.IntArray
  - initial_update: 'bool'
  - lb_count: 'int'
  - lb_freq: 'int'
  - load_balance(self)
  - local_bin(self)
    Create the local cell map.
    Bin the particles by deleting any previous cells and re-computing the indexing structure. This corresponds to a local binning process that is called when each processor has a given list of particles to deal with.
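    The idea can be sketched as follows (a conceptual illustration only, not the manager's internal cell_map layout): each particle is assigned to the cell containing it, keyed by its integer cell coordinates.

      import numpy as np

      def sketch_local_bin(x, y, z, cell_size):
          cells = {}
          keys = np.floor(np.c_[x, y, z] / cell_size).astype(int)
          for i, key in enumerate(map(tuple, keys)):
              cells.setdefault(key, []).append(i)
          return cells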
  - migrate_partition(self)
  - mx: 'double'
  - my: 'double'
  - mz: 'double'
  - ncells_local: 'int'
  - ncells_remote: 'int'
  - ncells_total: 'int'
  - numCellExport: 'int'
  - numCellImport: 'int'
  - num_global: list
  - num_local: list
  - num_remote: list
  - pa_exchanges: list
  - pa_wrappers: list
  - particles: list
  - radius_scale: 'double'
  - rank: 'int'
  - remove_remote_particles(self)
    Remove remote particles.
  - save_partition(self, fname, count=0)
    Collect cell data from processors and save.
  - set_data(self)
    Compute the user defined data for use with Zoltan.
  - set_lb_freq(self, int lb_freq)
  - size: 'int'
  - update(self)
  - update_cell_gids(self)
    Update global indices for the cell_map dictionary.
    The objects to be partitioned in this class are the cells and we need to number them uniquely across processors. The numbering is done sequentially, starting from the processor with rank 0 up to the processor with rank size-1.
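    A sketch of this sequential numbering with mpi4py (illustrative only): each processor offsets its local cell ids by the number of cells owned by lower ranks.

      from mpi4py import MPI

      def sketch_cell_gids(ncells_local, comm=MPI.COMM_WORLD):
          # Exclusive prefix sum of cell counts gives this rank's first global id.
          offset = comm.exscan(ncells_local)
          if offset is None:   # mpi4py returns None on rank 0
              offset = 0
          return [offset + i for i in range(ncells_local)]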
  - update_cell_sizes: 'bool'
  - update_local_data(self)
    Update the cell map after load balance.
    After the load balancing step, each processor has a new set of local particles which need to be indexed. This new cell structure is then used to compute remote particles.
  - update_particle_gids(self)
    Update individual particle global indices.
  - update_partition(self)
    Update the partition.
    This is the main entry point for the parallel manager. Given particles distributed across processors, we bin them, assign unique global indices for the cells and particles, and subsequently perform the following steps:
    - Call a load balancing function to get import and export lists for the cell indices.
    For each particle array:
    - Create particle import/export lists using the cell import/export lists.
    - Call ParticleArrayExchange's lb_exchange_data with these particle import/export lists to effect the data movement for particles.
    - Update the local cell map after the data has been exchanged.
    Now that we have the updated map, for each particle array:
    - Compute remote particle import/export lists from the cell map.
    - Call ParticleArrayExchange's remote_exchange_data with these lists to effect the data movement.
    Now that the data movement is done:
    - Update the local cell map to accommodate remote particles.
    Notes
    Although not intended to be a 'parallel' NNPS, the ParallelManager can be used for this purpose. I don't think this is advisable in variable smoothing length simulations, where we should be using a very coarse grained partitioning to ensure safe local computations.
    For two step integrators commonly employed in SPH, and with a sufficient number of ghost layers for remote particles, we should call 'update' only at the end of a time step. The idea of multiple ghost layers is to ensure that local particles have a sufficiently large halo region around them.
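    A hedged usage sketch of that advice (integrator, pm, dt and tf are illustrative names, not a fixed PySPH API): with enough ghost layers, update() is called once per time step, after both stages of the integrator.

      t, tf = 0.0, 1.0
      while t < tf:
          integrator.stage1(dt)   # hypothetical predictor stage
          integrator.stage2(dt)   # hypothetical corrector stage
          pm.update()             # refresh the partition and remote particles
          t += dt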
  - update_remote_data(self)
    Update the cell structure after sharing remote particles.
    After exchanging remote particles, we need to index the new remote particles for a possible neighbor query.
  - update_time_steps(self, double local_dt)
    Perform a reduction to compute the globally stable time step.
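    The reduction can be sketched with mpi4py as follows (illustrative only): the globally stable time step is the minimum of the locally stable time steps over all processors.

      from mpi4py import MPI

      def sketch_global_dt(local_dt, comm=MPI.COMM_WORLD):
          return comm.allreduce(local_dt, op=MPI.MIN)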
class pysph.parallel.parallel_manager.ParticleArrayExchange(int pa_index, ParticleArray pa, comm)
  Bases: object
  - align_particles(self)
  - comm: object
  - data_tag_lb: 'int'
  - data_tag_remote: 'int'
  - exportParticleGlobalids: pyzoltan.core.carray.UIntArray
  - exportParticleLocalids: pyzoltan.core.carray.UIntArray
  - exportParticleProcs: pyzoltan.core.carray.IntArray
  - extend(self, int currentsize, int newsize)
  - get_sendbufs(self, UIntArray exportIndices)
  - importParticleGlobalids: pyzoltan.core.carray.UIntArray
  - importParticleLocalids: pyzoltan.core.carray.UIntArray
  - importParticleProcs: pyzoltan.core.carray.IntArray
  - lb_exchange: 'bool'
  - lb_exchange_data(self)
    Share particle info after Zoltan_LB_Balance.
    After an initial call to 'Zoltan_LB_Balance', the new size of the arrays should be (num_particles - numExport + numImport) to reflect particles that are imported and exported.
    This function should be called after the load balancing lists are defined. The particles to be exported are removed and the arrays re-sized. MPI is then used to send and receive the data.
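    The resizing rule, written out with illustrative numbers (numExport and numImport come from the load-balance lists):

      num_particles = 1000
      numExport, numImport = 120, 80
      new_size = num_particles - numExport + numImport   # 960 local particles after the exchange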
  - lb_props: list
  - msglength_tag_lb: 'int'
  - msglength_tag_remote: 'int'
  - nprops: 'int'
  - numParticleExport: 'int'
  - numParticleImport: 'int'
  - num_ghost: 'size_t'
  - num_global: 'size_t'
  - num_local: 'size_t'
  - num_remote: 'size_t'
  - pa: pysph.base.particle_array.ParticleArray
  - pa_index: 'int'
  - pa_wrapper: pysph.base.nnps_base.NNPSParticleArrayWrapper
  - rank: 'int'
  - remote_exchange: 'bool'
  - remote_exchange_data(self)
    Share particle info after computing remote particles.
    After load balancing and the corresponding exchange of particles, additional particles are needed as remote particles. Calls to 'compute_remote_particles' and 'Zoltan_Invert_Lists' build the required lists.
    The arrays must now be re-sized to (num_particles + numImport) to reflect the new particles that come in as remote particles.
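    The corresponding size rule with illustrative numbers (the array only grows here; the imported particles arrive as remote particles after the local ones):

      num_particles = 960
      numImport = 150
      new_size = num_particles + numImport   # local particles followed by remote particles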
  - remove_remote_particles(self)
  - reset_lists(self)
    Reset the particle lists.
  - set_tag(self, int start, int end, int value)
    Reset the annoying tag value after particles are resized.
  - size: 'int'
  - update_particle_gids(self)
    Update the global indices.
    We call a utility function to get the new number of particles across the processors and then linearly assign indices to the objects.
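    A sketch of that linear assignment with mpi4py and NumPy (illustrative only): gather the per-rank particle counts and start numbering at the total owned by lower ranks.

      import numpy as np
      from mpi4py import MPI

      def sketch_particle_gids(num_local, comm=MPI.COMM_WORLD):
          counts = comm.allgather(num_local)
          start = sum(counts[:comm.rank])
          gids = np.arange(start, start + num_local, dtype=np.uint32)
          return gids, sum(counts)   # local gids and the new global count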
class pysph.parallel.parallel_manager.ZoltanParallelManager(int dim, list particles, comm, double radius_scale=2.0, int ghost_layers=2, domain=None, bool update_cell_sizes=True)
  Bases: pysph.parallel.parallel_manager.ParallelManager
  Base class for Zoltan enabled parallel cell managers.
  To partition a list of arrays, we do an NNPS-like box sort on all arrays to create a global spatial indexing structure. The cells are then used as 'objects' to be partitioned by Zoltan. The cell_map dictionary need not be unique across processors; we are responsible for assigning unique global ids to the cells.
  The Zoltan generated (cell) import/export lists are then used to construct particle import/export lists, which are used to perform the data movement of the particles. A ParticleArrayExchange object (lb_exchange_data and remote_exchange_data) is used for each particle array in turn to effect this movement.
  Constructor.
  Parameters:
    - dim (int) – Dimension
    - particles (list) – list of particle arrays to be managed.
    - comm (mpi4py.MPI.COMM) – MPI communicator for parallel invocations
    - radius_scale (double, default 2.0) – Optional kernel radius scale.
    - ghost_layers (int, default 2) – Optional factor for computing bounding boxes for cells.
    - domain (DomainManager, default None) – Optional limits for the domain
  - changes: 'int'
  - compute_remote_particles(self)
    Compute remote particles.
    Particles to be exported are determined by flagging individual cells and determining where they need to be shared to meet neighbor requirements.
  - create_particle_lists(self)
    Create particle export/import lists.
    For the cell based partitioner, Zoltan generated import and export lists apply to the cells. From this data, we can generate local export indices for the particles, which must then be inverted to get the requisite import lists.
    We first construct the particle export lists in local arrays using the information from Zoltan_LB_Balance and subsequently call Zoltan_Invert_Lists to get the import lists. These arrays are then copied to the ParticleArrayExchange import/export lists.
  - load_balance(self)
    Use Zoltan to generate import/export lists for the cells.
    For the Zoltan interface, we need to populate a user-defined struct with appropriate data for Zoltan to deal with. For the geometric partitioners, for example, we require the unique cell global indices and arrays for the cell centroid coordinates. Computation of this data must be done prior to calling 'pz.Zoltan_LB_Balance', through a call to 'set_data'.
  - migrate_particles(self)
  - pz: pyzoltan.core.zoltan.PyZoltan
class pysph.parallel.parallel_manager.ZoltanParallelManagerGeometric(int dim, list particles, comm, double radius_scale=2.0, int ghost_layers=2, domain=None, str lb_method='RCB', str keep_cuts='1', str obj_weight_dim='1', bool update_cell_sizes=True)
  Bases: pysph.parallel.parallel_manager.ZoltanParallelManager
  Zoltan enabled parallel manager for use with geometric load balancing.
  Use this class for the Zoltan RCB, RIB and HSFC load balancing algorithms.
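  A hedged construction sketch (assumes PySPH was built with Zoltan/PyZoltan support and is run under MPI; the particle data below is made up for illustration):

    import numpy as np
    from mpi4py import MPI
    from pysph.base.utils import get_particle_array
    from pysph.parallel.parallel_manager import ZoltanParallelManagerGeometric

    x, y = np.random.random((2, 1000))
    fluid = get_particle_array(name='fluid', x=x, y=y, h=np.full(1000, 0.05))

    pm = ZoltanParallelManagerGeometric(
        dim=2, particles=[fluid], comm=MPI.COMM_WORLD,
        radius_scale=2.0, ghost_layers=2, lb_method='RCB')
    pm.update()   # bin, load balance and exchange particles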
  - migrate_particles(self)
    Update an existing partition.
  - set_data(self)
    Set the user defined particle data structure for Zoltan.
    For the geometric based partitioners, Zoltan requires as input the number of local/global objects, unique global indices for the local objects and coordinate values for the local objects.
  - set_zoltan_rcb_directions(self, value)
    Option to group the cuts along a given direction.
    Legal values (refer to the Zoltan User Guide):
    '0' = don't order cuts; '1' = xyz; '2' = xzy; '3' = yzx; '4' = yxz; '5' = zxy; '6' = zyx
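    For example, to order the cuts as x, then y, then z (assuming pm is a ZoltanParallelManagerGeometric):

      pm.set_zoltan_rcb_directions('1')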
  - set_zoltan_rcb_lock_directions(self)
    Set the Zoltan flag to fix the directions of the RCB cuts.
  - set_zoltan_rcb_rectilinear_blocks(self)
    Flag to control the shape of the RCB regions.
  - set_zoltan_rcb_reuse(self)
    Use previous cuts as guesses for the current RCB cut.