Power Grid Investment Module (PowerGIM)
Created on Tue Aug 16 13:21:21 2016
@author: Martin Kristiansen, Harald Svendsen
class powergama.powergim.SipModel(M_const=1000, CO2price=False)

    Power Grid Investment Module - stochastic investment problem
Methods
computeAreaCostBranch(model, c, include_om=False)
    Investment cost of a single branch; corresponds to firstStageCost
    in the abstract model
computeAreaCostGen(model, c)
    Compute capital costs for new generator capacity
computeAreaEmissions(model, c, phase, cost=False)
    Compute total emissions from a load/country
computeAreaPrice(model, area, t, phase)
    Compute the approximate area price based on max marginal cost
computeAreaRES(model, j, phase, shareof)
    Compute renewable share of demand or of total generation capacity
computeAreaWelfare(model, c, t, phase)
    Compute social welfare for a given area and time step

    Returns: Welfare, ProducerSurplus, ConsumerSurplus,
    CongestionRent, IMport, eXport
computeCostBranch(model, b, include_om=False)
    Investment cost of a single branch; corresponds to firstStageCost
    in the abstract model
computeCostGenerator(model, g, include_om=False)
    Investment cost of a generator
computeCostNode(model, n, include_om=False)
    Investment cost of a single node; corresponds to cost in the
    abstract model
computeDemand(model, c, t)
    Compute demand at the specified load and time
computeGenerationCost(model, g, phase)
    Compute NPV cost of generation (+ curtailment and CO2 emissions);
    corresponds to secondStageCost in the abstract model
createConcreteModel(dict_data)
    Create a concrete Pyomo model for PowerGIM

    Parameters:
        dict_data : dictionary
            dictionary containing the model data; can be created with
            the createModelData(...) method

    Returns:
        concrete Pyomo model
createModelData(grid_data, datafile, maxNewBranchNum, maxNewBranchCap)
    Create model data in dictionary format

    Parameters:
        grid_data : powergama.GridData object
            contains grid model
        datafile : string
            name of XML file containing additional parameters
        maxNewBranchNum : int
            upper limit on parallel branches to consider (e.g. 10)
        maxNewBranchCap : float (MW)
            upper limit on new capacity to consider (e.g. 10000)

    Returns:
        dictionary with pyomo data (in pyomo format)
createScenarioTreeModel(num_scenarios)
    Generate a model instance with data; an alternative to .dat files

    Parameters:
        num_scenarios : int
            number of scenarios, each with the same probability

    Returns:
        PySP scenario tree model

    This method may be called by "pysp_scenario_tree_model_callback()"
    in the model input file instead of using input .dat files
extractResultingGridData(grid_data, model=None, file_ph=None, stage=1, scenario=None)
    Extract the resulting optimal grid layout from simulation results

    Parameters:
        grid_data : powergama.GridData
            grid data class
        model : Pyomo model
            concrete instance of the optimisation model containing
            deterministic results
        file_ph : string
            CSV file containing results from the stochastic solution
        stage : int
            which stage to extract data for (1 or 2);
            1: only stage one investments included (default);
            2: both stage one and stage two investments included
        scenario : int
            which stage 2 scenario to get data for (only relevant
            when stage=2)

    Use either the model or the file_ph parameter.

    Returns:
        GridData object reflecting the optimal solution
saveDeterministicResults(model, excel_file)
    Export results to an Excel file

    Parameters:
        model : Pyomo model
            concrete instance of the optimisation model
        excel_file : string
            name of the Excel file to create
writeStochasticProblem(path, dict_data)
    Create input files for solving the stochastic problem

    Parameters:
        path : string
            where to put the generated files
        dict_data : dictionary
            Pyomo data model in dictionary format; output from the
            createModelData method

    Returns:
        string that can be written to a .dat file (reference model data)
powergama.powergim.annuityfactor(rate, years)
    Net present value factor for fixed payments per year at a fixed rate
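The documentation does not give the formula, but a net present value factor for a fixed annual payment at a fixed discount rate is conventionally a = (1 - (1+r)^(-n))/r. A minimal sketch under that assumption (the library's exact implementation may differ):

```python
def annuityfactor(rate, years):
    """NPV of 1 unit paid at the end of each year for `years` years,
    discounted at `rate` (standard annuity formula; a sketch, not
    necessarily the library's exact implementation)."""
    if rate == 0:
        # no discounting: NPV is simply the number of payments
        return float(years)
    return (1.0 - (1.0 + rate) ** -years) / rate

# 30 annual payments at a 5% discount rate
print(annuityfactor(0.05, 30))  # about 15.37
```

Multiplying an annual operating cost by this factor converts it to a present value over the investment horizon.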
powergama.powergim.sampleProfileData(data, samplesize, sampling_method)
    Sample data from full-year time series

    Parameters:
        data : matrix
            data matrix to sample from
        samplesize : int
            size of sample
        sampling_method : str
            'kmeans', 'lhs', 'uniform', ('mmatching', 'meanshift')

    Returns:
        reduced data matrix according to sample size and method
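The simplest of the listed methods is 'uniform'. A NumPy-only sketch of what uniform row sampling plausibly looks like (the function name `sample_uniform` and the without-replacement choice are assumptions, not taken from the source):

```python
import numpy as np

def sample_uniform(X, samplesize, seed=None):
    """Hypothetical sketch of the 'uniform' option: draw `samplesize`
    rows of X uniformly at random, without replacement."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(X.shape[0], size=samplesize, replace=False)
    return X[idx]

# 8760 hourly values for 3 profiles, reduced to 100 sampled hours
X = np.random.default_rng(0).random((8760, 3))
print(sample_uniform(X, 100, seed=1).shape)  # (100, 3)
```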
powergama.powergim.sample_kmeans(X, samplesize)
    K-means sampling

    Parameters:
        X : matrix
            data matrix to sample from
        samplesize : int
            size of sample

    This method relies on sklearn.cluster.KMeans
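One plausible reading of k-means sampling, sketched with the `sklearn.cluster.KMeans` class the docstring names: fit `samplesize` clusters and use the cluster centres as the reduced dataset. Whether the actual method returns centroids or the nearest real data points is an assumption here:

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_kmeans(X, samplesize):
    """Sketch of k-means reduction: fit `samplesize` clusters and take
    the cluster centres as representative samples (assumption; the
    library may instead select the nearest actual data points)."""
    km = KMeans(n_clusters=samplesize, n_init=10, random_state=0).fit(X)
    return km.cluster_centers_

X = np.random.default_rng(0).random((500, 3))
print(sample_kmeans(X, 20).shape)  # (20, 3)
```

Centroid-based reduction preserves the spread of operating conditions better than uniform sampling, at the cost of returning synthetic (averaged) rows.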
powergama.powergim.sample_latinhypercube(X, samplesize)
    Latin hypercube sampling

    Parameters:
        X : matrix
            data matrix to sample from
        samplesize : int
            size of sample

    This method relies on pyDOE.lhs(n, [samples, criterion, iterations])
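The module itself uses pyDOE.lhs, but the underlying idea can be sketched with NumPy alone: split each dimension into `samples` equal-probability strata, take one point per stratum, and randomly pair strata across dimensions. This is an illustration of the technique, not the library's code:

```python
import numpy as np

def latin_hypercube(n, samples, seed=None):
    """NumPy-only sketch of Latin hypercube sampling on [0, 1)^n:
    one point per equal-width stratum in each dimension, with strata
    shuffled independently across dimensions."""
    rng = np.random.default_rng(seed)
    # one random point inside each of `samples` equal-width strata
    u = (np.arange(samples)[:, None] + rng.random((samples, n))) / samples
    # independently shuffle the strata in each dimension
    for j in range(n):
        rng.shuffle(u[:, j])
    return u  # shape (samples, n), values in [0, 1)

pts = latin_hypercube(2, 10, seed=0)
print(pts.shape)  # (10, 2)
```

Mapping the unit-cube points back to data values (e.g. via empirical quantiles of X) would be the remaining step in a full `sample_latinhypercube`.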
powergama.powergim.sample_meanshift(X, samplesize)
    Mean-shift sampling

    Parameters:
        X : matrix
            data matrix to sample from
        samplesize : int
            size of sample

    This method relies on sklearn.cluster.MeanShift. It is a
    centroid-based algorithm, which works by updating candidates for
    centroids to be the mean of the points within a given region.
    These candidates are then filtered in a post-processing stage to
    eliminate near-duplicates to form the final set of centroids.
powergama.powergim.sample_mmatching(X, samplesize)
    Moment-matching sampling: make e.g. 10000 random sample sets of
    size=samplesize from the original dataset X, then choose the sample
    set with the lowest objective in terms of statistical measures:
    MINIMIZE [(meanSample - meanX)^2 + (stdvSample - stdvX)^2 + ...]
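The moment-matching procedure described above can be sketched directly in NumPy. The number of trials and the exact objective weighting are assumptions (the docstring suggests e.g. 10000 trials and a sum of squared mean and standard-deviation errors):

```python
import numpy as np

def sample_mmatching(X, samplesize, n_trials=1000, seed=None):
    """Sketch of moment-matching sampling: draw many candidate subsets
    and keep the one whose per-column mean and standard deviation best
    match the full dataset X."""
    rng = np.random.default_rng(seed)
    best, best_obj = None, np.inf
    for _ in range(n_trials):
        idx = rng.choice(X.shape[0], size=samplesize, replace=False)
        S = X[idx]
        # objective: squared error of sample moments vs full-data moments
        obj = (np.sum((S.mean(axis=0) - X.mean(axis=0)) ** 2)
               + np.sum((S.std(axis=0) - X.std(axis=0)) ** 2))
        if obj < best_obj:
            best, best_obj = S, obj
    return best

X = np.random.default_rng(0).random((1000, 2))
S = sample_mmatching(X, 50, seed=1)
print(S.shape)  # (50, 2)
```

Unlike k-means or mean-shift, this returns actual rows of X, which keeps the sampled time steps physically consistent across profiles.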