jujubigdata.relations¶
jujubigdata.relations.DataNode
    Relation which communicates DataNode info back to NameNodes.
jujubigdata.relations.EtcHostsRelation
jujubigdata.relations.FlumeAgent
jujubigdata.relations.Ganglia
jujubigdata.relations.HBase
jujubigdata.relations.HadoopPlugin
    This helper class manages the hadoop-plugin interface, and is the recommended way of interacting with the endpoint via this interface.
jujubigdata.relations.HadoopREST
    This helper class manages the hadoop-rest interface, and is the recommended way of interacting with the endpoint via this interface.
jujubigdata.relations.Hive
jujubigdata.relations.Kafka
jujubigdata.relations.MySQL
jujubigdata.relations.NameNode
    Relation which communicates the NameNode (HDFS) connection & status info.
jujubigdata.relations.NameNodeMaster
    Alternate NameNode relation for DataNodes.
jujubigdata.relations.NodeManager
    Relation which communicates NodeManager info back to ResourceManagers.
jujubigdata.relations.ResourceManager
    Relation which communicates the ResourceManager (YARN) connection & status info.
jujubigdata.relations.ResourceManagerMaster
    Alternate ResourceManager relation for NodeManagers.
jujubigdata.relations.SSHRelation
jujubigdata.relations.SecondaryNameNode
    Relation which communicates SecondaryNameNode info back to NameNodes.
jujubigdata.relations.Spark
jujubigdata.relations.SpecMatchingRelation
    Relation base class that validates that the version and environment of two related charms match, to prevent interoperability issues.
jujubigdata.relations.Zookeeper
class jujubigdata.relations.DataNode(spec=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.SpecMatchingRelation

    Relation which communicates DataNode info back to NameNodes.

    provide(remote_service, all_ready)¶

    relation_name = 'datanode'¶

    required_keys = ['private-address', 'hostname']¶

class jujubigdata.relations.EtcHostsRelation(*args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    am_i_registered()¶

    provide(remote_service, all_ready)¶

    register_connected_hosts()¶

    register_provided_hosts()¶

class jujubigdata.relations.FlumeAgent(port=None, *args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    provide(remote_service, all_ready)¶

    relation_name = 'flume-agent'¶

    required_keys = ['private-address', 'port']¶

class jujubigdata.relations.Ganglia(**kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    host()¶

    relation_name = 'ganglia'¶

    required_keys = ['private-address']¶

class jujubigdata.relations.HBase(master=None, region=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.SSHRelation

    provide(remote_service, all_ready)¶

    relation_name = 'hbase'¶

    required_keys = ['private-address', 'master-port', 'region-port', 'ssh-key']¶

class jujubigdata.relations.HadoopPlugin(hdfs_only=False, *args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    This helper class manages the hadoop-plugin interface, and is the
    recommended way of interacting with the endpoint via this interface.

    Charms using this interface will have a JRE installed, the Hadoop API
    Java libraries installed, the Hadoop configuration managed in
    /etc/hadoop/conf, and the environment configured in /etc/environment.
    The endpoint will ensure that the distribution, version, Java, etc. are
    all compatible to ensure a properly functioning Hadoop ecosystem.

    Charms using this interface can call is_ready() (or hdfs_is_ready()) to
    determine if this relation is ready to use.

    hdfs_is_ready()¶
        Check if the Hadoop libraries are installed and configured and HDFS
        is connected and ready to handle work (at least one DataNode
        available).

        (This is a synonym for is_ready().)

    is_ready()¶

    provide(remote_service, all_ready)¶
        Used by the endpoint to provide the required_keys.

    relation_name = 'hadoop-plugin'¶

    required_keys = ['yarn-ready', 'hdfs-ready']¶
        These keys will be set on the relation once everything is installed,
        configured, connected, and ready to receive work. They can be checked
        by calling is_ready(), or manually via Juju’s relation-get.

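    For illustration, a minimal sketch of a client charm guarding its work on
    this interface; the hook wiring and the action taken once the relation is
    ready are assumptions, not part of this API:

        from jujubigdata.relations import HadoopPlugin

        hadoop = HadoopPlugin()
        if hadoop.is_ready():
            # 'yarn-ready' and 'hdfs-ready' are set on the relation, so the
            # Hadoop libraries are configured and the cluster can accept work.
            pass  # hypothetical: start or reconfigure the service that needs Hadoop
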
class jujubigdata.relations.HadoopREST(**kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    This helper class manages the hadoop-rest interface, and is the
    recommended way of interacting with the endpoint via this interface.

    Charms using this interface are provided with the API endpoint
    information for the NameNode, ResourceManager, and JobHistoryServer.

    hdfs_port¶
        Property containing the HDFS port, or None if not available.

    hdfs_uri¶
        Property containing the full HDFS URI, or None if not available.

    historyserver_host¶
        Property containing the HistoryServer host, or None if not available.

    historyserver_port¶
        Property containing the HistoryServer port, or None if not available.

    historyserver_uri¶
        Property containing the full JobHistoryServer API URI, or None if not
        available.

    namenode_host¶
        Property containing the NameNode host, or None if not available.

    provide(remote_service, all_ready)¶
        Used by the endpoint to provide the required_keys.

    relation_name = 'hadoop-rest'¶

    required_keys = ['namenode-host', 'hdfs-port', 'webhdfs-port', 'resourcemanager-host', 'resourcemanager-port', 'historyserver-host', 'historyserver-port']¶

    resourcemanager_host¶
        Property containing the ResourceManager host, or None if not
        available.

    resourcemanager_port¶
        Property containing the ResourceManager port, or None if not
        available.

    resourcemanager_uri¶
        Property containing the full ResourceManager API URI, or None if not
        available.

    webhdfs_port¶
        Property containing the WebHDFS port, or None if not available.

    webhdfs_uri¶
        Property containing the full WebHDFS URI, or None if not available.

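    As an illustrative sketch, a client charm could read the endpoint
    properties directly; the None check and what is done with the values are
    assumptions:

        from jujubigdata.relations import HadoopREST

        rest = HadoopREST()
        hdfs = rest.hdfs_uri              # full HDFS URI, or None
        webhdfs = rest.webhdfs_uri        # full WebHDFS URI, or None
        yarn = rest.resourcemanager_uri   # full ResourceManager API URI, or None
        if None not in (hdfs, webhdfs, yarn):
            pass  # hypothetical: render the endpoints into the client's config
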
class jujubigdata.relations.Hive(port=None, *args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    provide(remote_service, all_ready)¶

    relation_name = 'hive'¶

    required_keys = ['private-address', 'port', 'ready']¶

class jujubigdata.relations.Kafka(port=None, *args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    provide(remote_service, all_ready)¶

    relation_name = 'kafka'¶

    required_keys = ['private-address', 'port']¶

class jujubigdata.relations.MySQL(**kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    relation_name = 'db'¶

    required_keys = ['host', 'database', 'user', 'password']¶

class jujubigdata.relations.NameNode(spec=None, port=None, webhdfs_port=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.SpecMatchingRelation, jujubigdata.relations.EtcHostsRelation

    Relation which communicates the NameNode (HDFS) connection & status
    info. This is the relation that clients should use.

    has_slave()¶
        Check if the NameNode has any DataNode slaves registered. This
        reflects whether HDFS is ready without having to wait for
        utils.wait_for_hdfs.

    is_ready()¶

    provide(remote_service, all_ready)¶

    relation_name = 'namenode'¶

    require_slave = True¶

    required_keys = ['private-address', 'has_slave', 'port', 'webhdfs-port']¶

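    A minimal client-side sketch, assuming the charm only needs to know that
    HDFS can accept work (the follow-up action is hypothetical):

        from jujubigdata.relations import NameNode

        hdfs = NameNode()
        if hdfs.has_slave():
            # At least one DataNode is registered, so HDFS can handle work
            # without waiting on utils.wait_for_hdfs.
            pass  # hypothetical: create HDFS directories, start the client, etc.
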
class jujubigdata.relations.NameNodeMaster(spec=None, port=None, webhdfs_port=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.NameNode, jujubigdata.relations.SSHRelation

    Alternate NameNode relation for DataNodes.

    relation_name = 'datanode'¶

    require_slave = False¶

    ssh_user = 'hdfs'¶

class jujubigdata.relations.NodeManager(**kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    Relation which communicates NodeManager info back to ResourceManagers.

    provide(remote_service, all_ready)¶

    relation_name = 'nodemanager'¶

    required_keys = ['private-address', 'hostname']¶

class jujubigdata.relations.ResourceManager(spec=None, port=None, historyserver_http=None, historyserver_ipc=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.SpecMatchingRelation, jujubigdata.relations.EtcHostsRelation

    Relation which communicates the ResourceManager (YARN) connection &
    status info. This is the relation that clients should use.

    has_slave()¶
        Check if the ResourceManager has any NodeManager slaves registered.

    is_ready()¶

    provide(remote_service, all_ready)¶

    relation_name = 'resourcemanager'¶

    require_slave = True¶

    required_keys = ['private-address', 'has_slave', 'historyserver-http', 'historyserver-ipc', 'port']¶

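    Analogous to NameNode, a brief client-side sketch (the follow-up action
    is hypothetical):

        from jujubigdata.relations import ResourceManager

        yarn = ResourceManager()
        if yarn.has_slave():
            # At least one NodeManager is registered with the ResourceManager.
            pass  # hypothetical: safe to submit YARN work from this charm
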
class jujubigdata.relations.ResourceManagerMaster(spec=None, port=None, historyserver_http=None, historyserver_ipc=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.ResourceManager, jujubigdata.relations.SSHRelation

    Alternate ResourceManager relation for NodeManagers.

    relation_name = 'nodemanager'¶

    require_slave = False¶

    ssh_user = 'yarn'¶

class jujubigdata.relations.SSHRelation(*args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    install_ssh_keys()¶

    provide(remote_service, all_ready)¶

    ssh_user = 'ubuntu'¶

class jujubigdata.relations.SecondaryNameNode(spec=None, port=None, *args, **kwargs)¶

    Bases: jujubigdata.relations.SpecMatchingRelation

    Relation which communicates SecondaryNameNode info back to NameNodes.

    provide(remote_service, all_ready)¶

    relation_name = 'secondary'¶

    required_keys = ['private-address', 'hostname', 'port']¶

class jujubigdata.relations.Spark(**kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    provide(remote_service, all_ready)¶

    relation_name = 'spark'¶

    required_keys = ['ready']¶

class jujubigdata.relations.SpecMatchingRelation(spec=None, *args, **kwargs)¶

    Bases: charmhelpers.core.charmframework.helpers.Relation

    Relation base class that validates that the version and environment of
    two related charms match, to prevent interoperability issues.

    This class adds a spec key to the required_keys and populates it in
    provide(). The spec value must be passed in to __init__().

    The spec should be a mapping (or a callback that returns a mapping)
    which describes all aspects of the charm’s environment or configuration
    that might affect its interoperability with the remote charm. The charm
    on the requires side of the relation will verify that all of the keys in
    its spec are present and exactly equal on the provides side of the
    relation. This does mean that the requires side can be a subset of the
    provides side, but not the other way around.

    An example spec might be:

        {
            'arch': 'x86_64',
            'vendor': 'apache',
            'version': '2.4',
        }

    filtered_data(remote_service=None)¶

    is_ready()¶
        Validate the spec data from the connected units to ensure that it
        matches the local spec.

    provide(remote_service, all_ready)¶
        Provide the spec data to the remote service.

        Subclasses must either delegate to this method (e.g., via super())
        or include 'spec': json.dumps(self.spec) in the provided data
        themselves.

    spec¶
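
    As an illustrative sketch, assuming a hypothetical MyService relation
    (the relation name, keys, and spec values below are made up), a subclass
    could enforce spec matching like this:

        from jujubigdata.relations import SpecMatchingRelation

        class MyService(SpecMatchingRelation):
            # Hypothetical subclass; the relations in this module (DataNode,
            # NameNode, ...) follow the same pattern.
            relation_name = 'myservice'
            required_keys = ['private-address', 'port']

        def my_spec():
            # Everything here must be present and exactly equal on the
            # provides side of the relation.
            return {
                'arch': 'x86_64',
                'vendor': 'apache',
                'version': '2.4',
            }

        relation = MyService(spec=my_spec)
        # provide() publishes the spec (JSON-encoded) along with the other
        # relation data; is_ready() on the requires side only reports ready
        # once the remote spec matches the local one.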