google.cloud.vision.v1.image_annotator_pb2
Classes
AnnotateImageRequest – Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
AnnotateImageResponse – Response to an image annotation request.
BatchAnnotateImagesRequest – Multiple image annotation requests are batched into a single service call.
BatchAnnotateImagesResponse – Response to a batch image annotation request.
ColorInfo – Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
DominantColorsAnnotation – Set of dominant colors and their corresponding scores.
EntityAnnotation – Set of detected entity features.
FaceAnnotation – A face annotation object contains the results of face detection.
Feature – The Feature indicates what type of image detection task to perform.
Image – Client image to perform Google Cloud Vision API tasks over.
ImageContext – Image context.
ImageProperties – Stores image properties (e.g. dominant colors).
ImageSource – External image source (Google Cloud Storage image location).
LatLongRect – Rectangle determined by min and max LatLng pairs.
LocationInfo – Detected entity location information.
Property – Arbitrary name/value pair.
SafeSearchAnnotation – Set of features pertaining to the image, computed by various computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
class google.cloud.vision.v1.image_annotator_pb2.AnnotateImageRequest
    Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.

    image (google.cloud.vision.v1.image_annotator_pb2.Image) – The image to be processed.
    features (list[google.cloud.vision.v1.image_annotator_pb2.Feature]) – Requested features.
    image_context (google.cloud.vision.v1.image_annotator_pb2.ImageContext) – Additional context that may accompany the image.
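A minimal sketch of assembling a request, assuming the module is importable at the path shown in this reference; photo.jpg is only a placeholder for a local image file:

    from google.cloud.vision.v1 import image_annotator_pb2

    # Read the raw image bytes that will be sent inline with the request.
    with open("photo.jpg", "rb") as f:
        image_bytes = f.read()

    request = image_annotator_pb2.AnnotateImageRequest(
        image=image_annotator_pb2.Image(content=image_bytes),
        features=[
            image_annotator_pb2.Feature(
                type=image_annotator_pb2.Feature.LABEL_DETECTION,
                max_results=5,
            )
        ],
    )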
class google.cloud.vision.v1.image_annotator_pb2.AnnotateImageResponse
    Response to an image annotation request.

    face_annotations (list[google.cloud.vision.v1.image_annotator_pb2.FaceAnnotation]) – If present, face detection completed successfully.
    landmark_annotations (list[google.cloud.vision.v1.image_annotator_pb2.EntityAnnotation]) – If present, landmark detection completed successfully.
    logo_annotations (list[google.cloud.vision.v1.image_annotator_pb2.EntityAnnotation]) – If present, logo detection completed successfully.
    label_annotations (list[google.cloud.vision.v1.image_annotator_pb2.EntityAnnotation]) – If present, label detection completed successfully.
    text_annotations (list[google.cloud.vision.v1.image_annotator_pb2.EntityAnnotation]) – If present, text (OCR) detection completed successfully.
    safe_search_annotation (google.cloud.vision.v1.image_annotator_pb2.SafeSearchAnnotation) – If present, safe-search annotation completed successfully.
    image_properties_annotation (google.cloud.vision.v1.image_annotator_pb2.ImageProperties) – If present, image properties were extracted successfully.
    error (google.rpc.status_pb2.Status) – If set, represents the error message for the operation. Note that filled-in image annotations are guaranteed to be correct, even when error is non-empty.
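A rough sketch of consuming a response; the helper below is hypothetical and assumes response was returned by a prior annotation call:

    from google.cloud.vision.v1 import image_annotator_pb2

    def summarize(response: image_annotator_pb2.AnnotateImageResponse) -> None:
        # A populated error field signals a (possibly partial) failure.
        if response.HasField("error"):
            print("error:", response.error.message)
        # Repeated annotation fields are simply empty when a feature was not requested.
        for label in response.label_annotations:
            print(f"{label.description}: {label.score:.2f}")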
class google.cloud.vision.v1.image_annotator_pb2.BatchAnnotateImagesRequest
    Multiple image annotation requests are batched into a single service call.

    requests (list[google.cloud.vision.v1.image_annotator_pb2.AnnotateImageRequest]) – Individual image annotation requests for this batch.
class google.cloud.vision.v1.image_annotator_pb2.BatchAnnotateImagesResponse
    Response to a batch image annotation request.

    responses (list[google.cloud.vision.v1.image_annotator_pb2.AnnotateImageResponse]) – Individual responses to image annotation requests within the batch.
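A sketch of how batch request and response pair up; make_request is a hypothetical helper and b"..." stands in for real image bytes:

    from google.cloud.vision.v1 import image_annotator_pb2

    def make_request(image_bytes: bytes) -> image_annotator_pb2.AnnotateImageRequest:
        return image_annotator_pb2.AnnotateImageRequest(
            image=image_annotator_pb2.Image(content=image_bytes),
            features=[image_annotator_pb2.Feature(type=image_annotator_pb2.Feature.LABEL_DETECTION)],
        )

    # Fold several per-image requests into a single service call.
    batch = image_annotator_pb2.BatchAnnotateImagesRequest(
        requests=[make_request(b"..."), make_request(b"...")],
    )
    # The service replies with a BatchAnnotateImagesResponse whose responses field
    # holds one AnnotateImageResponse per submitted request.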
class google.cloud.vision.v1.image_annotator_pb2.ColorInfo
    Color information consists of RGB channels, score, and the fraction of the image that the color occupies.

    color (google.type.color_pb2.Color) – RGB components of the color.
    score (float) – Image-specific score for this color. Value in range [0, 1].
    pixel_fraction (float) – Stores the fraction of pixels the color occupies in the image. Value in range [0, 1].
class google.cloud.vision.v1.image_annotator_pb2.DominantColorsAnnotation
    Set of dominant colors and their corresponding scores.

    colors (list[google.cloud.vision.v1.image_annotator_pb2.ColorInfo]) – RGB color values, with their score and pixel fraction.
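A sketch of walking the dominant-color results, assuming response is an AnnotateImageResponse from an IMAGE_PROPERTIES request:

    from google.cloud.vision.v1 import image_annotator_pb2

    def print_dominant_colors(response: image_annotator_pb2.AnnotateImageResponse) -> None:
        for color_info in response.image_properties_annotation.dominant_colors.colors:
            rgb = color_info.color  # red/green/blue components as reported by the service
            print(
                f"rgb=({rgb.red}, {rgb.green}, {rgb.blue}) "
                f"score={color_info.score:.2f} fraction={color_info.pixel_fraction:.2f}"
            )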
class google.cloud.vision.v1.image_annotator_pb2.EntityAnnotation
    Set of detected entity features.

    mid (string) – Opaque entity ID. Some IDs might be available in the Knowledge Graph (KG). For more details on KG, see https://developers.google.com/knowledge-graph/
    locale (string) – The language code for the locale in which the entity textual description (next field) is expressed.
    description (string) – Entity textual description, expressed in its locale language.
    score (float) – Overall score of the result. Range [0, 1].
    confidence (float) – The accuracy of the entity detection in an image. For example, for an image containing ‘Eiffel Tower’, this field represents the confidence that there is a tower in the query image. Range [0, 1].
    topicality (float) – The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of ‘tower’ to an image containing ‘Eiffel Tower’ is likely higher than to an image containing a distant towering building, though the confidence that there is a tower may be the same. Range [0, 1].
    bounding_poly (google.cloud.vision.v1.geometry_pb2.BoundingPoly) – Image region to which this entity belongs. Not filled currently for LABEL_DETECTION features. For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image region, followed by boundingPolys for each word within the detected text.
    locations (list[google.cloud.vision.v1.image_annotator_pb2.LocationInfo]) – The location information for the detected entity. Multiple LocationInfo elements can be present since one location may indicate the location of the scene in the query image, and another the location of the place where the query image was taken. Location information is usually present for landmarks.
    properties (list[google.cloud.vision.v1.image_annotator_pb2.Property]) – Some entities can have additional optional Property fields. For example, a different kind of score or string that qualifies the entity.
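An illustrative sketch of reading entity fields, assuming response came from a LANDMARK_DETECTION request:

    from google.cloud.vision.v1 import image_annotator_pb2

    def print_landmarks(response: image_annotator_pb2.AnnotateImageResponse) -> None:
        for entity in response.landmark_annotations:
            print(f"{entity.description} (mid={entity.mid}) score={entity.score:.2f}")
            # Location information is usually present for landmarks.
            for location in entity.locations:
                print("  at", location.lat_lng.latitude, location.lat_lng.longitude)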
class google.cloud.vision.v1.image_annotator_pb2.FaceAnnotation
    A face annotation object contains the results of face detection.

    bounding_poly (google.cloud.vision.v1.geometry_pb2.BoundingPoly) – The bounding polygon around the face. The coordinates of the bounding box are in the original image’s scale, as returned in ImageParams. The bounding box is computed to “frame” the face in accordance with human expectations. It is based on the landmarker results. Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.
    fd_bounding_poly (google.cloud.vision.v1.geometry_pb2.BoundingPoly) – This bounding polygon is tighter than the previous boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the “amount of skin” visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.
    landmarks (list[google.cloud.vision.v1.image_annotator_pb2.FaceAnnotation.Landmark]) – Detected face landmarks.
    roll_angle (float) – Roll angle. Indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical, about the axis perpendicular to the face. Range [-180, 180].
    pan_angle (float) – Yaw angle. Indicates the leftward/rightward angle that the face is pointing, relative to the vertical plane perpendicular to the image. Range [-180, 180].
    tilt_angle (float) – Pitch angle. Indicates the upwards/downwards angle that the face is pointing relative to the image’s horizontal plane. Range [-180, 180].
    detection_confidence (float) – Detection confidence. Range [0, 1].
    landmarking_confidence (float) – Face landmarking confidence. Range [0, 1].
    joy_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Joy likelihood.
    sorrow_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Sorrow likelihood.
    anger_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Anger likelihood.
    surprise_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Surprise likelihood.
    under_exposed_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Under-exposed likelihood.
    blurred_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Blurred likelihood.
    headwear_likelihood (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Headwear likelihood.
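A short sketch of reading face results, assuming response came from a FACE_DETECTION request; Likelihood.Name converts the enum value back into its symbolic name:

    from google.cloud.vision.v1 import image_annotator_pb2

    def print_faces(response: image_annotator_pb2.AnnotateImageResponse) -> None:
        for face in response.face_annotations:
            joy = image_annotator_pb2.Likelihood.Name(face.joy_likelihood)
            print(f"confidence={face.detection_confidence:.2f} joy={joy}")
            # Head pose angles are reported in degrees.
            print(f"  roll={face.roll_angle:.1f} pan={face.pan_angle:.1f} tilt={face.tilt_angle:.1f}")
            # Outer bounding polygon around the face.
            for vertex in face.bounding_poly.vertices:
                print(f"  vertex=({vertex.x}, {vertex.y})")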
class google.cloud.vision.v1.image_annotator_pb2.Feature
    The Feature indicates what type of image detection task to perform. Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return.

    type (google.cloud.vision.v1.image_annotator_pb2.Feature.Type) – The feature type.
    max_results (int) – Maximum number of results of this type.
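A small sketch of requesting two verticals in one call; LABEL_DETECTION and TEXT_DETECTION are Feature.Type values:

    from google.cloud.vision.v1 import image_annotator_pb2

    features = [
        # Ask for at most ten label results.
        image_annotator_pb2.Feature(
            type=image_annotator_pb2.Feature.LABEL_DETECTION,
            max_results=10,
        ),
        # OCR over the same image.
        image_annotator_pb2.Feature(type=image_annotator_pb2.Feature.TEXT_DETECTION),
    ]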
class google.cloud.vision.v1.image_annotator_pb2.Image
    Client image to perform Google Cloud Vision API tasks over.

    content (bytes) – Image content, represented as a stream of bytes. Note: as with all bytes fields, protobuffers use a pure binary representation, whereas JSON representations use base64.
    source (google.cloud.vision.v1.image_annotator_pb2.ImageSource) – Google Cloud Storage image location. If both ‘content’ and ‘source’ are filled for an image, ‘content’ takes precedence and will be used to perform the image annotation request.
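A minimal sketch of supplying the image inline as raw bytes; photo.jpg is a placeholder file name:

    from google.cloud.vision.v1 import image_annotator_pb2

    with open("photo.jpg", "rb") as f:
        image = image_annotator_pb2.Image(content=f.read())
    # If source were also set, the inline content would take precedence.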
class google.cloud.vision.v1.image_annotator_pb2.ImageContext
    Image context.

    lat_long_rect (google.cloud.vision.v1.image_annotator_pb2.LatLongRect) – Lat/long rectangle that specifies the location of the image.
    language_hints (list[string]) – List of languages to use for TEXT_DETECTION. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting language_hints is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong). Text detection returns an error if one or more of the specified languages is not one of the supported languages.
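A sketch of attaching a language hint to an OCR request; the hint value "en" and the placeholder bytes are illustrative only:

    from google.cloud.vision.v1 import image_annotator_pb2

    request = image_annotator_pb2.AnnotateImageRequest(
        image=image_annotator_pb2.Image(content=b"..."),  # placeholder image bytes
        features=[image_annotator_pb2.Feature(type=image_annotator_pb2.Feature.TEXT_DETECTION)],
        image_context=image_annotator_pb2.ImageContext(language_hints=["en"]),
    )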
class google.cloud.vision.v1.image_annotator_pb2.ImageProperties
    Stores image properties (e.g. dominant colors).

    dominant_colors (google.cloud.vision.v1.image_annotator_pb2.DominantColorsAnnotation) – If present, dominant colors completed successfully.
class google.cloud.vision.v1.image_annotator_pb2.ImageSource
    External image source (Google Cloud Storage image location).

    gcs_image_uri (string) – Google Cloud Storage image URI. It must be in the following form: gs://bucket_name/object_name. For more details, please see https://cloud.google.com/storage/docs/reference-uris. NOTE: Cloud Storage object versioning is not supported.
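A minimal sketch of referencing an object already stored in Google Cloud Storage; the bucket and object names are placeholders:

    from google.cloud.vision.v1 import image_annotator_pb2

    image = image_annotator_pb2.Image(
        source=image_annotator_pb2.ImageSource(gcs_image_uri="gs://bucket_name/object_name"),
    )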
class google.cloud.vision.v1.image_annotator_pb2.LatLongRect
    Rectangle determined by min and max LatLng pairs.

    min_lat_lng (google.type.latlng_pb2.LatLng) – Min lat/long pair.
    max_lat_lng (google.type.latlng_pb2.LatLng) – Max lat/long pair.
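A sketch of building the rectangle from google.type LatLng pairs and attaching it through ImageContext.lat_long_rect; the coordinates are arbitrary placeholders:

    from google.type import latlng_pb2
    from google.cloud.vision.v1 import image_annotator_pb2

    rect = image_annotator_pb2.LatLongRect(
        min_lat_lng=latlng_pb2.LatLng(latitude=40.0, longitude=-75.0),
        max_lat_lng=latlng_pb2.LatLng(latitude=41.0, longitude=-74.0),
    )
    context = image_annotator_pb2.ImageContext(lat_long_rect=rect)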
class google.cloud.vision.v1.image_annotator_pb2.LocationInfo
    Detected entity location information.

    lat_lng (google.type.latlng_pb2.LatLng) – Lat/long location coordinates.
class google.cloud.vision.v1.image_annotator_pb2.Property
    Arbitrary name/value pair.

    name (string) – Name of the property.
    value (string) – Value of the property.
class google.cloud.vision.v1.image_annotator_pb2.SafeSearchAnnotation
    Set of features pertaining to the image, computed by various computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).

    adult (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Represents the adult content likelihood for the image.
    spoof (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Spoof likelihood. The likelihood that an obvious modification was made to the image’s canonical version to make it appear funny or offensive.
    medical (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Likelihood that this is a medical image.
    violence (google.cloud.vision.v1.image_annotator_pb2.Likelihood) – Violence likelihood.
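A sketch of reading the safe-search verticals by name, assuming response came from a SAFE_SEARCH_DETECTION request:

    from google.cloud.vision.v1 import image_annotator_pb2

    def print_safe_search(response: image_annotator_pb2.AnnotateImageResponse) -> None:
        annotation = response.safe_search_annotation
        for vertical in ("adult", "spoof", "medical", "violence"):
            value = getattr(annotation, vertical)
            print(vertical, image_annotator_pb2.Likelihood.Name(value))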