SDK Client
- class groundlight.Groundlight(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None)
Client for accessing the Groundlight cloud service.
The API token (auth) is specified through the GROUNDLIGHT_API_TOKEN environment variable by default.
Example usage:
```python
gl = Groundlight()
detector = gl.get_or_create_detector(
    name="door",
    query="Is the door locked?",
    confidence_threshold=0.9,
)
image_query = gl.submit_image_query(
    detector=detector,
    image="path/to/image.jpeg",
    wait=0.0,
    human_review="ALWAYS",
)
print(f"Image query confidence = {image_query.result.confidence}")

# Poll the backend service for a confident answer
image_query = gl.wait_for_confident_result(
    image_query=image_query,
    confidence_threshold=0.9,
    timeout_sec=60.0,
)

# Examine the new confidence after a continuously trained ML model
# has re-evaluated the image query
print(f"Image query confidence = {image_query.result.confidence}")
```
- Parameters:
endpoint (str | None) –
api_token (str | None) –
disable_tls_verification (bool | None) –
- __init__(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None) None
Constructs a Groundlight client.
- Parameters:
endpoint (str | None) – optionally specify a different endpoint
api_token (str | None) – use this API token for your API calls. If unset, fallback to the environment variable “GROUNDLIGHT_API_TOKEN”.
disable_tls_verification (bool | None) –
Set this to True to skip verifying SSL/TLS certificates when calling the API over HTTPS. If unset, the client falls back to checking the environment variable "DISABLE_TLS_VERIFY" for a value of 1 or 0. By default, certificates are verified.
Only disable verification when using a self-signed TLS certificate with a Groundlight Edge Endpoint. Disabling verification when connecting directly to the Groundlight cloud service is not advised.
- Returns:
Groundlight client
- Return type:
None
- add_label(image_query: ImageQuery | str, label: Label | str, rois: List[ROI] | str | None = None)
Add a new label to an image query. This answers the detector’s question.
- Parameters:
image_query (ImageQuery | str) – Either an ImageQuery object (returned from ask_ml or similar method) or an image_query id as a string.
label (Label | str) – The string “YES” or the string “NO” in answer to the query.
rois (List[ROI] | str | None) – An optional list of regions of interest (ROIs) to associate with the label. (This feature is experimental.)
- Returns:
None
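A minimal sketch of labeling an image query to train a detector (the detector name and image path here are illustrative placeholders):

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="door", query="Is the door locked?")
image_query = gl.ask_ml(detector=detector, image="path/to/image.jpeg")

# Provide the ground-truth answer; either the ImageQuery object
# or its id string identifies the query being labeled
gl.add_label(image_query, "YES")
gl.add_label(image_query.id, "YES")
```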
- ask_async(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, patience_time: float | None = None, confidence_threshold: float | None = None, human_review: str | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Convenience method for submitting an ImageQuery asynchronously. This is equivalent to calling submit_image_query with want_async=True and wait=0. Use get_image_query to retrieve the result of the ImageQuery.
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
filename (string) of a jpeg file
byte array or BytesIO or BufferedReader with jpeg bytes
numpy array with values 0-255 and dimensions (H,W,3) in BGR order (Note OpenCV uses BGR not RGB. img[:, :, ::-1] will reverse the channels)
PIL Image: Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to service.
patience_time (float | None) – How long to wait (in seconds) for a confident answer for this image query. The longer the patience_time, the more likely Groundlight will arrive at a confident answer. Within patience_time, Groundlight will update ML predictions based on stronger findings, and, additionally, Groundlight will prioritize human review of the image query if necessary. This is a soft server-side timeout. If not set, use the detector’s patience_time.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
human_review (str | None) – If None or DEFAULT, send the image query for human review only if the ML prediction is not confident. If set to ALWAYS, always send the image query for human review. If set to NEVER, never send the image query for human review.
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
- Returns:
ImageQuery
- Return type:
ImageQuery
Example usage:
```python
gl = Groundlight()
detector = gl.get_or_create_detector(
    name="door",
    query="Is the door locked?",
    confidence_threshold=0.9,
)
image_query = gl.ask_async(detector=detector, image="path/to/image.jpeg")

# The image_query will have an id for later retrieval
assert image_query.id is not None

# Do not attempt to access the result of this query yet, as the result for all
# async queries is None; your result is being computed asynchronously and will
# be available later
assert image_query.result is None

# Retrieve the result later, or on another machine, by calling
# gl.wait_for_confident_result() with the id of the image_query above.
# This will block until the result is available.
image_query = gl.wait_for_confident_result(image_query.id)

# Now the result is available for your use
assert image_query.result is not None

# Alternatively, you can check whether the result is available (without
# blocking) by calling gl.get_image_query() with the id of the image_query above
image_query = gl.get_image_query(image_query.id)
```
- ask_confident(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, confidence_threshold: float | None = None, wait: float | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Evaluates an image with Groundlight waiting until an answer above the confidence threshold of the detector is reached or the wait period has passed.
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
filename (string) of a jpeg file
byte array or BytesIO or BufferedReader with jpeg bytes
numpy array with values 0-255 and dimensions (H,W,3) in BGR order (Note OpenCV uses BGR not RGB. img[:, :, ::-1] will reverse the channels)
PIL Image: Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to service.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
wait (float | None) – How long to wait (in seconds) for a confident answer.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
- Returns:
ImageQuery
- Return type:
ImageQuery
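A hedged sketch of ask_confident, assuming a detector named "door" and a local JPEG path (both placeholders):

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(
    name="door", query="Is the door locked?", confidence_threshold=0.9
)

# Blocks for up to 30 seconds waiting for an answer at or above the
# detector's confidence threshold
image_query = gl.ask_confident(detector=detector, image="path/to/image.jpeg", wait=30.0)
print(f"{image_query.result.label} (confidence {image_query.result.confidence})")
```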
- ask_ml(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, wait: float | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Evaluates an image with Groundlight, getting the first answer Groundlight can provide.
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
filename (string) of a jpeg file
byte array or BytesIO or BufferedReader with jpeg bytes
numpy array with values 0-255 and dimensions (H,W,3) in BGR order (Note OpenCV uses BGR not RGB. img[:, :, ::-1] will reverse the channels)
PIL Image: Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to service.
wait (float | None) – How long to wait (in seconds) for any answer.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
- Returns:
ImageQuery
- Return type:
ImageQuery
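A short sketch of ask_ml, which trades confidence for speed by returning the first available prediction (detector name and image path are placeholders):

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="door", query="Is the door locked?")

# Returns as soon as any ML prediction is available, regardless of confidence
image_query = gl.ask_ml(detector=detector, image="path/to/image.jpeg")
print(f"Label: {image_query.result.label}, confidence: {image_query.result.confidence}")
```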
- create_detector(name: str, query: str, *, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Create a new detector with a given name and query
- Parameters:
name (str) – the detector name
query (str) – the detector query
group_name (str | None) – the detector group that the new detector should belong to. If none, defaults to default_group
confidence_threshold (float | None) – the confidence threshold
patience_time (float | None) – the patience time, or how long Groundlight should work to generate a confident answer. Defaults to 30 seconds.
pipeline_config (str | None) – the pipeline config
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the detector (limited to 1KB). You can retrieve this metadata later by calling get_detector().
- Returns:
Detector
- Return type:
Detector
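A minimal sketch of creating a detector with non-default settings; the name, query, and metadata values are illustrative only:

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.create_detector(
    name="garage-door",
    query="Is the garage door open?",
    confidence_threshold=0.75,
    patience_time=60.0,
    metadata={"location": "home"},  # retrievable later via get_detector()
)
print(detector.id)
```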
- get_detector_by_name(name: str) Detector
Get a detector by name
- Parameters:
name (str) – the detector name
- Returns:
Detector
- Return type:
Detector
- get_image_query(id: str) ImageQuery
Get an image query by id
- Parameters:
id (str) – the image query id
- Returns:
ImageQuery
- Return type:
ImageQuery
- get_or_create_detector(name: str, query: str, *, group_name: str | None = None, confidence_threshold: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Tries to look up the detector by name. If a detector with that name, query, and confidence exists, return it. Otherwise, create a detector with the specified query and config.
- Parameters:
name (str) – the detector name
query (str) – the detector query
group_name (str | None) – the detector group that the new detector should belong to
confidence_threshold (float | None) – the confidence threshold
pipeline_config (str | None) – the pipeline config
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the detector (limited to 1KB). You can retrieve this metadata later by calling get_detector().
- Returns:
Detector
- Return type:
Detector
- list_detectors(page: int = 1, page_size: int = 10) PaginatedDetectorList
List out detectors you own
- Parameters:
page (int) – the page number
page_size (int) – the page size
- Returns:
PaginatedDetectorList
- Return type:
PaginatedDetectorList
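A sketch of walking through all pages of detectors. This relies on the `next` field of PaginatedDetectorList (shown in its JSON schema below) being None on the last page:

```python
from groundlight import Groundlight

gl = Groundlight()
page = 1
while True:
    detector_list = gl.list_detectors(page=page, page_size=25)
    for detector in detector_list.results:
        print(f"{detector.id}: {detector.name}")
    if detector_list.next is None:  # no more pages
        break
    page += 1
```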
- list_image_queries(page: int = 1, page_size: int = 10) PaginatedImageQueryList
List out image queries you own
- Parameters:
page (int) – the page number
page_size (int) – the page size
- Returns:
PaginatedImageQueryList
- Return type:
PaginatedImageQueryList
- start_inspection() str
NOTE: For users with Inspection Reports enabled only. Starts an inspection report and returns the id of the inspection.
- Returns:
The unique identifier of the inspection.
- Return type:
str
- stop_inspection(inspection_id: str) str
NOTE: For users with Inspection Reports enabled only. Stops an inspection and raises an exception if the response from the server indicates that the inspection was not successfully stopped.
- Parameters:
inspection_id (str) – The unique identifier of the inspection.
- Returns:
“PASS” or “FAIL” depending on the result of the inspection.
- Return type:
str
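For accounts with Inspection Reports enabled, the inspection methods compose into a workflow like the following sketch; the detector name, metadata key/value, and image path are placeholders:

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="weld-check", query="Is the weld complete?")

inspection_id = gl.start_inspection()
gl.update_inspection_metadata(inspection_id, "part_number", "PN-1234")
image_query = gl.submit_image_query(
    detector=detector, image="path/to/image.jpeg", inspection_id=inspection_id
)
result = gl.stop_inspection(inspection_id)  # "PASS" or "FAIL"
```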
- submit_image_query(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, wait: float | None = None, patience_time: float | None = None, confidence_threshold: float | None = None, human_review: str | None = None, want_async: bool = False, inspection_id: str | None = None, metadata: dict | str | None = None) ImageQuery
Evaluates an image with Groundlight.
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
filename (string) of a jpeg file
byte array or BytesIO or BufferedReader with jpeg bytes
numpy array with values 0-255 and dimensions (H,W,3) in BGR order (Note OpenCV uses BGR not RGB. img[:, :, ::-1] will reverse the channels)
PIL Image: Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to service.
wait (float | None) – How long to poll (in seconds) for a confident answer. This is a client-side timeout.
patience_time (float | None) – How long to wait (in seconds) for a confident answer for this image query. The longer the patience_time, the more likely Groundlight will arrive at a confident answer. Within patience_time, Groundlight will update ML predictions based on stronger findings, and, additionally, Groundlight will prioritize human review of the image query if necessary. This is a soft server-side timeout. If not set, use the detector’s patience_time.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
human_review (str | None) – If None or DEFAULT, send the image query for human review only if the ML prediction is not confident. If set to ALWAYS, always send the image query for human review. If set to NEVER, never send the image query for human review.
want_async (bool) – If True, the client will return as soon as the image query is submitted and will not wait for an ML/human prediction. The returned ImageQuery will have a result of None. Must set wait to 0 to use want_async.
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
- Returns:
ImageQuery
- Return type:
ImageQuery
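A sketch contrasting the synchronous and asynchronous submission modes described above (detector name and image path are placeholders):

```python
from groundlight import Groundlight

gl = Groundlight()
detector = gl.get_or_create_detector(name="door", query="Is the door locked?")

# Synchronous: poll client-side for up to 30 seconds for a confident answer
iq = gl.submit_image_query(detector=detector, image="path/to/image.jpeg", wait=30.0)

# Asynchronous: wait must be 0 when want_async=True; the returned result is None
iq = gl.submit_image_query(
    detector=detector, image="path/to/image.jpeg", wait=0, want_async=True
)
iq = gl.get_image_query(iq.id)  # check back later for the result
```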
- update_detector_confidence_threshold(detector_id: str, confidence_threshold: float) None
Updates the confidence threshold of a detector given a detector_id.
- Parameters:
detector_id (str) – The id of the detector to update.
confidence_threshold (float) – The new confidence threshold for the detector.
- Return type:
None
- update_inspection_metadata(inspection_id: str, user_provided_key: str, user_provided_value: str) None
NOTE: For users with Inspection Reports enabled only. Add/update inspection metadata with the user_provided_key and user_provided_value.
- Parameters:
inspection_id (str) – The unique identifier of the inspection.
user_provided_key (str) – the key in the key/value pair for the inspection metadata.
user_provided_value (str) – the value in the key/value pair for the inspection metadata.
- Returns:
None
- Return type:
None
- wait_for_confident_result(image_query: ImageQuery | str, confidence_threshold: float | None = None, timeout_sec: float = 30.0) ImageQuery
Waits for an image query result’s confidence level to reach the specified value. Currently this is done by polling with an exponential back-off.
- Parameters:
image_query (ImageQuery | str) – An ImageQuery object to poll
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
timeout_sec (float) – The maximum number of seconds to wait.
- Returns:
ImageQuery
- Return type:
ImageQuery
- wait_for_ml_result(image_query: ImageQuery | str, timeout_sec: float = 30.0) ImageQuery
Waits for the first ML result to be returned. Currently this is done by polling with an exponential back-off.
- Parameters:
image_query (ImageQuery | str) – An ImageQuery object to poll
timeout_sec (float) – The maximum number of seconds to wait.
- Returns:
ImageQuery
- Return type:
ImageQuery
- whoami() str
Return the username associated with the API token.
- Returns:
str
- Return type:
str
- class groundlight.ExperimentalApi(endpoint: str | None = None, api_token: str | None = None)
- Parameters:
endpoint (str | None) –
api_token (str | None) –
- __init__(endpoint: str | None = None, api_token: str | None = None)
Constructs an experimental groundlight client. The experimental client inherits all the functionality of the base groundlight client, but also includes additional functionality that is still in development. Experimental functionality is subject to change.
- Parameters:
endpoint (str | None) –
api_token (str | None) –
- add_label(image_query: ImageQuery | str, label: Label | str, rois: List[ROI] | str | None = None)
Experimental version of add_label. Add a new label to an image query. This answers the detector’s question.
- Parameters:
image_query (ImageQuery | str) – Either an ImageQuery object (returned from submit_image_query) or an image_query id as a string.
label (Label | str) – The string “YES” or the string “NO” in answer to the query.
rois (List[ROI] | str | None) – An optional list of regions of interest (ROIs) to associate with the label. (This feature is experimental.)
- Returns:
None
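A sketch combining create_roi (documented below) with the experimental add_label; the image query id and ROI label are hypothetical, and the corner coordinates are assumed to be fractions of the image dimensions:

```python
from groundlight import ExperimentalApi

gl = ExperimentalApi()
iq = gl.get_image_query("iq_...")  # hypothetical image query id

# (left, top) and (right, bottom) corners of the region of interest
roi = gl.create_roi("door_handle", (0.1, 0.2), (0.4, 0.5))
gl.add_label(iq, "YES", rois=[roi])
```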
- create_counting_detector(name: str, query: str, *, max_count: int | None = None, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Creates a counting detector with the given name and query
- Parameters:
name (str) – the detector name
query (str) – the detector query
max_count (int | None) – the maximum count the detector can report; counts above this are flagged as greater than the maximum
group_name (str | None) – the detector group that the new detector should belong to
confidence_threshold (float | None) – the confidence threshold
patience_time (float | None) – the patience time, or how long Groundlight should work to generate a confident answer
pipeline_config (str | None) – the pipeline config
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the detector (limited to 1KB)
- Return type:
Detector
- create_detector_group(name: str) DetectorGroup
Creates a detector group with the given name.
Note: you can specify a detector group when creating a detector without needing to create the group ahead of time.
- Parameters:
name (str) – the name of the detector group
- Returns:
a DetectorGroup object corresponding to the new detector group
- Return type:
DetectorGroup
- create_multiclass_detector(name: str, query: str, class_names: List[str], *, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Creates a multiclass detector with the given name and query
- Parameters:
name (str) – the detector name
query (str) – the detector query
class_names (List[str]) – the list of class names the detector can answer with
group_name (str | None) – the detector group that the new detector should belong to
confidence_threshold (float | None) – the confidence threshold
patience_time (float | None) – the patience time, or how long Groundlight should work to generate a confident answer
pipeline_config (str | None) – the pipeline config
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the detector (limited to 1KB)
- Return type:
Detector
- create_note(detector: str | Detector, note: str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule | None = None) None
Adds a note to a given detector
- Parameters:
detector (str | Detector) – the detector to add the note to
note (str) – the text content of the note
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule | None) – an optional image to attach to the note
- Return type:
None
- create_roi(label: str, top_left: Tuple[float, float], bottom_right: Tuple[float, float]) ROI
Creates a region of interest (ROI) object that can be attached to a label.
NOTE: This feature is only available to Pro tier and higher. If you would like to learn more, reach out to us at https://groundlight.ai
- Parameters:
label (str) – the label of the item in the roi
top_left (Tuple[float, float]) – the top left corner of the roi
bottom_right (Tuple[float, float]) – the bottom right corner of the roi
- Return type:
ROI
- create_rule(detector: str | Detector, rule_name: str, channel: str | ChannelEnum, recipient: str, *, alert_on: str | VerbEnum = 'CHANGED_TO', enabled: bool = True, include_image: bool = False, condition_parameters: dict | str | None = None, snooze_time_enabled: bool = False, snooze_time_value: int = 3600, snooze_time_unit: str = 'SECONDS', human_review_required: bool = False) Rule
Adds a notification rule to the given detector
- Parameters:
detector (str | Detector) – the detector to add the action to
rule_name (str) – a name to uniquely identify the rule
channel (str | ChannelEnum) – what channel to send the notification over. Currently EMAIL or TEXT
recipient (str) – the address or number to send the notification to
alert_on (str | VerbEnum) – what to alert on. One of ANSWERED_CONSECUTIVELY, ANSWERED_WITHIN_TIME, CHANGED_TO, NO_CHANGE, NO_QUERIES
enabled (bool) – whether the rule is enabled initially
include_image (bool) – whether to include the image in the notification
condition_parameters (dict | str | None) – additional information needed for the condition. i.e. if the condition is ANSWERED_CONSECUTIVELY, we specify num_consecutive_labels and label here
snooze_time_enabled (bool) – Whether notifications will be snoozed; no repeat notification will be delivered until the snooze time has passed
snooze_time_value (int) – The value of the snooze time
snooze_time_unit (str) – The unit of the snooze time
human_review_required (bool) – If True, a cloud labeler will review and confirm alerts before they are sent
- Returns:
a Rule object corresponding to the new rule
- Return type:
Rule
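A sketch of creating a notification rule that emails when the answer changes to NO. The recipient address is a placeholder, and passing the target label via condition_parameters is an assumption based on the parameter description above:

```python
from groundlight import ExperimentalApi

gl = ExperimentalApi()
detector = gl.get_or_create_detector(name="door", query="Is the door locked?")

rule = gl.create_rule(
    detector=detector,
    rule_name="door-unlocked-alert",
    channel="EMAIL",
    recipient="ops@example.com",  # placeholder address
    alert_on="CHANGED_TO",
    condition_parameters={"label": "NO"},  # assumed shape for CHANGED_TO
    include_image=True,
)
```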
- delete_all_rules(detector: None | str | Detector = None) int
Deletes all rules associated with the given detector
- Parameters:
detector (None | str | Detector) – the detector to delete the rules from
- Returns:
the number of rules deleted
- Return type:
int
- delete_rule(action_id: int) None
Deletes the action with the given id
- Parameters:
action_id (int) – the id of the action to delete
- Return type:
None
- get_image(iq_id: str) bytes
Get the image associated with the given image query ID. If you have PIL installed, you can load it as a PIL image with PIL.Image.open(io.BytesIO(gl.get_image(iq.id))).
- Parameters:
iq_id (str) – the ID of the image query to fetch the image from
- Returns:
the image as a byte array
- Return type:
bytes
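A sketch of fetching image bytes and loading them with Pillow; the image query id is a hypothetical placeholder:

```python
from io import BytesIO

from PIL import Image
from groundlight import ExperimentalApi

gl = ExperimentalApi()
image_bytes = gl.get_image("iq_...")  # hypothetical image query id

# Image.open needs a file-like object, so wrap the raw bytes in BytesIO
img = Image.open(BytesIO(image_bytes))
print(img.size)
```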
- get_notes(detector: str | Detector) Dict[str, Any]
Gets the notes for a given detector
- Parameters:
detector (str | Detector) – the detector to get the notes for
- Returns:
a dictionary with two keys “CUSTOMER” and “GL” to indicate who added the note to the detector, and values that are lists of notes
- Return type:
Dict[str, Any]
- get_rule(action_id: int) Rule
Gets the action with the given id
- Parameters:
action_id (int) – the id of the action to get
- Returns:
the action with the given id
- Return type:
Rule
- list_detector_groups() List[DetectorGroup]
Gets a list of all detector groups
- Returns:
a list of all detector groups
- Return type:
List[DetectorGroup]
- list_rules(page=1, page_size=10) PaginatedRuleList
Gets a list of all rules
- Returns:
a list of all rules
- Return type:
PaginatedRuleList
API Response Objects
- pydantic model model.Detector
Spec for serializing a detector object in the public API.
Show JSON schema
{ "title": "Detector", "description": "Spec for serializing a detector object in the public API.", "type": "object", "properties": { "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/DetectorTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When this detector was created.", "format": "date-time", "title": "Created At", "type": "string" }, "name": { "description": "A short, descriptive name for the detector.", "maxLength": 200, "title": "Name", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "group_name": { "description": "Which group should this detector be part of?", "title": "Group Name", "type": "string" }, "confidence_threshold": { "default": 0.9, "description": "If the detector's prediction is below this confidence threshold, send the image query for human review.", "maximum": 1.0, "minimum": 0.0, "title": "Confidence Threshold", "type": "number" }, "patience_time": { "default": 30.0, "description": "How long Groundlight will attempt to generate a confident prediction", "maximum": 3600.0, "minimum": 0.0, "title": "Patience Time", "type": "number" }, "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the detector.", "title": "Metadata" }, "mode": { "$ref": "#/$defs/ModeEnum" }, "mode_configuration": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "title": "Mode Configuration" }, "status": { "anyOf": [ { "$ref": "#/$defs/StatusEnum" }, { "$ref": "#/$defs/BlankEnum" }, { "type": "null" } ], "default": null, "title": "Status" }, "escalation_type": { "anyOf": [ { "$ref": "#/$defs/EscalationTypeEnum" }, { "type": "null" } ], "default": null, "description": "Category that define internal proccess for labeling image queries\n\n* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING" } }, "$defs": { "BlankEnum": { "const": "", 
"enum": [ "" ], "title": "BlankEnum", "type": "string" }, "DetectorTypeEnum": { "const": "detector", "enum": [ "detector" ], "title": "DetectorTypeEnum", "type": "string" }, "EscalationTypeEnum": { "description": "* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING", "enum": [ "STANDARD", "NO_HUMAN_LABELING" ], "title": "EscalationTypeEnum", "type": "string" }, "ModeEnum": { "enum": [ "BINARY", "COUNT", "MULTI_CLASS" ], "title": "ModeEnum", "type": "string" }, "StatusEnum": { "description": "* `ON` - ON\n* `OFF` - OFF", "enum": [ "ON", "OFF" ], "title": "StatusEnum", "type": "string" } }, "required": [ "id", "type", "created_at", "name", "query", "group_name", "metadata", "mode", "mode_configuration" ] }
- Fields:
- field confidence_threshold: confloat(ge=0.0, le=1.0) = 0.9
If the detector’s prediction is below this confidence threshold, send the image query for human review.
- Constraints:
ge = 0.0
le = 1.0
- field created_at: datetime [Required]
When this detector was created.
- field escalation_type: EscalationTypeEnum | None = None
Category that defines the internal process for labeling image queries
STANDARD - STANDARD
NO_HUMAN_LABELING - NO_HUMAN_LABELING
- field group_name: str [Required]
Which group should this detector be part of?
- field id: str [Required]
A unique ID for this object.
- field metadata: Dict[str, Any] | None [Required]
Metadata about the detector.
- field mode: ModeEnum [Required]
- field mode_configuration: Dict[str, Any] | None [Required]
- field name: constr(max_length=200) [Required]
A short, descriptive name for the detector.
- Constraints:
max_length = 200
- field patience_time: confloat(ge=0.0, le=3600.0) = 30.0
How long Groundlight will attempt to generate a confident prediction
- Constraints:
ge = 0.0
le = 3600.0
- field query: str [Required]
A question about the image.
- field status: StatusEnum | BlankEnum | None = None
- field type: DetectorTypeEnum [Required]
The type of this object.
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
- pydantic model model.ImageQuery
Spec for serializing an image-query object in the public API.
Show JSON schema
{ "title": "ImageQuery", "description": "Spec for serializing a image-query object in the public API.", "type": "object", "properties": { "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the image query.", "title": "Metadata" }, "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/ImageQueryTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When was this detector created?", "format": "date-time", "title": "Created At", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "detector_id": { "description": "Which detector was used on this image query?", "title": "Detector Id", "type": "string" }, "result_type": { "$ref": "#/$defs/ResultTypeEnum", "description": "What type of result are we returning?" }, "result": { "anyOf": [ { "$ref": "#/$defs/BinaryClassificationResult" }, { "$ref": "#/$defs/CountingResult" }, { "$ref": "#/$defs/MultiClassificationResult" }, { "type": "null" } ], "title": "Result" }, "patience_time": { "description": "How long to wait for a confident response.", "title": "Patience Time", "type": "number" }, "confidence_threshold": { "description": "Min confidence needed to accept the response of the image query.", "title": "Confidence Threshold", "type": "number" }, "rois": { "anyOf": [ { "items": { "$ref": "#/$defs/ROI" }, "type": "array" }, { "type": "null" } ], "description": "An array of regions of interest (bounding boxes) collected on image", "title": "Rois" }, "text": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "A text field on image query.", "title": "Text" } }, "$defs": { "BBoxGeometry": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "left": { "title": "Left", "type": "number" }, "top": { "title": "Top", "type": "number" }, "right": { "title": 
"Right", "type": "number" }, "bottom": { "title": "Bottom", "type": "number" }, "x": { "title": "X", "type": "number" }, "y": { "title": "Y", "type": "number" } }, "required": [ "left", "top", "right", "bottom", "x", "y" ], "title": "BBoxGeometry", "type": "object" }, "BinaryClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "label": { "$ref": "#/$defs/Label" } }, "required": [ "label" ], "title": "BinaryClassificationResult", "type": "object" }, "CountingResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "count": { "title": "Count", "type": "integer" }, "greater_than_max": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Greater Than Max" } }, "required": [ "count" ], "title": "CountingResult", "type": "object" }, "ImageQueryTypeEnum": { "const": "image_query", "enum": [ "image_query" ], "title": "ImageQueryTypeEnum", "type": "string" }, "Label": { "enum": [ "YES", "NO", "UNCLEAR" ], "title": "Label", "type": "string" }, "MultiClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "label": { "title": "Label", "type": "string" } }, "required": [ "label" ], "title": "MultiClassificationResult", "type": 
"object" }, "ROI": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "label": { "description": "The label of the bounding box.", "title": "Label", "type": "string" }, "score": { "description": "The confidence of the bounding box.", "title": "Score", "type": "number" }, "geometry": { "$ref": "#/$defs/BBoxGeometry" } }, "required": [ "label", "score", "geometry" ], "title": "ROI", "type": "object" }, "ResultTypeEnum": { "enum": [ "binary_classification", "counting", "multi_classification" ], "title": "ResultTypeEnum", "type": "string" }, "Source": { "description": "Source is optional to support edge v0.2", "enum": [ "STILL_PROCESSING", "CLOUD", "USER", "CLOUD_ENSEMBLE", "ALGORITHM" ], "title": "Source", "type": "string" } }, "required": [ "metadata", "id", "type", "created_at", "query", "detector_id", "result_type", "result", "patience_time", "confidence_threshold", "rois", "text" ] }
- Fields:
- field confidence_threshold: float [Required]
Min confidence needed to accept the response of the image query.
- field created_at: datetime [Required]
When was this image query created?
- field detector_id: str [Required]
Which detector was used on this image query?
- field id: str [Required]
A unique ID for this object.
- field metadata: Dict[str, Any] | None [Required]
Metadata about the image query.
- field patience_time: float [Required]
How long to wait for a confident response.
- field query: str [Required]
A question about the image.
- field result: BinaryClassificationResult | CountingResult | MultiClassificationResult | None [Required]
- field result_type: ResultTypeEnum [Required]
What type of result are we returning?
- field rois: List[ROI] | None [Required]
An array of regions of interest (bounding boxes) collected on the image.
- field text: str | None [Required]
A text field on the image query.
- field type: ImageQueryTypeEnum [Required]
The type of this object.
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
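The `result` field is a union of three result types, discriminated by `result_type`. A minimal sketch of dispatching on it (plain dicts stand in for the pydantic models here; the helper name and sample payload are illustrative, not part of the SDK):

```python
def summarize_result(image_query: dict) -> str:
    """Render a one-line summary of an ImageQuery payload, dispatching on result_type."""
    result = image_query.get("result")
    if result is None:
        return "no result yet"
    result_type = image_query["result_type"]
    confidence = result.get("confidence")  # may be None while still processing
    if result_type == "binary_classification":
        return f"label={result['label']} confidence={confidence}"
    if result_type == "counting":
        # greater_than_max signals the count hit the detector's upper bound
        suffix = "+" if result.get("greater_than_max") else ""
        return f"count={result['count']}{suffix} confidence={confidence}"
    if result_type == "multi_classification":
        return f"label={result['label']} confidence={confidence}"
    raise ValueError(f"unknown result_type: {result_type}")

iq = {
    "result_type": "counting",
    "result": {"count": 4, "greater_than_max": False, "confidence": 0.87},
}
print(summarize_result(iq))  # count=4 confidence=0.87
```

Checking `result is None` first matters: an image query that is still processing carries no result at all, and its confidence may also be `None` even once a preliminary result exists.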
- pydantic model model.PaginatedDetectorList
Show JSON schema
{ "title": "PaginatedDetectorList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/Detector" }, "title": "Results", "type": "array" } }, "$defs": { "BlankEnum": { "const": "", "enum": [ "" ], "title": "BlankEnum", "type": "string" }, "Detector": { "description": "Spec for serializing a detector object in the public API.", "properties": { "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/DetectorTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When this detector was created.", "format": "date-time", "title": "Created At", "type": "string" }, "name": { "description": "A short, descriptive name for the detector.", "maxLength": 200, "title": "Name", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "group_name": { "description": "Which group should this detector be part of?", "title": "Group Name", "type": "string" }, "confidence_threshold": { "default": 0.9, "description": "If the detector's prediction is below this confidence threshold, send the image query for human review.", "maximum": 1.0, "minimum": 0.0, "title": "Confidence Threshold", "type": "number" }, "patience_time": { "default": 30.0, "description": "How long Groundlight will attempt to generate a confident prediction", "maximum": 3600.0, "minimum": 0.0, "title": "Patience Time", "type": "number" }, "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], 
"description": "Metadata about the detector.", "title": "Metadata" }, "mode": { "$ref": "#/$defs/ModeEnum" }, "mode_configuration": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "title": "Mode Configuration" }, "status": { "anyOf": [ { "$ref": "#/$defs/StatusEnum" }, { "$ref": "#/$defs/BlankEnum" }, { "type": "null" } ], "default": null, "title": "Status" }, "escalation_type": { "anyOf": [ { "$ref": "#/$defs/EscalationTypeEnum" }, { "type": "null" } ], "default": null, "description": "Category that define internal proccess for labeling image queries\n\n* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING" } }, "required": [ "id", "type", "created_at", "name", "query", "group_name", "metadata", "mode", "mode_configuration" ], "title": "Detector", "type": "object" }, "DetectorTypeEnum": { "const": "detector", "enum": [ "detector" ], "title": "DetectorTypeEnum", "type": "string" }, "EscalationTypeEnum": { "description": "* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING", "enum": [ "STANDARD", "NO_HUMAN_LABELING" ], "title": "EscalationTypeEnum", "type": "string" }, "ModeEnum": { "enum": [ "BINARY", "COUNT", "MULTI_CLASS" ], "title": "ModeEnum", "type": "string" }, "StatusEnum": { "description": "* `ON` - ON\n* `OFF` - OFF", "enum": [ "ON", "OFF" ], "title": "StatusEnum", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[Detector] [Required]
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
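Paginated lists carry `next`/`previous` URLs alongside `results`, so exhausting a listing means following `next` until it is `None`. A sketch of that loop, with a hypothetical `fetch_page` callable and stub pages standing in for the API:

```python
from typing import Callable, Iterator, Optional

def iter_pages(first_url: str, fetch_page: Callable[[str], dict]) -> Iterator[dict]:
    """Yield every item across a paginated list by following `next` links.

    `fetch_page` returns a PaginatedDetectorList-shaped dict:
    {"count": ..., "next": url-or-None, "previous": ..., "results": [...]}.
    """
    url: Optional[str] = first_url
    while url is not None:
        page = fetch_page(url)
        yield from page["results"]
        url = page["next"]

# Stub backend with two pages, standing in for the Groundlight service.
pages = {
    "/v1/detectors?page=1": {"count": 3, "next": "/v1/detectors?page=2",
                             "previous": None, "results": [{"id": "det_1"}, {"id": "det_2"}]},
    "/v1/detectors?page=2": {"count": 3, "next": None,
                             "previous": "/v1/detectors?page=1", "results": [{"id": "det_3"}]},
}
detectors = list(iter_pages("/v1/detectors?page=1", pages.__getitem__))
print([d["id"] for d in detectors])  # ['det_1', 'det_2', 'det_3']
```

The same loop works unchanged for PaginatedImageQueryList and PaginatedRuleList, since all three share the `count`/`next`/`previous`/`results` shape.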
- pydantic model model.PaginatedImageQueryList
Show JSON schema
{ "title": "PaginatedImageQueryList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/ImageQuery" }, "title": "Results", "type": "array" } }, "$defs": { "BBoxGeometry": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "left": { "title": "Left", "type": "number" }, "top": { "title": "Top", "type": "number" }, "right": { "title": "Right", "type": "number" }, "bottom": { "title": "Bottom", "type": "number" }, "x": { "title": "X", "type": "number" }, "y": { "title": "Y", "type": "number" } }, "required": [ "left", "top", "right", "bottom", "x", "y" ], "title": "BBoxGeometry", "type": "object" }, "BinaryClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "label": { "$ref": "#/$defs/Label" } }, "required": [ "label" ], "title": "BinaryClassificationResult", "type": "object" }, "CountingResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "count": { "title": "Count", "type": "integer" }, 
"greater_than_max": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Greater Than Max" } }, "required": [ "count" ], "title": "CountingResult", "type": "object" }, "ImageQuery": { "description": "Spec for serializing a image-query object in the public API.", "properties": { "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the image query.", "title": "Metadata" }, "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/ImageQueryTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When was this detector created?", "format": "date-time", "title": "Created At", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "detector_id": { "description": "Which detector was used on this image query?", "title": "Detector Id", "type": "string" }, "result_type": { "$ref": "#/$defs/ResultTypeEnum", "description": "What type of result are we returning?" 
}, "result": { "anyOf": [ { "$ref": "#/$defs/BinaryClassificationResult" }, { "$ref": "#/$defs/CountingResult" }, { "$ref": "#/$defs/MultiClassificationResult" }, { "type": "null" } ], "title": "Result" }, "patience_time": { "description": "How long to wait for a confident response.", "title": "Patience Time", "type": "number" }, "confidence_threshold": { "description": "Min confidence needed to accept the response of the image query.", "title": "Confidence Threshold", "type": "number" }, "rois": { "anyOf": [ { "items": { "$ref": "#/$defs/ROI" }, "type": "array" }, { "type": "null" } ], "description": "An array of regions of interest (bounding boxes) collected on image", "title": "Rois" }, "text": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "A text field on image query.", "title": "Text" } }, "required": [ "metadata", "id", "type", "created_at", "query", "detector_id", "result_type", "result", "patience_time", "confidence_threshold", "rois", "text" ], "title": "ImageQuery", "type": "object" }, "ImageQueryTypeEnum": { "const": "image_query", "enum": [ "image_query" ], "title": "ImageQueryTypeEnum", "type": "string" }, "Label": { "enum": [ "YES", "NO", "UNCLEAR" ], "title": "Label", "type": "string" }, "MultiClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null, "description": "Source is optional to support edge v0.2" }, "label": { "title": "Label", "type": "string" } }, "required": [ "label" ], "title": "MultiClassificationResult", "type": "object" }, "ROI": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "label": { "description": "The label of the bounding box.", "title": "Label", "type": "string" }, "score": { "description": "The confidence of the bounding box.", 
"title": "Score", "type": "number" }, "geometry": { "$ref": "#/$defs/BBoxGeometry" } }, "required": [ "label", "score", "geometry" ], "title": "ROI", "type": "object" }, "ResultTypeEnum": { "enum": [ "binary_classification", "counting", "multi_classification" ], "title": "ResultTypeEnum", "type": "string" }, "Source": { "description": "Source is optional to support edge v0.2", "enum": [ "STILL_PROCESSING", "CLOUD", "USER", "CLOUD_ENSEMBLE", "ALGORITHM" ], "title": "Source", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[ImageQuery] [Required]
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
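Each image query in `results` carries its own `confidence_threshold`, so a page can be filtered down to the entries whose result has met it. A small sketch (the function name and sample page are illustrative; plain dicts stand in for the pydantic models):

```python
def confident_queries(page: dict) -> list:
    """Keep image queries whose result confidence meets the query's own threshold.

    `page` is a PaginatedImageQueryList-shaped dict; a None result or None
    confidence (still processing) is treated as not yet confident.
    """
    keep = []
    for iq in page["results"]:
        result = iq.get("result")
        if result is None or result.get("confidence") is None:
            continue
        if result["confidence"] >= iq["confidence_threshold"]:
            keep.append(iq)
    return keep

page = {"count": 2, "next": None, "previous": None, "results": [
    {"id": "iq_1", "confidence_threshold": 0.9,
     "result": {"label": "YES", "confidence": 0.97}},
    {"id": "iq_2", "confidence_threshold": 0.9,
     "result": {"label": "NO", "confidence": 0.62}},
]}
print([iq["id"] for iq in confident_queries(page)])  # ['iq_1']
```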
- pydantic model model.Rule
Show JSON schema
{ "title": "Rule", "type": "object", "properties": { "id": { "title": "Id", "type": "integer" }, "detector_id": { "title": "Detector Id", "type": "string" }, "detector_name": { "title": "Detector Name", "type": "string" }, "name": { "maxLength": 44, "title": "Name", "type": "string" }, "enabled": { "default": true, "title": "Enabled", "type": "boolean" }, "snooze_time_enabled": { "default": false, "title": "Snooze Time Enabled", "type": "boolean" }, "snooze_time_value": { "default": 0, "minimum": 0, "title": "Snooze Time Value", "type": "integer" }, "snooze_time_unit": { "$ref": "#/$defs/SnoozeTimeUnitEnum", "default": "DAYS" }, "human_review_required": { "default": false, "title": "Human Review Required", "type": "boolean" }, "condition": { "$ref": "#/$defs/Condition" }, "action": { "anyOf": [ { "$ref": "#/$defs/Action" }, { "$ref": "#/$defs/ActionList" } ], "title": "Action" } }, "$defs": { "Action": { "properties": { "channel": { "$ref": "#/$defs/ChannelEnum" }, "recipient": { "title": "Recipient", "type": "string" }, "include_image": { "title": "Include Image", "type": "boolean" } }, "required": [ "channel", "recipient", "include_image" ], "title": "Action", "type": "object" }, "ActionList": { "items": { "$ref": "#/$defs/Action" }, "title": "ActionList", "type": "array" }, "ChannelEnum": { "enum": [ "TEXT", "EMAIL" ], "title": "ChannelEnum", "type": "string" }, "Condition": { "properties": { "verb": { "$ref": "#/$defs/VerbEnum" }, "parameters": { "title": "Parameters", "type": "object" } }, "required": [ "verb", "parameters" ], "title": "Condition", "type": "object" }, "SnoozeTimeUnitEnum": { "description": "* `DAYS` - DAYS\n* `HOURS` - HOURS\n* `MINUTES` - MINUTES\n* `SECONDS` - SECONDS", "enum": [ "DAYS", "HOURS", "MINUTES", "SECONDS" ], "title": "SnoozeTimeUnitEnum", "type": "string" }, "VerbEnum": { "description": "* `ANSWERED_CONSECUTIVELY` - ANSWERED_CONSECUTIVELY\n* `ANSWERED_WITHIN_TIME` - ANSWERED_WITHIN_TIME\n* `CHANGED_TO` - CHANGED_TO\n* `NO_CHANGE` 
- NO_CHANGE\n* `NO_QUERIES` - NO_QUERIES", "enum": [ "ANSWERED_CONSECUTIVELY", "ANSWERED_WITHIN_TIME", "CHANGED_TO", "NO_CHANGE", "NO_QUERIES" ], "title": "VerbEnum", "type": "string" } }, "required": [ "id", "detector_id", "detector_name", "name", "condition", "action" ] }
- Fields:
- field action: Action | ActionList [Required]
- field condition: Condition [Required]
- field detector_id: str [Required]
- field detector_name: str [Required]
- field enabled: bool = True
- field human_review_required: bool = False
- field id: int [Required]
- field name: constr(max_length=44) [Required]
- Constraints:
max_length = 44
- field snooze_time_enabled: bool = False
- field snooze_time_unit: SnoozeTimeUnitEnum = 'DAYS'
- field snooze_time_value: conint(ge=0) = 0
- Constraints:
ge = 0
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.
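The constraints above (name at most 44 characters, `snooze_time_value` >= 0, the snooze-unit and channel enums, and `action` being either a single Action or an ActionList) can be checked client-side before submitting a rule. A minimal sketch under those documented constraints; the validator and sample rule are illustrative, not SDK API:

```python
SNOOZE_UNITS = {"DAYS", "HOURS", "MINUTES", "SECONDS"}
CHANNELS = {"TEXT", "EMAIL"}

def validate_rule(rule: dict) -> None:
    """Check the Rule constraints documented above; raise ValueError on violation."""
    if len(rule["name"]) > 44:
        raise ValueError("name must be at most 44 characters")
    if rule.get("snooze_time_value", 0) < 0:
        raise ValueError("snooze_time_value must be >= 0")
    if rule.get("snooze_time_unit", "DAYS") not in SNOOZE_UNITS:
        raise ValueError("invalid snooze_time_unit")
    actions = rule["action"]
    if isinstance(actions, dict):  # a single Action rather than an ActionList
        actions = [actions]
    for action in actions:
        if action["channel"] not in CHANNELS:
            raise ValueError("channel must be TEXT or EMAIL")

rule = {
    "id": 1, "detector_id": "det_1", "detector_name": "door", "name": "door-alert",
    "condition": {"verb": "CHANGED_TO", "parameters": {"label": "NO"}},
    "action": {"channel": "EMAIL", "recipient": "ops@example.com", "include_image": True},
}
validate_rule(rule)  # passes silently
```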
- pydantic model model.PaginatedRuleList
Show JSON schema
{ "title": "PaginatedRuleList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/Rule" }, "title": "Results", "type": "array" } }, "$defs": { "Action": { "properties": { "channel": { "$ref": "#/$defs/ChannelEnum" }, "recipient": { "title": "Recipient", "type": "string" }, "include_image": { "title": "Include Image", "type": "boolean" } }, "required": [ "channel", "recipient", "include_image" ], "title": "Action", "type": "object" }, "ActionList": { "items": { "$ref": "#/$defs/Action" }, "title": "ActionList", "type": "array" }, "ChannelEnum": { "enum": [ "TEXT", "EMAIL" ], "title": "ChannelEnum", "type": "string" }, "Condition": { "properties": { "verb": { "$ref": "#/$defs/VerbEnum" }, "parameters": { "title": "Parameters", "type": "object" } }, "required": [ "verb", "parameters" ], "title": "Condition", "type": "object" }, "Rule": { "properties": { "id": { "title": "Id", "type": "integer" }, "detector_id": { "title": "Detector Id", "type": "string" }, "detector_name": { "title": "Detector Name", "type": "string" }, "name": { "maxLength": 44, "title": "Name", "type": "string" }, "enabled": { "default": true, "title": "Enabled", "type": "boolean" }, "snooze_time_enabled": { "default": false, "title": "Snooze Time Enabled", "type": "boolean" }, "snooze_time_value": { "default": 0, "minimum": 0, "title": "Snooze Time Value", "type": "integer" }, "snooze_time_unit": { "$ref": "#/$defs/SnoozeTimeUnitEnum", "default": "DAYS" }, "human_review_required": { "default": false, "title": "Human Review 
Required", "type": "boolean" }, "condition": { "$ref": "#/$defs/Condition" }, "action": { "anyOf": [ { "$ref": "#/$defs/Action" }, { "$ref": "#/$defs/ActionList" } ], "title": "Action" } }, "required": [ "id", "detector_id", "detector_name", "name", "condition", "action" ], "title": "Rule", "type": "object" }, "SnoozeTimeUnitEnum": { "description": "* `DAYS` - DAYS\n* `HOURS` - HOURS\n* `MINUTES` - MINUTES\n* `SECONDS` - SECONDS", "enum": [ "DAYS", "HOURS", "MINUTES", "SECONDS" ], "title": "SnoozeTimeUnitEnum", "type": "string" }, "VerbEnum": { "description": "* `ANSWERED_CONSECUTIVELY` - ANSWERED_CONSECUTIVELY\n* `ANSWERED_WITHIN_TIME` - ANSWERED_WITHIN_TIME\n* `CHANGED_TO` - CHANGED_TO\n* `NO_CHANGE` - NO_CHANGE\n* `NO_QUERIES` - NO_QUERIES", "enum": [ "ANSWERED_CONSECUTIVELY", "ANSWERED_WITHIN_TIME", "CHANGED_TO", "NO_CHANGE", "NO_QUERIES" ], "title": "VerbEnum", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[Rule] [Required]
- model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}
A dictionary of computed field names and their corresponding ComputedFieldInfo objects.