SDK Client
- class groundlight.Groundlight(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None)
Client for accessing the Groundlight cloud service. Provides methods to create visual detectors, submit images for analysis, and retrieve predictions.
The API token (auth) is specified through the GROUNDLIGHT_API_TOKEN environment variable by default.
Example usage:
gl = Groundlight()
detector = gl.get_or_create_detector(
    name="door_detector",
    query="Is the door open?",
    confidence_threshold=0.9,
)
# Submit image and get prediction
image_query = gl.submit_image_query(
    detector=detector,
    image="path/to/image.jpg",
    wait=30.0,
)
print(f"Answer: {image_query.result.label}")
# Async submission with human review
image_query = gl.ask_async(
    detector=detector,
    image="path/to/image.jpg",
    human_review="ALWAYS",
)
# Later, get the result
image_query = gl.wait_for_confident_result(
    image_query=image_query,
    confidence_threshold=0.95,
    timeout_sec=60.0,
)
- Parameters:
endpoint (str | None) – Optional custom API endpoint URL. If not specified, uses the default Groundlight endpoint.
api_token (str | None) – Authentication token for API access. If not provided, will attempt to read from the “GROUNDLIGHT_API_TOKEN” environment variable.
disable_tls_verification (bool | None) –
If True, disables SSL/TLS certificate verification for API calls. When not specified, checks the “DISABLE_TLS_VERIFY” environment variable (1=disable, 0=enable). Certificate verification is enabled by default.
Warning: Only disable verification when connecting to a Groundlight Edge Endpoint using self-signed certificates. For security, always keep verification enabled when using the Groundlight cloud service.
- Returns:
Groundlight client instance
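A minimal connection sketch for the edge-endpoint case described under disable_tls_verification (the endpoint URL is hypothetical; substitute your Edge Endpoint's address):
from groundlight import Groundlight

gl = Groundlight(
    endpoint="https://192.168.1.100:8717",  # hypothetical Edge Endpoint address
    disable_tls_verification=True,  # only for self-signed certificates on an Edge Endpoint
)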
- __init__(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None)
Initialize a new Groundlight client instance.
- Parameters:
endpoint (str | None) – Optional custom API endpoint URL. If not specified, uses the default Groundlight endpoint.
api_token (str | None) – Authentication token for API access. If not provided, will attempt to read from the “GROUNDLIGHT_API_TOKEN” environment variable.
disable_tls_verification (bool | None) –
If True, disables SSL/TLS certificate verification for API calls. When not specified, checks the “DISABLE_TLS_VERIFY” environment variable (1=disable, 0=enable). Certificate verification is enabled by default.
Warning: Only disable verification when connecting to a Groundlight Edge Endpoint using self-signed certificates. For security, always keep verification enabled when using the Groundlight cloud service.
- Returns:
Groundlight client
- add_label(image_query: ImageQuery | str, label: Label | int | str, rois: List[ROI] | str | None = None)
Provide a new label (annotation) for an image query. This is used to provide ground-truth labels for training detectors, or to correct the results of detectors.
Example usage:
gl = Groundlight()
# Using an ImageQuery object
image_query = gl.ask_ml(detector_id, image_data)
gl.add_label(image_query, "YES")
# Using an image query ID string directly
gl.add_label("iq_abc123", "NO")
# With regions of interest (ROIs)
rois = [ROI(x=100, y=100, width=50, height=50)]
gl.add_label(image_query, "YES", rois=rois)
- Parameters:
image_query (ImageQuery | str) – Either an ImageQuery object (returned from methods like ask_ml) or an image query ID string starting with “iq_”.
label (Label | int | str) – The label value to assign, typically “YES” or “NO” for binary classification detectors. For multi-class detectors, use one of the defined class names.
rois (List[ROI] | str | None) – Optional list of ROI objects defining regions of interest in the image. Each ROI specifies a bounding box with x, y coordinates and width, height.
- Returns:
None
- ask_async(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, patience_time: float | None = None, confidence_threshold: float | None = None, human_review: str | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Submit an image query asynchronously. This is equivalent to calling submit_image_query with want_async=True and wait=0.
Note
The returned ImageQuery will have result=None since the prediction is computed asynchronously. Use wait_for_confident_result(), wait_for_ml_result(), or get_image_query() to retrieve the result later.
Example usage:
gl = Groundlight()
# Submit query asynchronously
image_query = gl.ask_async(
    detector="det_12345",
    image="path/to/image.jpg",
    confidence_threshold=0.9,
    human_review="ALWAYS",
)
print(f"Query submitted with ID: {image_query.id}")
# Later, retrieve the result
image_query = gl.wait_for_confident_result(
    image_query=image_query,
    confidence_threshold=0.9,
    timeout_sec=60.0,
)
print(f"Answer: {image_query.result.label}")
# Alternatively, check if result is ready without blocking
image_query = gl.get_image_query(image_query.id)
if image_query.result:
    print(f"Result ready: {image_query.result.label}")
else:
    print("Still processing...")
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
- filename (string) of a jpeg file
- byte array or BytesIO or BufferedReader with jpeg bytes
- numpy array with values 0-255 and dimensions (H,W,3) in BGR order (note OpenCV uses BGR, not RGB; img[:, :, ::-1] will reverse the channels)
- PIL Image
Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to the service.
patience_time (float | None) – How long to wait (in seconds) for a confident answer for this image query. The longer the patience_time, the more likely Groundlight will arrive at a confident answer. This is a soft server-side timeout. If not set, use the detector’s patience_time.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
human_review (str | None) – If None or DEFAULT, send the image query for human review only if the ML prediction is not confident. If set to ALWAYS, always send the image query for human review. If set to NEVER, never send the image query for human review.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
- Returns:
ImageQuery with result set to None (result will be computed asynchronously)
- Raises:
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
See also
wait_for_confident_result() for waiting until a confident result is available
wait_for_ml_result() for waiting until the first ML result is available
get_image_query() for checking if a result is ready without blocking
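The BGR channel-order note above is a frequent stumbling block. A minimal sketch, assuming a numpy RGB frame and a placeholder detector ID:
import numpy as np
from groundlight import Groundlight

gl = Groundlight()
rgb_frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for an RGB frame from your camera
bgr_frame = rgb_frame[:, :, ::-1]  # reverse the channel order to BGR, which the SDK expects
image_query = gl.ask_async(detector="det_12345", image=bgr_frame)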
- ask_confident(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, confidence_threshold: float | None = None, wait: float | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Evaluates an image with Groundlight, waiting until an answer above the confidence threshold is reached or the wait period has passed.
Example usage:
gl = Groundlight()
image_query = gl.ask_confident(
    detector="det_12345",
    image="path/to/image.jpg",
    confidence_threshold=0.9,
    wait=30.0,
)
if image_query.result.confidence >= 0.9:
    print(f"Confident answer: {image_query.result.label}")
else:
    print("Could not get confident answer within timeout")
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
- filename (string) of a jpeg file
- byte array or BytesIO or BufferedReader with jpeg bytes
- numpy array with values 0-255 and dimensions (H,W,3) in BGR order (note OpenCV uses BGR, not RGB; img[:, :, ::-1] will reverse the channels)
- PIL Image
Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to the service.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
wait (float | None) – How long to wait (in seconds) for a confident answer.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
- Returns:
ImageQuery containing the prediction result
- Raises:
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
See also
ask_ml() for getting the first ML prediction without waiting for an answer above the confidence threshold
ask_async() for submitting queries asynchronously
submit_image_query() for submitting queries with more control over the process
- ask_ml(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, wait: float | None = None, metadata: dict | str | None = None, inspection_id: str | None = None) ImageQuery
Evaluates an image with Groundlight, getting the first ML prediction without waiting for high confidence or human review.
Example usage:
gl = Groundlight()
detector = gl.get_detector("det_12345")  # or create one with create_detector()
# Get quick ML prediction for an image
image_query = gl.ask_ml(detector, "path/to/image.jpg")
# The image_query may have low confidence since we're not waiting for human review
print(f"Quick ML prediction: {image_query.result.label}")
print(f"Confidence: {image_query.result.confidence}")
# You can also pass metadata to track additional information
image_query = gl.ask_ml(
    detector=detector,
    image="path/to/image.jpg",
    metadata={"camera_id": "front_door", "timestamp": "2023-01-01T12:00:00Z"},
)
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
- filename (string) of a jpeg file
- byte array or BytesIO or BufferedReader with jpeg bytes
- numpy array with values 0-255 and dimensions (H,W,3) in BGR order (note OpenCV uses BGR, not RGB; img[:, :, ::-1] will reverse the channels)
- PIL Image
Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to the service.
wait (float | None) – How long to wait (in seconds) for any ML prediction. Default is 30.0 seconds.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
- Returns:
ImageQuery containing the ML prediction
- Raises:
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
Note
This method returns the first available ML prediction, which may have low confidence. For answers above a configured confidence_threshold, use ask_confident() instead.
See also
ask_confident() for waiting until a high-confidence prediction is available
ask_async() for submitting queries asynchronously
- create_detector(name: str, query: str, *, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Create a new Detector with a given name and query.
Example usage:
gl = Groundlight()
# Create a basic binary detector
detector = gl.create_detector(
    name="dog-on-couch-detector",
    query="Is there a dog on the couch?",
    confidence_threshold=0.9,
    patience_time=30.0,
)
# Create a detector with metadata
detector = gl.create_detector(
    name="door-monitor",
    query="Is the door open?",
    metadata={"location": "front-entrance", "building": "HQ"},
    confidence_threshold=0.95,
)
# Create a detector in a specific group
detector = gl.create_detector(
    name="vehicle-counter",
    query="How many vehicles are in the parking lot?",
    group_name="parking-monitoring",
    patience_time=60.0,
)
- Parameters:
name (str) – A short, descriptive name for the detector. This name should be unique within your account and help identify the detector’s purpose.
query (str) – The question that the detector will answer about images. For binary classification, this should be a yes/no question (e.g. “Is there a person in the image?”).
group_name (str | None) – Optional name of a group to organize related detectors together. If not specified, the detector will be placed in the default group.
confidence_threshold (float | None) – A value between 0.5 and 1 that sets the minimum confidence level required for the ML model’s predictions. If confidence is below this threshold, the query may be sent for human review.
patience_time (float | None) – The maximum time in seconds that Groundlight will attempt to generate a confident prediction before falling back to human review. Defaults to 30 seconds.
pipeline_config (str | None) – Advanced usage only. Configuration string needed to instantiate a specific prediction pipeline for this detector.
metadata (dict | str | None) – A dictionary or JSON string containing custom key/value pairs to associate with the detector (limited to 1KB). This metadata can be used to store additional information like location, purpose, or related system IDs. You can retrieve this metadata later by calling get_detector().
- Returns:
The created Detector object
- Return type:
Detector
- get_detector(id: str | Detector) Detector
Get a Detector by id.
Example usage:
gl = Groundlight()
detector = gl.get_detector(id="det_12345")
print(detector)
- get_detector_by_name(name: str) Detector
Get a Detector by name.
Example usage:
gl = Groundlight()
detector = gl.get_detector_by_name(name="door_detector")
print(detector)
- Parameters:
name (str) – the detector name
- Returns:
Detector
- Return type:
Detector
- get_image_query(id: str) ImageQuery
Get an ImageQuery by its ID. This is useful for retrieving the status and results of a previously submitted query.
Example usage:
gl = Groundlight()
# Get an existing image query by ID
image_query = gl.get_image_query("iq_abc123")
# Get the result if available
if image_query.result is not None:
    print(f"Answer: {image_query.result.label}")
    print(f"Source: {image_query.result.source}")
    print(f"Confidence: {image_query.result.confidence}")  # e.g. 0.98
- Parameters:
id (str) – The ImageQuery ID to look up. This ID is returned when submitting a new ImageQuery.
- Returns:
ImageQuery object containing the query details and results
- Return type:
ImageQuery
- get_or_create_detector(name: str, query: str, *, group_name: str | None = None, confidence_threshold: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Tries to look up the Detector by name. If a Detector with that name, query, and confidence exists, return it. Otherwise, create a Detector with the specified query and config.
Example usage:
gl = Groundlight()
detector = gl.get_or_create_detector(
    name="service-counter-usage",
    query="Is there a customer at the service counter?",
    group_name="retail-analytics",
    confidence_threshold=0.95,
    metadata={"location": "store-123"},
)
- Parameters:
name (str) – A short, descriptive name for the detector. This name should be unique within your account and help identify the detector’s purpose.
query (str) – The question that the detector will answer about images. For binary classification, this should be a yes/no question (e.g. “Is there a person in the image?”).
group_name (str | None) – Optional name of a group to organize related detectors together. If not specified, the detector will be placed in the default group.
confidence_threshold (float | None) – A value between 0.5 and 1 that sets the minimum confidence level required for the ML model’s predictions. If confidence is below this threshold, the query may be sent for human review.
pipeline_config (str | None) – Advanced usage only. Configuration string needed to instantiate a specific prediction pipeline for this detector.
metadata (dict | str | None) – A dictionary or JSON string containing custom key/value pairs to associate with the detector (limited to 1KB). This metadata can be used to store additional information like location, purpose, or related system IDs. You can retrieve this metadata later by calling get_detector().
- Returns:
Detector with the specified configuration
- Raises:
ValueError – If an existing detector is found but has different configuration
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
Detector
Note
If a detector with the given name exists, this method verifies that its configuration (query, group_name, etc.) matches what was requested. If there are any mismatches, it raises ValueError rather than modifying the existing detector.
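A hedged sketch of handling that mismatch (the detector name and query are illustrative):
gl = Groundlight()
try:
    detector = gl.get_or_create_detector(
        name="door_detector",
        query="Is the door open?",
    )
except ValueError:
    # A detector named "door_detector" exists with a different configuration;
    # fetch it as-is instead (or choose a new name).
    detector = gl.get_detector_by_name("door_detector")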
- list_detectors(page: int = 1, page_size: int = 10) PaginatedDetectorList
Retrieve a paginated list of detectors associated with your account.
Example usage:
gl = Groundlight()
# Get first page of 5 detectors
detectors = gl.list_detectors(page=1, page_size=5)
for detector in detectors.results:
    print(detector)
- Parameters:
page (int) – The page number to retrieve (1-based indexing). Use this parameter to navigate through multiple pages of detectors.
page_size (int) – The number of detectors to return per page.
- Returns:
PaginatedDetectorList containing the requested page of detectors and pagination metadata
- Return type:
PaginatedDetectorList
- list_image_queries(page: int = 1, page_size: int = 10, detector_id: str | None = None) PaginatedImageQueryList
List all image queries associated with your account, with pagination support.
Example usage:
gl = Groundlight()
# Get first page of 10 image queries
queries = gl.list_image_queries(page=1, page_size=10)
# Access results
for query in queries.results:
    print(f"Query ID: {query.id}")
    print(f"Result: {query.result.label if query.result else 'No result yet'}")
- Parameters:
page (int) – The page number to retrieve (1-based indexing). Use this parameter to navigate through multiple pages of image queries.
page_size (int) – Number of image queries to return per page. Default is 10.
detector_id (str | None) – Optional detector ID. If provided, only image queries submitted to that detector are returned.
- Returns:
PaginatedImageQueryList containing the requested page of image queries and pagination metadata like total count and links to next/previous pages.
- Return type:
PaginatedImageQueryList
- start_inspection() str
NOTE: For users with Inspection Reports enabled only. Starts an inspection report and returns the id of the inspection.
- Returns:
The unique identifier of the inspection.
- Return type:
str
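Example sketch (assumes Inspection Reports are enabled on your account):
gl = Groundlight()
inspection_id = gl.start_inspection()
print(f"Started inspection {inspection_id}")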
- stop_inspection(inspection_id: str) str
NOTE: For users with Inspection Reports enabled only. Stops an inspection and raises an exception if the response from the server indicates that the inspection was not successfully stopped.
- Parameters:
inspection_id (str) – The unique identifier of the inspection.
- Returns:
“PASS” or “FAIL” depending on the result of the inspection.
- Return type:
str
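Example sketch, continuing from start_inspection() above (assumes Inspection Reports are enabled):
gl = Groundlight()
inspection_id = gl.start_inspection()
# ... submit image queries associated with inspection_id ...
result = gl.stop_inspection(inspection_id)
print(result)  # "PASS" or "FAIL"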
- submit_image_query(detector: Detector | str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule, wait: float | None = None, patience_time: float | None = None, confidence_threshold: float | None = None, human_review: str | None = None, want_async: bool = False, inspection_id: str | None = None, metadata: dict | str | None = None, image_query_id: str | None = None) ImageQuery
Evaluates an image with Groundlight. This is the core method for getting predictions about images.
Example usage:
from groundlight import Groundlight
from PIL import Image

gl = Groundlight()
det = gl.get_or_create_detector(
    name="parking-space",
    query="Is there a car in the leftmost parking space?",
)
# Basic synchronous usage
image = "path/to/image.jpg"
image_query = gl.submit_image_query(detector=det, image=image)
print(f"The answer is {image_query.result.label}")
# Asynchronous usage with custom confidence
image = Image.open("path/to/image.jpg")
image_query = gl.submit_image_query(
    detector=det,
    image=image,
    wait=0,  # Don't wait for result
    confidence_threshold=0.95,
    want_async=True,
)
print(f"Submitted image_query {image_query.id}")
# With metadata and mandatory human review
image_query = gl.submit_image_query(
    detector=det,
    image=image,
    metadata={"location": "entrance", "camera": "cam1"},
    human_review="ALWAYS",
)
Note
This method supports both synchronous and asynchronous workflows, configurable confidence thresholds, and optional human review.
See also
ask_confident() for a simpler synchronous workflow
ask_async() for a simpler asynchronous workflow
ask_ml() for faster ML predictions without waiting for a confident answer
- Parameters:
detector (Detector | str) – the Detector object, or string id of a detector like det_12345
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule) –
The image, in several possible formats:
- filename (string) of a jpeg file
- byte array or BytesIO or BufferedReader with jpeg bytes
- numpy array with values 0-255 and dimensions (H,W,3) in BGR order (note OpenCV uses BGR, not RGB; img[:, :, ::-1] will reverse the channels)
- PIL Image
Any binary format must be JPEG-encoded already. Any pixel format will get converted to JPEG at high quality before sending to the service.
wait (float | None) – How long to poll (in seconds) for a confident answer. This is a client-side timeout. Default is 30.0. Set to 0 for async operation.
patience_time (float | None) – How long to wait (in seconds) for a confident answer for this image query. The longer the patience_time, the more likely Groundlight will arrive at a confident answer. Within patience_time, Groundlight will update ML predictions based on stronger findings, and, additionally, Groundlight will prioritize human review of the image query if necessary. This is a soft server-side timeout. If not set, use the detector’s patience_time.
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
human_review (str | None) – If None or DEFAULT, send the image query for human review only if the ML prediction is not confident. If set to ALWAYS, always send the image query for human review. If set to NEVER, never send the image query for human review.
want_async (bool) – If True, return immediately without waiting for result. Must set wait=0 when using this option.
inspection_id (str | None) – Most users will omit this. For accounts with Inspection Reports enabled, this is the ID of the inspection to associate with the image query.
metadata (dict | str | None) – A dictionary or JSON string of custom key/value metadata to associate with the image query (limited to 1KB). You can retrieve this metadata later by calling get_image_query().
image_query_id (str | None) – The ID for the image query. This is to enable specific functionality and is not intended for general external use. If not set, a random ID will be generated.
- Returns:
ImageQuery with query details and result (if wait > 0)
- Raises:
ValueError – If wait > 0 when want_async=True
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
- update_detector_confidence_threshold(detector: str | Detector, confidence_threshold: float) None
Updates the confidence threshold for the given detector.
- Parameters:
detector (str | Detector) – the detector to update
confidence_threshold (float) – the new confidence threshold
- Returns:
None
- Return type:
None
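Example sketch (the detector ID is a placeholder):
gl = Groundlight()
gl.update_detector_confidence_threshold("det_abc123", confidence_threshold=0.95)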
- update_inspection_metadata(inspection_id: str, user_provided_key: str, user_provided_value: str) None
NOTE: For users with Inspection Reports enabled only. Add/update inspection metadata with the user_provided_key and user_provided_value.
- Parameters:
inspection_id (str) – The unique identifier of the inspection.
user_provided_key (str) – the key in the key/value pair for the inspection metadata.
user_provided_value (str) – the value in the key/value pair for the inspection metadata.
- Returns:
None
- Return type:
None
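Example sketch (the key and value are illustrative; assumes an inspection_id from start_inspection()):
gl = Groundlight()
gl.update_inspection_metadata(
    inspection_id,
    user_provided_key="operator",
    user_provided_value="jane.doe",
)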
- wait_for_confident_result(image_query: ImageQuery | str, confidence_threshold: float | None = None, timeout_sec: float = 30.0) ImageQuery
Waits for an image query result’s confidence level to reach the specified confidence_threshold. Uses polling with exponential back-off to check for results.
Note
This method blocks until either:
1. A result with confidence >= confidence_threshold is available
2. The timeout_sec is reached
3. An error occurs
Example usage:
gl = Groundlight()
query = gl.ask_async(
    detector="det_12345",
    image="path/to/image.jpg",
)
try:
    result = gl.wait_for_confident_result(
        query,
        confidence_threshold=0.9,
        timeout_sec=60.0,
    )
    print(f"Got confident answer: {result.result.label}")
except TimeoutError:
    print("Timed out waiting for confident result")
- Parameters:
image_query (ImageQuery | str) – An ImageQuery object or query ID string to poll
confidence_threshold (float | None) – The confidence threshold to wait for. If not set, use the detector’s confidence threshold.
timeout_sec (float) – The maximum number of seconds to wait. Default is 30.0 seconds.
- Returns:
ImageQuery with confident result
- Raises:
TimeoutError – If no confident result is available within timeout_sec
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
See also
ask_async() for submitting queries asynchronously
get_image_query() for checking result status without blocking
wait_for_ml_result() for waiting until the first ML result is available
- wait_for_ml_result(image_query: ImageQuery | str, timeout_sec: float = 30.0) ImageQuery
Waits for the first ML result to be returned for an image query. Uses polling with exponential back-off to check for results.
Note
This method blocks until either:
1. An ML result is available
2. The timeout_sec is reached
3. An error occurs
Example usage:
gl = Groundlight()
query = gl.ask_async(
    detector="det_12345",
    image="path/to/image.jpg",
)
try:
    result = gl.wait_for_ml_result(query, timeout_sec=3.0)
    print(f"Got ML result: {result.result.label}")
except TimeoutError:
    print("Timed out waiting for ML result")
- Parameters:
image_query (ImageQuery | str) – An ImageQuery object or ImageQuery ID string to poll
timeout_sec (float) – The maximum number of seconds to wait. Default is 30.0 seconds.
- Returns:
ImageQuery with ML result
- Raises:
TimeoutError – If no ML result is available within timeout_sec
ApiTokenError – If API token is invalid
GroundlightClientError – For other API errors
- Return type:
ImageQuery
See also
ask_async() for submitting queries asynchronously
get_image_query() for checking result status without blocking
wait_for_confident_result() for waiting until a confident result is available
- whoami() str
Return the username (email address) associated with the current API token.
This method verifies that the API token is valid and returns the email address of the authenticated user. It can be used to confirm that authentication is working correctly.
Example usage:
gl = Groundlight()
username = gl.whoami()
print(f"Authenticated as {username}")
- Returns:
The email address of the authenticated user
- Raises:
ApiTokenError – If the API token is invalid
GroundlightClientError – If there are connectivity issues with the Groundlight service
- Return type:
str
- class groundlight.ExperimentalApi(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None)
- Parameters:
endpoint (str | None)
api_token (str | None)
disable_tls_verification (bool | None)
- __init__(endpoint: str | None = None, api_token: str | None = None, disable_tls_verification: bool | None = None)
Constructs an experimental Groundlight client.
This client extends the base Groundlight client with additional experimental functionality that is still in development. Note that experimental features may undergo significant changes or be removed in future releases.
Example usage:
from groundlight import ExperimentalApi

# Create an experimental API client
gl = ExperimentalApi()
# Create a notification rule
rule = gl.create_rule(
    detector="door_detector",
    rule_name="Door Open Alert",
    channel="EMAIL",
    recipient="alerts@company.com",
    alert_on="CHANGED_TO",
    include_image=True,
    condition_parameters={"label": "YES"},
)
# Create a detector group for security-related detectors
group = gl.create_detector_group("security-detectors")
- Parameters:
endpoint (str | None) – Optional custom API endpoint URL. If not specified, uses the default Groundlight endpoint.
api_token (str | None) – Authentication token for API access. If not provided, will attempt to read from the “GROUNDLIGHT_API_TOKEN” environment variable.
disable_tls_verification (bool | None) –
If True, disables SSL/TLS certificate verification for API calls. When not specified, checks the “DISABLE_TLS_VERIFY” environment variable (1=disable, 0=enable). Certificate verification is enabled by default.
Warning: Only disable verification when connecting to a Groundlight Edge Endpoint using self-signed certificates. For security, always keep verification enabled when using the Groundlight cloud service.
- create_counting_detector(name: str, query: str, class_name: str, *, max_count: int | None = None, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Creates a counting detector that can count objects in images up to a specified maximum count.
Example usage:
gl = ExperimentalApi()
# Create a detector that counts people up to 5
detector = gl.create_counting_detector(
    name="people_counter",
    query="How many people are in the image?",
    class_name="person",
    max_count=5,
    confidence_threshold=0.9,
    patience_time=30.0,
)
# Use the detector to count people in an image
image_query = gl.ask_ml(detector, "path/to/image.jpg")
print(f"Counted {image_query.result.count} people")
print(f"Confidence: {image_query.result.confidence}")
- Parameters:
name (str) – A short, descriptive name for the detector.
query (str) – A question about the count of an object in the image.
class_name (str) – The class name of the object to count.
max_count (int | None) – Maximum number of objects to count (default: 10)
group_name (str | None) – Optional name of a group to organize related detectors together.
confidence_threshold (float | None) – A value that sets the minimum confidence level required for the ML model’s predictions. If confidence is below this threshold, the query may be sent for human review.
patience_time (float | None) – The maximum time in seconds that Groundlight will attempt to generate a confident prediction before falling back to human review. Defaults to 30 seconds.
pipeline_config (str | None) – Advanced usage only. Configuration string needed to instantiate a specific prediction pipeline for this detector.
metadata (dict | str | None) – A dictionary or JSON string containing custom key/value pairs to associate with the detector (limited to 1KB). This metadata can be used to store additional information like location, purpose, or related system IDs. You can retrieve this metadata later by calling get_detector().
- Returns:
The created Detector object
- Return type:
- create_detector_group(name: str) DetectorGroup
Creates a detector group with the given name. A detector group allows you to organize related detectors together.
Note
You can specify a detector group when creating a detector without the need to create it ahead of time. The group will be created automatically if it doesn’t exist.
Example usage:
gl = ExperimentalApi()
# Create a group for all door-related detectors
door_group = gl.create_detector_group("door-detectors")
# Later, create detectors in this group
door_open_detector = gl.create_detector(
    name="front-door-open",
    query="Is the front door open?",
    group_name=door_group.name,
)
- Parameters:
name (str) – The name of the detector group. This should be descriptive and unique within your organization.
- Returns:
A DetectorGroup object corresponding to the newly created detector group
- Return type:
DetectorGroup
- create_multiclass_detector(name: str, query: str, class_names: List[str], *, group_name: str | None = None, confidence_threshold: float | None = None, patience_time: float | None = None, pipeline_config: str | None = None, metadata: dict | str | None = None) Detector
Creates a multiclass detector with the given name and query.
Example usage:
gl = ExperimentalApi()
detector = gl.create_multiclass_detector(
    name="Traffic Light Detector",
    query="What color is the traffic light?",
    class_names=["Red", "Yellow", "Green"],
)
# Use the detector to classify a traffic light
image_query = gl.ask_ml(detector, "path/to/image.jpg")
print(f"Traffic light is {image_query.result.label}")
print(f"Confidence: {image_query.result.confidence}")
- Parameters:
name (str) – A short, descriptive name for the detector.
query (str) – A question about classifying objects in the image.
class_names (List[str]) – List of possible class labels for classification.
group_name (str | None) – Optional name of a group to organize related detectors together.
confidence_threshold (float | None) – A value between 1/num_classes and 1 that sets the minimum confidence level required for the ML model’s predictions. If confidence is below this threshold, the query may be sent for human review.
patience_time (float | None) – The maximum time in seconds that Groundlight will attempt to generate a confident prediction before falling back to human review. Defaults to 30 seconds.
pipeline_config (str | None) – Advanced usage only. Configuration string needed to instantiate a specific prediction pipeline for this detector.
metadata (dict | str | None) – A dictionary or JSON string containing custom key/value pairs to associate with the detector (limited to 1KB). This metadata can be used to store additional information like location, purpose, or related system IDs. You can retrieve this metadata later by calling get_detector().
- Returns:
The created Detector object
- Return type:
- create_note(detector: str | Detector, note: str, image: str | bytes | Image | BytesIO | BufferedReader | UnavailableModule | None = None) None
Adds a note to a given detector.
Example usage:
gl = ExperimentalApi()
detector = gl.get_detector("det_123")
gl.create_note(detector, "Please label doors that are slightly ajar as 'YES'")
# With an image attachment
gl.create_note(
    detector,
    "Door that is slightly ajar and should be labeled 'YES'",
    image="path/to/image.jpg",
)
- Parameters:
detector (str | Detector) – The detector object or detector ID string to add the note to
note (str) – The text content of the note to add
image (str | bytes | Image | BytesIO | BufferedReader | UnavailableModule | None) – Optional image to attach to the note.
- Return type:
None
- create_roi(label: str, top_left: Tuple[float, float], bottom_right: Tuple[float, float]) ROI
Creates a Region of Interest (ROI) object that can be used to specify areas of interest in images. Certain detectors (such as Count-mode detectors) may emit ROIs as part of their output. Providing an ROI can help improve the accuracy of such detectors.
Note
ROI functionality is only available to Pro tier and higher. If you would like to learn more, reach out to us at https://groundlight.ai
Example usage:
gl = ExperimentalApi()
# Create an ROI for a door in the image
door_roi = gl.create_roi(
    label="door",
    top_left=(0.2, 0.3),      # Coordinates are normalized (0-1)
    bottom_right=(0.4, 0.8),  # Coordinates are normalized (0-1)
)
# Use the ROI when submitting an image query
query = gl.submit_image_query(
    detector="door-detector",
    image=image_bytes,
    rois=[door_roi],
)
- Parameters:
label (str) – A descriptive label for the object or area contained in the ROI
top_left (Tuple[float, float]) – Tuple of (x, y) coordinates for the top-left corner, normalized to [0,1]
bottom_right (Tuple[float, float]) – Tuple of (x, y) coordinates for the bottom-right corner, normalized to [0,1]
- Returns:
An ROI object that can be used in image queries
- Return type:
ROI
- create_rule(detector: str | Detector, rule_name: str, channel: str | ChannelEnum, recipient: str, *, alert_on: str | VerbEnum = 'CHANGED_TO', enabled: bool = True, include_image: bool = False, condition_parameters: str | dict | None = None, snooze_time_enabled: bool = False, snooze_time_value: int = 3600, snooze_time_unit: str = 'SECONDS', human_review_required: bool = False) Rule
Creates a notification rule for a detector that will send alerts based on specified conditions.
A notification rule allows you to configure automated alerts when certain conditions are met, such as when a detector’s prediction changes or maintains a particular state.
Note
Currently, only binary mode detectors (YES/NO answers) are supported for notification rules.
Example usage:
gl = ExperimentalApi()
# Create a rule to send email alerts when door is detected as open
rule = gl.create_rule(
    detector="door_detector",
    rule_name="Door Open Alert",
    channel="EMAIL",
    recipient="alerts@company.com",
    alert_on="CHANGED_TO",
    condition_parameters={"label": "YES"},
    include_image=True,
)
# Create a rule for consecutive motion detections via SMS
rule = gl.create_rule(
    detector="motion_detector",
    rule_name="Repeated Motion Alert",
    channel="TEXT",
    recipient="+1234567890",
    alert_on="ANSWERED_CONSECUTIVELY",
    condition_parameters={
        "num_consecutive_labels": 3,
        "label": "YES",
    },
    snooze_time_enabled=True,
    snooze_time_value=1,
    snooze_time_unit="HOURS",
)
- Parameters:
detector (str | Detector) – The detector ID or Detector object to add the rule to
rule_name (str) – A unique name to identify this rule
channel (str | ChannelEnum) – Notification channel - either “EMAIL” or “TEXT”
recipient (str) – Email address or phone number to receive notifications
alert_on (str | VerbEnum) – what to alert on. One of ANSWERED_CONSECUTIVELY, ANSWERED_WITHIN_TIME, CHANGED_TO, NO_CHANGE, NO_QUERIES
enabled (bool) – Whether the rule should be active when created (default True)
include_image (bool) – Whether to attach the triggering image to notifications (default False)
condition_parameters (str | dict | None) – Additional parameters for the alert condition:
- For ANSWERED_CONSECUTIVELY: {"num_consecutive_labels": N, "label": "YES/NO"}
- For CHANGED_TO: {"label": "YES/NO"}
- For time-based conditions: {"time_value": N, "time_unit": "MINUTES/HOURS/DAYS"}
snooze_time_enabled (bool) – Enable notification snoozing to prevent alert spam (default False)
snooze_time_value (int) – Duration of snooze period (default 3600)
snooze_time_unit (str) – Unit for snooze duration - “SECONDS”, “MINUTES”, “HOURS”, or “DAYS” (default “SECONDS”)
human_review_required (bool) – Require human verification before sending alerts (default False)
- Returns:
The created Rule object
- Return type:
Rule
- delete_all_rules(detector: None | str | Detector = None) int
Deletes all rules associated with the given detector. If no detector is specified, deletes all rules in the account.
WARNING: If no detector is specified, this will delete ALL rules in your account. This action cannot be undone. Use with caution.
Example usage:
gl = ExperimentalApi()
# Delete all rules for a specific detector
detector = gl.get_detector("my_detector")
num_deleted = gl.delete_all_rules(detector)
print(f"Deleted {num_deleted} rules")
# Delete all rules in the account
num_deleted = gl.delete_all_rules()
print(f"Deleted {num_deleted} rules")
- Parameters:
detector (None | str | Detector) – the detector to delete the rules from. If None, deletes all rules.
- Returns:
the number of rules deleted
- Return type:
int
- delete_rule(action_id: int) None
Deletes the rule with the given id.
Example usage:
gl = ExperimentalApi()
# Delete a specific rule
gl.delete_rule(action_id=123)
- Parameters:
action_id (int) – the id of the rule to delete
- Return type:
None
- get_image(iq_id: str) bytes
Get the image associated with the given image query ID.
Example usage:
gl = ExperimentalApi()
# Get image from an image query
iq = gl.get_image_query("iq_123")
image_bytes = gl.get_image(iq.id)
# Open with PIL - returns RGB order
from io import BytesIO
from PIL import Image
image = Image.open(BytesIO(image_bytes))  # RGB image
# Open with numpy via PIL - returns RGB order
import numpy as np
image = np.array(Image.open(BytesIO(image_bytes)))  # RGB array
# Open with OpenCV - returns BGR order
import cv2
nparr = np.frombuffer(image_bytes, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)  # BGR array
# To convert to RGB if needed:
# image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
- Parameters:
iq_id (str) – The ID of the image query to get the image from
- Returns:
The image as a byte array that can be used with PIL or other image libraries
- Return type:
bytes
- get_notes(detector: str | Detector) Dict[str, Any]
Retrieves all notes associated with a detector.
Example usage:
gl = ExperimentalApi()
detector = gl.get_detector("det_123")
notes = gl.get_notes(detector)
# notes = {
#     "CUSTOMER": ["Customer note 1", "Customer note 2"],
#     "GL": ["Groundlight note 1"]
# }
- Parameters:
detector (str | Detector) – The detector object or ID string to retrieve notes for
- Returns:
A dictionary containing notes organized by source (“CUSTOMER” or “GL”), where each source maps to a list of note strings
- Return type:
Dict[str, Any]
- get_rule(action_id: int) Rule
Gets the rule with the given id.
Example usage:
gl = ExperimentalApi()
# Get an existing rule by ID
rule = gl.get_rule(action_id=123)
print(f"Rule name: {rule.name}")
print(f"Rule enabled: {rule.enabled}")
- Parameters:
action_id (int) – the id of the rule to get
- Returns:
the Rule object with the given id
- Return type:
Rule
- list_detector_groups() List[DetectorGroup]
Gets a list of all detector groups in your account.
Example usage:
gl = ExperimentalApi()
# Get all detector groups
groups = gl.list_detector_groups()
# Print information about each group
for group in groups:
    print(f"Group name: {group.name}")
    print(f"Group ID: {group.id}")
- Returns:
A list of DetectorGroup objects representing all detector groups in your account
- Return type:
List[DetectorGroup]
- list_rules(page=1, page_size=10) PaginatedRuleList
Gets a paginated list of all rules.
Example usage:
gl = ExperimentalApi()
# Get first page of rules
rules = gl.list_rules(page=1, page_size=10)
print(f"Total rules: {rules.count}")
# Iterate through rules on current page
for rule in rules.results:
    print(f"Rule {rule.id}: {rule.name}")
# Get next page
next_page = gl.list_rules(page=2, page_size=10)
- Parameters:
page – Page number to retrieve (default: 1)
page_size – Number of rules per page (default: 10)
- Returns:
PaginatedRuleList containing the rules and pagination info
- Return type:
PaginatedRuleList
- reset_detector(detector: str | Detector) None
Removes all image queries and training data for the given detector. This effectively resets the detector to its initial state, allowing you to start fresh with new training data.
Warning
This operation cannot be undone. All image queries and training data will be deleted.
Example usage:
gl = ExperimentalApi()
# Using a detector object
detector = gl.get_detector("det_abc123")
gl.reset_detector(detector)
# Using a detector ID string directly
gl.reset_detector("det_abc123")
- update_detector_escalation_type(detector: str | Detector, escalation_type: str) None
Updates the escalation type of the given detector, controlling whether queries can be sent to human labelers when ML confidence is low.
This is particularly useful for controlling costs. When set to “NO_HUMAN_LABELING”, queries will only receive ML predictions, even if confidence is low. When set to “STANDARD”, low-confidence queries may be sent to human labelers for verification.
Example usage:
gl = ExperimentalApi()
# Using a detector object
detector = gl.get_detector("det_abc123")
# Disable human labeling
gl.update_detector_escalation_type(detector, "NO_HUMAN_LABELING")
# Re-enable standard human labeling
gl.update_detector_escalation_type("det_abc123", "STANDARD")
- Parameters:
detector (str | Detector) – Either a Detector object or a detector ID string starting with “det_”. The detector whose escalation type should be updated.
escalation_type (str) – The new escalation type setting. Must be one of:
- "STANDARD": Allow human labeling for low-confidence queries
- "NO_HUMAN_LABELING": Never send queries to human labelers
- Returns:
None
- Raises:
ValueError – If escalation_type is not one of the allowed values
- Return type:
None
- update_detector_name(detector: str | Detector, name: str) None
Updates the name of the given detector.
Example usage:
gl = ExperimentalApi()
# Using a detector object
detector = gl.get_detector("det_abc123")
gl.update_detector_name(detector, "new_detector_name")
# Using a detector ID string directly
gl.update_detector_name("det_abc123", "new_detector_name")
- update_detector_status(detector: str | Detector, enabled: bool) None
Updates the status of the given detector. When a detector is disabled (enabled=False), it will not accept or process any new image queries. Existing queries will not be affected.
Example usage:
gl = ExperimentalApi()
# Using a detector object
detector = gl.get_detector("det_abc123")
gl.update_detector_status(detector, enabled=False)  # Disable the detector
# Using a detector ID string directly
gl.update_detector_status("det_abc123", enabled=True)  # Enable the detector
- Parameters:
detector (str | Detector) – Either a Detector object or a detector ID string starting with “det_”. The detector whose status should be updated.
enabled (bool) – Boolean indicating whether the detector should be enabled (True) or disabled (False). When disabled, the detector will not process new queries.
- Returns:
None
- Return type:
None
API Response Objects
- pydantic model model.Detector
Spec for serializing a detector object in the public API.
JSON schema:
{ "title": "Detector", "description": "Spec for serializing a detector object in the public API.", "type": "object", "properties": { "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/DetectorTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When this detector was created.", "format": "date-time", "title": "Created At", "type": "string" }, "name": { "description": "A short, descriptive name for the detector.", "maxLength": 200, "title": "Name", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "group_name": { "description": "Which group should this detector be part of?", "title": "Group Name", "type": "string" }, "confidence_threshold": { "default": 0.9, "description": "If the detector's prediction is below this confidence threshold, send the image query for human review.", "maximum": 1.0, "minimum": 0.0, "title": "Confidence Threshold", "type": "number" }, "patience_time": { "default": 30.0, "description": "How long Groundlight will attempt to generate a confident prediction", "maximum": 3600.0, "minimum": 0.0, "title": "Patience Time", "type": "number" }, "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the detector.", "title": "Metadata" }, "mode": { "$ref": "#/$defs/ModeEnum" }, "mode_configuration": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "title": "Mode Configuration" }, "status": { "anyOf": [ { "$ref": "#/$defs/StatusEnum" }, { "$ref": "#/$defs/BlankEnum" }, { "type": "null" } ], "default": null, "title": "Status" }, "escalation_type": { "anyOf": [ { "$ref": "#/$defs/EscalationTypeEnum" }, { "type": "null" } ], "default": null, "description": "Category that define internal proccess for labeling image queries\n\n* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING" } }, "$defs": { "BlankEnum": { "enum": [ "" ], "title": "BlankEnum", "type": "string" }, "DetectorTypeEnum": { "enum": [ "detector" ], "title": "DetectorTypeEnum", "type": "string" }, "EscalationTypeEnum": { "description": "* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING", "enum": [ "STANDARD", "NO_HUMAN_LABELING" ], "title": "EscalationTypeEnum", "type": "string" }, "ModeEnum": { "enum": [ "BINARY", "COUNT", "MULTI_CLASS" ], "title": "ModeEnum", "type": "string" }, "StatusEnum": { "description": "* `ON` - ON\n* `OFF` - OFF", "enum": [ "ON", "OFF" ], "title": "StatusEnum", "type": "string" } }, "required": [ "id", "type", "created_at", "name", "query", "group_name", "metadata", "mode", "mode_configuration" ] }
- Fields:
- field confidence_threshold: confloat(ge=0.0, le=1.0) = 0.9
If the detector’s prediction is below this confidence threshold, send the image query for human review.
- Constraints:
ge = 0.0
le = 1.0
- field created_at: datetime [Required]
When this detector was created.
- field escalation_type: EscalationTypeEnum | None = None
Category that defines the internal process for labeling image queries
STANDARD - STANDARD
NO_HUMAN_LABELING - NO_HUMAN_LABELING
- field group_name: str [Required]
Which group should this detector be part of?
- field id: str [Required]
A unique ID for this object.
- field metadata: Dict[str, Any] | None [Required]
Metadata about the detector.
- field mode: ModeEnum [Required]
- field mode_configuration: Dict[str, Any] | None [Required]
- field name: constr(max_length=200) [Required]
A short, descriptive name for the detector.
- Constraints:
max_length = 200
- field patience_time: confloat(ge=0.0, le=3600.0) = 30.0
How long Groundlight will attempt to generate a confident prediction
- Constraints:
ge = 0.0
le = 3600.0
- field query: str [Required]
A question about the image.
- field status: StatusEnum | BlankEnum | None = None
- field type: DetectorTypeEnum [Required]
The type of this object.
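These fields can be read directly off any Detector the SDK returns. A small sketch (the detector ID is a placeholder):
from groundlight import Groundlight

gl = Groundlight()
det = gl.get_detector("det_abc123")
print(det.name, det.query)         # identifying info
print(det.confidence_threshold)    # defaults to 0.9
print(det.mode)                    # BINARY, COUNT, or MULTI_CLASS
print(det.created_at.isoformat())  # created_at is a datetime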
- pydantic model model.ImageQuery
Spec for serializing an image-query object in the public API.
JSON schema:
{ "title": "ImageQuery", "description": "Spec for serializing a image-query object in the public API.", "type": "object", "properties": { "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the image query.", "title": "Metadata" }, "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/ImageQueryTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When was this detector created?", "format": "date-time", "title": "Created At", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "detector_id": { "description": "Which detector was used on this image query?", "title": "Detector Id", "type": "string" }, "result_type": { "$ref": "#/$defs/ResultTypeEnum", "description": "What type of result are we returning?" }, "result": { "anyOf": [ { "$ref": "#/$defs/BinaryClassificationResult" }, { "$ref": "#/$defs/CountingResult" }, { "$ref": "#/$defs/MultiClassificationResult" }, { "type": "null" } ], "title": "Result" }, "patience_time": { "description": "How long to wait for a confident response.", "title": "Patience Time", "type": "number" }, "confidence_threshold": { "description": "Min confidence needed to accept the response of the image query.", "title": "Confidence Threshold", "type": "number" }, "rois": { "anyOf": [ { "items": { "$ref": "#/$defs/ROI" }, "type": "array" }, { "type": "null" } ], "description": "An array of regions of interest (bounding boxes) collected on image", "title": "Rois" }, "text": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "A text field on image query.", "title": "Text" } }, "$defs": { "BBoxGeometry": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "left": { "title": "Left", "type": "number" }, "top": { "title": "Top", "type": "number" }, "right": { "title": "Right", "type": "number" }, "bottom": { "title": "Bottom", "type": "number" }, "x": { "title": "X", "type": "number" }, "y": { "title": "Y", "type": "number" } }, "required": [ "left", "top", "right", "bottom", "x", "y" ], "title": "BBoxGeometry", "type": "object" }, "BinaryClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "label": { "$ref": "#/$defs/Label" } }, "required": [ "label" ], "title": "BinaryClassificationResult", "type": "object" }, "CountingResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "count": { "minimum": 0, "title": "Count", "type": "integer" }, "greater_than_max": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Greater Than Max" } }, "required": [ "count" ], "title": "CountingResult", "type": "object" }, "ImageQueryTypeEnum": { "enum": [ "image_query" ], "title": "ImageQueryTypeEnum", "type": "string" }, "Label": { "enum": [ "YES", "NO", "UNCLEAR" ], "title": "Label", "type": "string" }, "MultiClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], 
"default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "label": { "title": "Label", "type": "string" } }, "required": [ "label" ], "title": "MultiClassificationResult", "type": "object" }, "ROI": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "label": { "description": "The label of the bounding box.", "title": "Label", "type": "string" }, "score": { "description": "The confidence of the bounding box.", "title": "Score", "type": "number" }, "geometry": { "$ref": "#/$defs/BBoxGeometry" } }, "required": [ "label", "score", "geometry" ], "title": "ROI", "type": "object" }, "ResultTypeEnum": { "enum": [ "binary_classification", "counting", "multi_classification" ], "title": "ResultTypeEnum", "type": "string" }, "Source": { "enum": [ "STILL_PROCESSING", "CLOUD", "USER", "CLOUD_ENSEMBLE", "ALGORITHM" ], "title": "Source", "type": "string" } }, "required": [ "metadata", "id", "type", "created_at", "query", "detector_id", "result_type", "result", "patience_time", "confidence_threshold", "rois", "text" ] }
- Fields:
- field confidence_threshold: float [Required]
Min confidence needed to accept the response of the image query.
- field created_at: datetime [Required]
When was this image query created?
- field detector_id: str [Required]
Which detector was used on this image query?
- field id: str [Required]
A unique ID for this object.
- field metadata: Dict[str, Any] | None [Required]
Metadata about the image query.
- field patience_time: float [Required]
How long to wait for a confident response.
- field query: str [Required]
A question about the image.
- field result: BinaryClassificationResult | CountingResult | MultiClassificationResult | None [Required]
- field result_type: ResultTypeEnum [Required]
What type of result are we returning?
- field rois: List[ROI] | None [Required]
An array of regions of interest (bounding boxes) collected on image
- field text: str | None [Required]
A text field on image query.
- field type: ImageQueryTypeEnum [Required]
The type of this object.
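Because result is a union of result types, code that serves multiple detector modes can dispatch on result_type. A hedged sketch (the query ID is a placeholder; the string comparisons assume ResultTypeEnum is string-valued, as the schema above indicates):
from groundlight import Groundlight

gl = Groundlight()
iq = gl.get_image_query("iq_abc123")
if iq.result is None:
    print("Still processing")  # e.g. an async query with no result yet
elif iq.result_type == "binary_classification":
    print(iq.result.label)  # YES, NO, or UNCLEAR
elif iq.result_type == "counting":
    print(iq.result.count)  # non-negative integer count
elif iq.result_type == "multi_classification":
    print(iq.result.label)  # one of the detector's class names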
- pydantic model model.PaginatedDetectorList
JSON schema:
{ "title": "PaginatedDetectorList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/Detector" }, "title": "Results", "type": "array" } }, "$defs": { "BlankEnum": { "enum": [ "" ], "title": "BlankEnum", "type": "string" }, "Detector": { "description": "Spec for serializing a detector object in the public API.", "properties": { "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/DetectorTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When this detector was created.", "format": "date-time", "title": "Created At", "type": "string" }, "name": { "description": "A short, descriptive name for the detector.", "maxLength": 200, "title": "Name", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "group_name": { "description": "Which group should this detector be part of?", "title": "Group Name", "type": "string" }, "confidence_threshold": { "default": 0.9, "description": "If the detector's prediction is below this confidence threshold, send the image query for human review.", "maximum": 1.0, "minimum": 0.0, "title": "Confidence Threshold", "type": "number" }, "patience_time": { "default": 30.0, "description": "How long Groundlight will attempt to generate a confident prediction", "maximum": 3600.0, "minimum": 0.0, "title": "Patience Time", "type": "number" }, "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the detector.", "title": "Metadata" }, "mode": { "$ref": "#/$defs/ModeEnum" }, "mode_configuration": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "title": "Mode Configuration" }, "status": { "anyOf": [ { "$ref": "#/$defs/StatusEnum" }, { "$ref": "#/$defs/BlankEnum" }, { "type": "null" } ], "default": null, "title": "Status" }, "escalation_type": { "anyOf": [ { "$ref": "#/$defs/EscalationTypeEnum" }, { "type": "null" } ], "default": null, "description": "Category that define internal proccess for labeling image queries\n\n* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING" } }, "required": [ "id", "type", "created_at", "name", "query", "group_name", "metadata", "mode", "mode_configuration" ], "title": "Detector", "type": "object" }, "DetectorTypeEnum": { "enum": [ "detector" ], "title": "DetectorTypeEnum", "type": "string" }, "EscalationTypeEnum": { "description": "* `STANDARD` - STANDARD\n* `NO_HUMAN_LABELING` - NO_HUMAN_LABELING", "enum": [ "STANDARD", "NO_HUMAN_LABELING" ], "title": "EscalationTypeEnum", "type": "string" }, "ModeEnum": { "enum": [ "BINARY", "COUNT", "MULTI_CLASS" ], "title": "ModeEnum", "type": "string" }, "StatusEnum": { "description": "* `ON` - ON\n* `OFF` - OFF", "enum": [ "ON", "OFF" ], "title": "StatusEnum", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[Detector] [Required]
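Example usage (a minimal sketch of walking every page of detectors; assumes the client's list_detectors(page=..., page_size=...) pagination parameters):

from groundlight import Groundlight

gl = Groundlight()
page_num = 1
while True:
    page = gl.list_detectors(page=page_num, page_size=25)
    for detector in page.results:
        print(detector.id, detector.name)
    if page.next is None:  # no further pages
        break
    page_num += 1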
- pydantic model model.PaginatedImageQueryList
JSON schema:
{ "title": "PaginatedImageQueryList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/ImageQuery" }, "title": "Results", "type": "array" } }, "$defs": { "BBoxGeometry": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "left": { "title": "Left", "type": "number" }, "top": { "title": "Top", "type": "number" }, "right": { "title": "Right", "type": "number" }, "bottom": { "title": "Bottom", "type": "number" }, "x": { "title": "X", "type": "number" }, "y": { "title": "Y", "type": "number" } }, "required": [ "left", "top", "right", "bottom", "x", "y" ], "title": "BBoxGeometry", "type": "object" }, "BinaryClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "label": { "$ref": "#/$defs/Label" } }, "required": [ "label" ], "title": "BinaryClassificationResult", "type": "object" }, "CountingResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "count": { "minimum": 0, "title": "Count", "type": "integer" }, "greater_than_max": { "anyOf": [ { "type": "boolean" }, { "type": "null" } ], "default": null, "title": "Greater Than Max" } }, "required": [ "count" ], "title": "CountingResult", "type": "object" }, "ImageQuery": { "description": "Spec for serializing a image-query object in the public API.", "properties": { "metadata": { "anyOf": [ { "type": "object" }, { "type": "null" } ], "description": "Metadata about the image query.", "title": "Metadata" }, "id": { "description": "A unique ID for this object.", "title": "Id", "type": "string" }, "type": { "$ref": "#/$defs/ImageQueryTypeEnum", "description": "The type of this object." }, "created_at": { "description": "When was this detector created?", "format": "date-time", "title": "Created At", "type": "string" }, "query": { "description": "A question about the image.", "title": "Query", "type": "string" }, "detector_id": { "description": "Which detector was used on this image query?", "title": "Detector Id", "type": "string" }, "result_type": { "$ref": "#/$defs/ResultTypeEnum", "description": "What type of result are we returning?" 
}, "result": { "anyOf": [ { "$ref": "#/$defs/BinaryClassificationResult" }, { "$ref": "#/$defs/CountingResult" }, { "$ref": "#/$defs/MultiClassificationResult" }, { "type": "null" } ], "title": "Result" }, "patience_time": { "description": "How long to wait for a confident response.", "title": "Patience Time", "type": "number" }, "confidence_threshold": { "description": "Min confidence needed to accept the response of the image query.", "title": "Confidence Threshold", "type": "number" }, "rois": { "anyOf": [ { "items": { "$ref": "#/$defs/ROI" }, "type": "array" }, { "type": "null" } ], "description": "An array of regions of interest (bounding boxes) collected on image", "title": "Rois" }, "text": { "anyOf": [ { "type": "string" }, { "type": "null" } ], "description": "A text field on image query.", "title": "Text" } }, "required": [ "metadata", "id", "type", "created_at", "query", "detector_id", "result_type", "result", "patience_time", "confidence_threshold", "rois", "text" ], "title": "ImageQuery", "type": "object" }, "ImageQueryTypeEnum": { "enum": [ "image_query" ], "title": "ImageQueryTypeEnum", "type": "string" }, "Label": { "enum": [ "YES", "NO", "UNCLEAR" ], "title": "Label", "type": "string" }, "MultiClassificationResult": { "properties": { "confidence": { "anyOf": [ { "maximum": 1.0, "minimum": 0.0, "type": "number" }, { "type": "null" } ], "default": null, "title": "Confidence" }, "source": { "anyOf": [ { "$ref": "#/$defs/Source" }, { "type": "null" } ], "default": null }, "label": { "title": "Label", "type": "string" } }, "required": [ "label" ], "title": "MultiClassificationResult", "type": "object" }, "ROI": { "description": "Mixin for serializers to handle data in the StrictBaseModel format", "properties": { "label": { "description": "The label of the bounding box.", "title": "Label", "type": "string" }, "score": { "description": "The confidence of the bounding box.", "title": "Score", "type": "number" }, "geometry": { "$ref": "#/$defs/BBoxGeometry" } }, "required": [ "label", "score", "geometry" ], "title": "ROI", "type": "object" }, "ResultTypeEnum": { "enum": [ "binary_classification", "counting", "multi_classification" ], "title": "ResultTypeEnum", "type": "string" }, "Source": { "enum": [ "STILL_PROCESSING", "CLOUD", "USER", "CLOUD_ENSEMBLE", "ALGORITHM" ], "title": "Source", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[ImageQuery] [Required]
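Example usage (same pagination shape as detectors; assumes list_image_queries(page=..., page_size=...) mirrors list_detectors):

from groundlight import Groundlight

gl = Groundlight()
page = gl.list_image_queries(page=1, page_size=10)
print(f"{page.count} image queries total")
for iq in page.results:
    # CountingResult has no label field, so read it defensively
    label = getattr(iq.result, "label", None) if iq.result else None
    print(iq.id, iq.detector_id, label)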
- pydantic model model.Rule
JSON schema:
{ "title": "Rule", "type": "object", "properties": { "id": { "title": "Id", "type": "integer" }, "detector_id": { "title": "Detector Id", "type": "string" }, "detector_name": { "title": "Detector Name", "type": "string" }, "name": { "maxLength": 44, "title": "Name", "type": "string" }, "enabled": { "default": true, "title": "Enabled", "type": "boolean" }, "snooze_time_enabled": { "default": false, "title": "Snooze Time Enabled", "type": "boolean" }, "snooze_time_value": { "default": 0, "minimum": 0, "title": "Snooze Time Value", "type": "integer" }, "snooze_time_unit": { "$ref": "#/$defs/SnoozeTimeUnitEnum", "default": "DAYS" }, "human_review_required": { "default": false, "title": "Human Review Required", "type": "boolean" }, "condition": { "$ref": "#/$defs/Condition" }, "action": { "anyOf": [ { "$ref": "#/$defs/Action" }, { "$ref": "#/$defs/ActionList" } ], "title": "Action" } }, "$defs": { "Action": { "properties": { "channel": { "$ref": "#/$defs/ChannelEnum" }, "recipient": { "title": "Recipient", "type": "string" }, "include_image": { "title": "Include Image", "type": "boolean" } }, "required": [ "channel", "recipient", "include_image" ], "title": "Action", "type": "object" }, "ActionList": { "items": { "$ref": "#/$defs/Action" }, "title": "ActionList", "type": "array" }, "ChannelEnum": { "enum": [ "TEXT", "EMAIL" ], "title": "ChannelEnum", "type": "string" }, "Condition": { "properties": { "verb": { "$ref": "#/$defs/VerbEnum" }, "parameters": { "title": "Parameters", "type": "object" } }, "required": [ "verb", "parameters" ], "title": "Condition", "type": "object" }, "SnoozeTimeUnitEnum": { "description": "* `DAYS` - DAYS\n* `HOURS` - HOURS\n* `MINUTES` - MINUTES\n* `SECONDS` - SECONDS", "enum": [ "DAYS", "HOURS", "MINUTES", "SECONDS" ], "title": "SnoozeTimeUnitEnum", "type": "string" }, "VerbEnum": { "description": "* `ANSWERED_CONSECUTIVELY` - ANSWERED_CONSECUTIVELY\n* `ANSWERED_WITHIN_TIME` - ANSWERED_WITHIN_TIME\n* `CHANGED_TO` - CHANGED_TO\n* `NO_CHANGE` - NO_CHANGE\n* `NO_QUERIES` - NO_QUERIES", "enum": [ "ANSWERED_CONSECUTIVELY", "ANSWERED_WITHIN_TIME", "CHANGED_TO", "NO_CHANGE", "NO_QUERIES" ], "title": "VerbEnum", "type": "string" } }, "required": [ "id", "detector_id", "detector_name", "name", "condition", "action" ] }
- Fields:
- field action: Action | ActionList [Required]
- field condition: Condition [Required]
- field detector_id: str [Required]
- field detector_name: str [Required]
- field enabled: bool = True
- field human_review_required: bool = False
- field id: int [Required]
- field name: constr(max_length=44) [Required]
- Constraints:
max_length = 44
- field snooze_time_enabled: bool = False
- field snooze_time_unit: SnoozeTimeUnitEnum = 'DAYS'
- field snooze_time_value: conint(ge=0) = 0
- Constraints:
ge = 0
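Example usage (a sketch of constructing a Rule directly; the import path follows this document's model.* naming, and the id and condition parameters are hypothetical, since the service normally assigns ids):

from model import Action, Condition, Rule

rule = Rule(
    id=1,  # hypothetical; assigned by the service in practice
    detector_id="det_abc123",
    detector_name="door_detector",
    name="Notify when the door opens",  # max 44 characters
    condition=Condition(verb="CHANGED_TO", parameters={"label": "YES"}),
    action=Action(channel="EMAIL", recipient="ops@example.com", include_image=True),
)
print(rule.enabled, rule.snooze_time_unit)  # defaults: True, 'DAYS'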
- pydantic model model.PaginatedRuleList
JSON schema:
{ "title": "PaginatedRuleList", "type": "object", "properties": { "count": { "example": 123, "title": "Count", "type": "integer" }, "next": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=4", "title": "Next" }, "previous": { "anyOf": [ { "format": "uri", "minLength": 1, "type": "string" }, { "type": "null" } ], "default": null, "example": "http://api.example.org/accounts/?page=2", "title": "Previous" }, "results": { "items": { "$ref": "#/$defs/Rule" }, "title": "Results", "type": "array" } }, "$defs": { "Action": { "properties": { "channel": { "$ref": "#/$defs/ChannelEnum" }, "recipient": { "title": "Recipient", "type": "string" }, "include_image": { "title": "Include Image", "type": "boolean" } }, "required": [ "channel", "recipient", "include_image" ], "title": "Action", "type": "object" }, "ActionList": { "items": { "$ref": "#/$defs/Action" }, "title": "ActionList", "type": "array" }, "ChannelEnum": { "enum": [ "TEXT", "EMAIL" ], "title": "ChannelEnum", "type": "string" }, "Condition": { "properties": { "verb": { "$ref": "#/$defs/VerbEnum" }, "parameters": { "title": "Parameters", "type": "object" } }, "required": [ "verb", "parameters" ], "title": "Condition", "type": "object" }, "Rule": { "properties": { "id": { "title": "Id", "type": "integer" }, "detector_id": { "title": "Detector Id", "type": "string" }, "detector_name": { "title": "Detector Name", "type": "string" }, "name": { "maxLength": 44, "title": "Name", "type": "string" }, "enabled": { "default": true, "title": "Enabled", "type": "boolean" }, "snooze_time_enabled": { "default": false, "title": "Snooze Time Enabled", "type": "boolean" }, "snooze_time_value": { "default": 0, "minimum": 0, "title": "Snooze Time Value", "type": "integer" }, "snooze_time_unit": { "$ref": "#/$defs/SnoozeTimeUnitEnum", "default": "DAYS" }, "human_review_required": { "default": false, "title": "Human Review Required", "type": "boolean" }, "condition": { "$ref": "#/$defs/Condition" }, "action": { "anyOf": [ { "$ref": "#/$defs/Action" }, { "$ref": "#/$defs/ActionList" } ], "title": "Action" } }, "required": [ "id", "detector_id", "detector_name", "name", "condition", "action" ], "title": "Rule", "type": "object" }, "SnoozeTimeUnitEnum": { "description": "* `DAYS` - DAYS\n* `HOURS` - HOURS\n* `MINUTES` - MINUTES\n* `SECONDS` - SECONDS", "enum": [ "DAYS", "HOURS", "MINUTES", "SECONDS" ], "title": "SnoozeTimeUnitEnum", "type": "string" }, "VerbEnum": { "description": "* `ANSWERED_CONSECUTIVELY` - ANSWERED_CONSECUTIVELY\n* `ANSWERED_WITHIN_TIME` - ANSWERED_WITHIN_TIME\n* `CHANGED_TO` - CHANGED_TO\n* `NO_CHANGE` - NO_CHANGE\n* `NO_QUERIES` - NO_QUERIES", "enum": [ "ANSWERED_CONSECUTIVELY", "ANSWERED_WITHIN_TIME", "CHANGED_TO", "NO_CHANGE", "NO_QUERIES" ], "title": "VerbEnum", "type": "string" } }, "required": [ "count", "results" ] }
- Fields:
- field count: int [Required]
- field next: AnyUrl | None = None
- field previous: AnyUrl | None = None
- field results: List[Rule] [Required]
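All three paginated models share the count/next/previous/results shape, so one helper can walk any of them. A sketch that follows next links over raw HTTP (the rules URL and the x-api-token header name are assumptions, not confirmed by this reference):

import os

import requests

def iter_pages(url: str):
    """Yield every result across a paginated listing by following `next` links."""
    headers = {"x-api-token": os.environ["GROUNDLIGHT_API_TOKEN"]}
    while url:
        resp = requests.get(url, headers=headers, timeout=10)
        resp.raise_for_status()
        page = resp.json()
        yield from page["results"]
        url = page.get("next")  # null on the last page

for rule in iter_pages("https://api.groundlight.ai/device-api/v1/rules"):  # hypothetical URL
    print(rule["id"], rule["name"])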