2 changes: 1 addition & 1 deletion packages/uipath-openai-agents/pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "uipath-openai-agents"
version = "0.0.3"
version = "0.0.4"
description = "UiPath OpenAI Agents SDK"
readme = "README.md"
requires-python = ">=3.11"
@@ -59,8 +59,10 @@ uv run uipath init --infer-bindings
| `--input-file` | value | `Sentinel.UNSET` | Alias for '-f/--file' arguments |
| `--output-file` | value | `Sentinel.UNSET` | File path where the output will be written |
| `--trace-file` | value | `Sentinel.UNSET` | File path where the trace spans will be written (JSON Lines format) |
| `--state-file` | value | `Sentinel.UNSET` | File path where the state file is stored for persisting execution state. If not provided, a temporary file will be used. |
| `--debug` | flag | false | Enable debugging with debugpy. The process will wait for a debugger to attach. |
| `--debug-port` | value | `5678` | Port for the debug server (default: 5678) |
| `--keep-state-file` | flag | false | Keep the temporary state file even when not resuming and no job id is provided |

**Usage Examples:**

@@ -99,6 +101,10 @@ uv run uipath run --resume
enable_mocker_cache: Enable caching for LLM mocker responses
report_coverage: Report evaluation coverage
model_settings_id: Model settings ID to override agent settings
trace_file: File path where traces will be written in JSONL format
max_llm_concurrency: Maximum concurrent LLM requests
input_overrides: Input field overrides mapping (direct field override with deep merge)
resume: Resume execution from a previous suspended state


**Arguments:**
@@ -120,6 +126,8 @@ uv run uipath run --resume
| `--report-coverage` | flag | false | Report evaluation coverage |
| `--model-settings-id` | value | `"default"` | Model settings ID from evaluation set to override agent settings (default: 'default') |
| `--trace-file` | value | `Sentinel.UNSET` | File path where traces will be written in JSONL format |
| `--max-llm-concurrency` | value | `20` | Maximum concurrent LLM requests (default: 20) |
| `--resume` | flag | false | Resume execution from a previous suspended state |

**Usage Examples:**

@@ -226,6 +234,53 @@ The `uipath.json` file is automatically generated by `uipath init` and defines y

The UiPath CLI provides commands for interacting with UiPath platform services. These commands allow you to manage buckets, assets, jobs, and other resources.

### `uipath assets`

Manage UiPath assets.

Assets are key-value pairs that store configuration data, credentials,
and settings used by automation processes.

Examples:
# List all assets in a folder
uipath assets list --folder-path "Shared"

# List with filter
uipath assets list --filter "ValueType eq 'Text'"

# List with ordering
uipath assets list --orderby "Name asc"


**Subcommands:**

**`uipath assets list`**

List assets in a folder.

Examples:
uipath assets list
uipath assets list --folder-path "Shared"
uipath assets list --filter "ValueType eq 'Text'"
uipath assets list --filter "Name eq 'MyAsset'"
uipath assets list --orderby "Name asc"
uipath assets list --top 50 --skip 100


Options:
- `--filter`: OData $filter expression (default: `Sentinel.UNSET`)
- `--orderby`: OData $orderby expression (default: `Sentinel.UNSET`)
- `--top`: Maximum number of items to return (default: 100, max: 1000) (default: `100`)
- `--skip`: Number of items to skip (default: `0`)
- `--folder-path`: Folder path (e.g., "Shared"). Can also be set via UIPATH_FOLDER_PATH environment variable. (default: `Sentinel.UNSET`)
- `--folder-key`: Folder key (UUID) (default: `Sentinel.UNSET`)
- `--format`: Output format (overrides global) (default: `Sentinel.UNSET`)
- `--output`, `-o`: Output file (overrides global) (default: `Sentinel.UNSET`)

---

### `uipath buckets`

Manage UiPath storage buckets and files.
@@ -16,6 +16,25 @@ sdk = UiPath()
sdk = UiPath(base_url="https://cloud.uipath.com/...", secret="your_token")
```

### Agenthub

Agenthub service

```python
# Fetch available models from LLM Gateway discovery endpoint.
sdk.agenthub.get_available_llm_models(headers: dict[str, Any] | None=None) -> list[uipath.platform.agenthub.agenthub.LlmModel]

# Asynchronously fetch available models from LLM Gateway discovery endpoint.
sdk.agenthub.get_available_llm_models_async(headers: dict[str, Any] | None=None) -> list[uipath.platform.agenthub.agenthub.LlmModel]

# Start a system agent job.
sdk.agenthub.invoke_system_agent(agent_name: str, entrypoint: str, input_arguments: dict[str, Any] | None=None, folder_key: str | None=None, folder_path: str | None=None, headers: dict[str, Any] | None=None) -> str

# Asynchronously start a system agent and return the job.
sdk.agenthub.invoke_system_agent_async(agent_name: str, entrypoint: str, input_arguments: dict[str, Any] | None=None, folder_key: str | None=None, folder_path: str | None=None, headers: dict[str, Any] | None=None) -> str

```
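
A minimal usage sketch, assuming `sdk` is the `UiPath()` client constructed at the top of this page; the agent name, entrypoint, input arguments, and folder path below are hypothetical placeholders:

```python
# Discover which models the LLM Gateway currently exposes.
models = sdk.agenthub.get_available_llm_models()

# Start a system agent job; the call returns the job identifier as a string.
# Agent name, entrypoint, inputs, and folder path are illustrative values only.
job_id = sdk.agenthub.invoke_system_agent(
    agent_name="my-system-agent",
    entrypoint="main",
    input_arguments={"topic": "quarterly report"},
    folder_path="Shared",
)
```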

### Api Client

Api Client service
@@ -31,6 +50,12 @@ service = sdk.api_client
Assets service

```python
# List assets using OData API with offset-based pagination.
sdk.assets.list(folder_path: Optional[str]=None, folder_key: Optional[str]=None, filter: Optional[str]=None, orderby: Optional[str]=None, skip: int=0, top: int=100) -> uipath.platform.common.paging.PagedResult[uipath.platform.orchestrator.assets.Asset]

# Asynchronously list assets using OData API with offset-based pagination.
sdk.assets.list_async(folder_path: Optional[str]=None, folder_key: Optional[str]=None, filter: Optional[str]=None, orderby: Optional[str]=None, skip: int=0, top: int=100) -> uipath.platform.common.paging.PagedResult[uipath.platform.orchestrator.assets.Asset]

# Retrieve an asset by its name.
sdk.assets.retrieve(name: str, folder_key: Optional[str]=None, folder_path: Optional[str]=None) -> uipath.platform.orchestrator.assets.UserAsset | uipath.platform.orchestrator.assets.Asset
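
# Example (a hedged sketch, not part of the generated listing above): list text
# assets in the "Shared" folder; the OData filter syntax mirrors the CLI
# examples in the `uipath assets` section.
text_assets = sdk.assets.list(folder_path="Shared", filter="ValueType eq 'Text'", top=50)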

@@ -340,12 +365,24 @@ sdk.documents.retrieve_ixp_extraction_result(project_id: str, tag: str, operatio
# Asynchronous version of the [`retrieve_ixp_extraction_result`][uipath.platform.documents._documents_service.DocumentsService.retrieve_ixp_extraction_result] method.
sdk.documents.retrieve_ixp_extraction_result_async(project_id: str, tag: str, operation_id: str) -> uipath.platform.documents.documents.ExtractionResponseIXP

# Retrieve the result of an IXP create validate extraction action operation (single-shot, non-blocking).
sdk.documents.retrieve_ixp_extraction_validation_result(project_id: str, tag: str, operation_id: str) -> uipath.platform.documents.documents.ValidateExtractionAction

# Asynchronous version of the [`retrieve_ixp_extraction_validation_result`][uipath.platform.documents._documents_service.DocumentsService.retrieve_ixp_extraction_validation_result] method.
sdk.documents.retrieve_ixp_extraction_validation_result_async(project_id: str, tag: str, operation_id: str) -> uipath.platform.documents.documents.ValidateExtractionAction

# Start an IXP extraction process without waiting for results (non-blocking).
sdk.documents.start_ixp_extraction(project_name: str, tag: str, file: Union[IO[bytes], bytes, str, NoneType]=None, file_path: Optional[str]=None) -> uipath.platform.documents.documents.StartExtractionResponse

# Asynchronous version of the [`start_ixp_extraction`][uipath.platform.documents._documents_service.DocumentsService.start_ixp_extraction] method.
sdk.documents.start_ixp_extraction_async(project_name: str, tag: str, file: Union[IO[bytes], bytes, str, NoneType]=None, file_path: Optional[str]=None) -> uipath.platform.documents.documents.StartExtractionResponse

# Start an IXP extraction validation action without waiting for results (non-blocking).
sdk.documents.start_ixp_extraction_validation(action_title: str, action_priority: ActionPriority, action_catalog: str, action_folder: str, storage_bucket_name: str, storage_bucket_directory_path: str, extraction_response: uipath.platform.documents.documents.ExtractionResponseIXP) -> uipath.platform.documents.documents.StartOperationResponse

# Asynchronous version of the [`start_ixp_extraction_validation`][uipath.platform.documents._documents_service.DocumentsService.start_ixp_extraction_validation] method.
sdk.documents.start_ixp_extraction_validation_async(action_title: str, action_priority: ActionPriority, action_catalog: str, action_folder: str, storage_bucket_name: str, storage_bucket_directory_path: str, extraction_response: uipath.platform.documents.documents.ExtractionResponseIXP) -> uipath.platform.documents.documents.StartOperationResponse

```
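
A hedged sketch of the non-blocking IXP flow suggested by the signatures above: start an extraction, then fetch its result later with the returned operation id. The project name, tag, and file path are illustrative, and the fields read from the start response (`project_id`, `operation_id`) are assumptions about `StartExtractionResponse`:

```python
# Kick off extraction without blocking; project name, tag, and file are illustrative.
start = sdk.documents.start_ixp_extraction(
    project_name="Invoices",
    tag="production",
    file_path="invoice.pdf",
)

# Later, check for the result in a single non-blocking call. The attribute names
# on `start` are assumptions, not confirmed by this page.
result = sdk.documents.retrieve_ixp_extraction_result(
    project_id=start.project_id,
    tag="production",
    operation_id=start.operation_id,
)
```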

### Entities
@@ -505,7 +542,7 @@ Llm service

```python
# Generate chat completions using UiPath's normalized LLM Gateway API.
-sdk.llm.chat_completions(messages: list[dict[str, str]] | list[tuple[str, str]], model: str="gpt-4o-mini-2024-07-18", max_tokens: int=4096, temperature: float=0, n: int=1, frequency_penalty: float=0, presence_penalty: float=0, top_p: float | None=1, top_k: int | None=None, tools: list[uipath.platform.chat.llm_gateway.ToolDefinition] | None=None, tool_choice: Union[uipath.platform.chat.llm_gateway.AutoToolChoice, uipath.platform.chat.llm_gateway.RequiredToolChoice, uipath.platform.chat.llm_gateway.SpecificToolChoice, Literal['auto', 'none'], NoneType]=None, response_format: dict[str, Any] | type[pydantic.main.BaseModel] | None=None, api_version: str="2024-08-01-preview")
+sdk.llm.chat_completions(messages: list[dict[str, str]] | list[tuple[str, str]], model: str="gpt-4.1-mini-2025-04-14", max_tokens: int=4096, temperature: float=0, n: int=1, frequency_penalty: float=0, presence_penalty: float=0, top_p: float | None=1, top_k: int | None=None, tools: list[uipath.platform.chat.llm_gateway.ToolDefinition] | None=None, tool_choice: Union[uipath.platform.chat.llm_gateway.AutoToolChoice, uipath.platform.chat.llm_gateway.RequiredToolChoice, uipath.platform.chat.llm_gateway.SpecificToolChoice, Literal['auto', 'none'], NoneType]=None, response_format: dict[str, Any] | type[pydantic.main.BaseModel] | None=None, api_version: str="2024-08-01-preview")

```
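
A minimal call sketch for the normalized gateway API, assuming `sdk` is the `UiPath()` client shown earlier; the prompt content is illustrative and the model value is the new default from the signature above:

```python
response = sdk.llm.chat_completions(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize UiPath assets in one sentence."},
    ],
    model="gpt-4.1-mini-2025-04-14",  # new default shown above
    max_tokens=256,
    temperature=0,
)
```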

@@ -515,7 +552,7 @@ Llm Openai service

```python
# Generate chat completions using UiPath's LLM Gateway service.
-sdk.llm_openai.chat_completions(messages: list[dict[str, str]], model: str="gpt-4o-mini-2024-07-18", max_tokens: int=4096, temperature: float=0, response_format: dict[str, Any] | type[pydantic.main.BaseModel] | None=None, api_version: str="2024-10-21")
+sdk.llm_openai.chat_completions(messages: list[dict[str, str]], model: str="gpt-4.1-mini-2025-04-14", max_tokens: int=4096, temperature: float=0, response_format: dict[str, Any] | type[pydantic.main.BaseModel] | None=None, api_version: str="2024-10-21")

# Generate text embeddings using UiPath's LLM Gateway service.
sdk.llm_openai.embeddings(input: str, embedding_model: str="text-embedding-ada-002", openai_api_version: str="2024-10-21")
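
# Example (a hedged sketch, not part of the generated listing above): embed a
# short string with the default embedding model and API version shown above.
vector = sdk.llm_openai.embeddings(input="UiPath Python SDK")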