# OpenRouterAPIIntegration
## What it is

- A small integration client for the OpenRouter API.
- Wraps several OpenRouter endpoints (models, providers, credits, analytics, beta responses).
- Includes:
  - Request handling with bearer auth
  - A 1-day filesystem cache for API requests
  - Optional JSON persistence of model lists to object storage (via `StorageUtils`)
## Public API

### OpenRouterAPIIntegrationConfiguration

Dataclass configuration for the integration.

- Fields:
  - `api_key: str` — OpenRouter API key (Bearer token)
  - `object_storage: ObjectStorageService` — backing storage used by `StorageUtils` for JSON saving
  - `base_url: str = "https://openrouter.ai/api/v1"` — API base URL
  - `datastore_path: str = "openrouter"` — base path used when saving JSON (e.g., models)
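The configuration shape described above can be sketched as a plain dataclass. This is an illustrative restatement of the documented fields, not the library's source; `ObjectStorageService` is stubbed out since its real interface lives in the host framework:

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical stand-in for the real ObjectStorageService interface.
ObjectStorageService = Any


@dataclass
class OpenRouterAPIIntegrationConfiguration:
    api_key: str                                    # OpenRouter API key (Bearer token)
    object_storage: ObjectStorageService            # backing storage for JSON saving
    base_url: str = "https://openrouter.ai/api/v1"  # API base URL
    datastore_path: str = "openrouter"              # base path for saved JSON
```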
### OpenRouterAPIIntegration

Integration client.

- `create_response(input_prompt: str, tools: Optional[list[Dict]] = None, model: str = "openai/gpt-4.1-mini", temperature: float = 0.7, top_p: float = 0.9) -> Dict`
  - POST `/responses`
  - Sends a beta “responses” payload with a single user message.
  - Accepts an optional `tools` list.
- `get_user_activity(date: Optional[str] = None) -> Dict`
  - GET `/activity`
  - Query param: `date` (YYYY-MM-DD, last 30 days)
- `get_remaining_credits() -> Dict`
  - GET `/credits`
- `get_total_models_count() -> Dict`
  - GET `/models/count`
- `list_models(params: Optional[Dict] = None, save_json: bool = True) -> List`
  - GET `/models`
  - Returns the unwrapped list from `response["data"]` (or `[]`).
  - If `save_json=True`, saves:
    - All models to: `{datastore_path}/models/_all/models.json`
    - Models split by owner prefix (the part before `/` in the model id) to: `{datastore_path}/models/{owner}/models.json`
- `get_model_parameters(author: str, slug: str) -> Dict`
  - GET `/parameters` with `author` and `slug` query params
- `list_providers() -> Dict`
  - GET `/providers`
- `list_api_keys() -> Dict`
  - GET `/keys`
- `get_current_api_key() -> Dict`
  - GET `/key`
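The owner-prefix split performed by `list_models(save_json=True)` can be sketched as follows. This is an illustrative reconstruction of the documented path layout, not the library's actual implementation; the helper name `split_models_by_owner` is hypothetical:

```python
from collections import defaultdict


def split_models_by_owner(models, datastore_path="openrouter"):
    """Group models by the owner prefix of their id (the part before '/').

    Returns a mapping of storage path -> list of models, mirroring the
    documented layout:
      {datastore_path}/models/_all/models.json      (all models)
      {datastore_path}/models/{owner}/models.json   (per-owner split)
    """
    by_path = defaultdict(list)
    for model in models:
        owner = model["id"].split("/", 1)[0]
        by_path[f"{datastore_path}/models/{owner}/models.json"].append(model)
    # All models also go to a single "_all" file.
    by_path[f"{datastore_path}/models/_all/models.json"] = list(models)
    return dict(by_path)
```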
### as_tools(configuration: OpenRouterAPIIntegrationConfiguration)

Builds LangChain tools from this integration.

- Returns a list of `langchain_core.tools.StructuredTool`:
  - `openrouter_list_models` → calls `integration.list_models()`
  - `openrouter_list_providers` → calls `integration.list_providers()`
## Configuration/Dependencies

- HTTP: `requests`
- Core integration types: `naas_abi_core.integration.integration`
  - Raises `IntegrationConnectionError` on request failures.
- Caching:
  - Uses `CacheFactory.CacheFS_find_storage(subpath="openrouter")`
  - `_make_request` is cached for 1 day, with a cache key based on method/endpoint/params.
- Object storage:
  - `object_storage: ObjectStorageService` is required by the configuration.
  - Used by `StorageUtils.save_json(...)` when `list_models(save_json=True)` is called.
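A minimal sketch of how a method/endpoint/params cache key might be derived. This is illustrative only (the key format actually used by `CacheFactory` is not documented here); note that the request body is not part of the key, which explains the POST collision caveat below:

```python
import hashlib
import json
from typing import Optional


def cache_key(method: str, endpoint: str, params: Optional[dict] = None) -> str:
    """Derive a deterministic cache key from method, endpoint, and params only.

    Because the request body is excluded, two POSTs to the same endpoint with
    identical params but different payloads would map to the same key.
    """
    payload = json.dumps(
        {"method": method, "endpoint": endpoint, "params": params or {}},
        sort_keys=True,  # stable ordering so equal params hash identically
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()
```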
## Usage

```python
from naas_abi_marketplace.applications.openrouter.integrations.OpenRouterAPIIntegration import (
    OpenRouterAPIIntegration,
    OpenRouterAPIIntegrationConfiguration,
)

# Provide a concrete ObjectStorageService implementation from your environment
object_storage = ...  # ObjectStorageService

config = OpenRouterAPIIntegrationConfiguration(
    api_key="YOUR_OPENROUTER_API_KEY",
    object_storage=object_storage,
)
client = OpenRouterAPIIntegration(config)

# Models
models = client.list_models(save_json=False)
print(len(models))

# Providers
providers = client.list_providers()
print(providers)

# Credits
credits = client.get_remaining_credits()
print(credits)

# Beta responses
resp = client.create_response("Say hello in one sentence.")
print(resp)
```
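Based on `create_response`'s signature and the documented single-user-message shape, the POST `/responses` body it sends might look like the sketch below. The exact field names are assumptions (OpenRouter's responses API is beta); only the signature and the single-user-message behavior come from this document:

```python
def build_responses_payload(input_prompt, tools=None,
                            model="openai/gpt-4.1-mini",
                            temperature=0.7, top_p=0.9):
    """Hypothetical sketch of the POST /responses body.

    Mirrors create_response's parameters; the "input" field name and message
    shape are assumptions, not confirmed by this document.
    """
    payload = {
        "model": model,
        "input": [{"role": "user", "content": input_prompt}],
        "temperature": temperature,
        "top_p": top_p,
    }
    if tools:  # tools list is optional and omitted when not provided
        payload["tools"] = tools
    return payload
```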
Caveats
_make_requestcaches responses for 1 day; repeated calls with the same method/endpoint/params may return cached data.list_models(save_json=True)writes JSON viaStorageUtilsto paths derived fromdatastore_path; ensureobject_storageis correctly configured.- Cache key uses only
method,endpoint, andparams(not request body); POST calls with different payloads but same endpoint may collide in cache.