# sonar_reasoning_pro

## What it is

A module-level `ChatModel` instance configured for the Perplexity provider using LangChain's `ChatPerplexity`, pre-set to the `sonar-reasoning-pro` model.
## Public API

- Constants:
  - `MODEL_ID: str` - set to `"sonar-reasoning-pro"`.
  - `PROVIDER: str` - set to `"perplexity"`.
- Objects:
  - `model: ChatModel` - a ready-to-use chat model wrapper.
    - Underlying model: `langchain_perplexity.ChatPerplexity`
    - Configuration: `model="sonar-reasoning-pro"`, `temperature=0`, `timeout=120`, `api_key` loaded from `ABIModule.get_instance().configuration.perplexity_api_key` (wrapped in `pydantic.SecretStr`)
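To make the configuration above concrete, here is a hedged sketch of how such a module typically assembles `model`. The real module imports `langchain_perplexity.ChatPerplexity`, `pydantic.SecretStr`, and `ABIModule`; those are replaced by self-contained stand-in classes here, and the API key value is a placeholder, so this illustrates the shape of the wiring rather than the actual source.

```python
from dataclasses import dataclass

MODEL_ID = "sonar-reasoning-pro"
PROVIDER = "perplexity"


class SecretStr:
    """Stand-in for pydantic.SecretStr: masks the key in repr/str."""

    def __init__(self, value: str):
        self._value = value

    def __repr__(self) -> str:
        return "SecretStr('**********')"

    def get_secret_value(self) -> str:
        return self._value


@dataclass
class ChatPerplexity:
    """Stand-in for langchain_perplexity.ChatPerplexity."""

    model: str
    temperature: float
    timeout: int
    api_key: SecretStr


# Mirrors the documented configuration; in the real module the key comes
# from ABIModule.get_instance().configuration.perplexity_api_key.
model = ChatPerplexity(
    model=MODEL_ID,
    temperature=0,
    timeout=120,
    api_key=SecretStr("pplx-example-key"),  # placeholder, not a real key
)
```

Wrapping the key in `SecretStr` keeps it out of logs and tracebacks: the repr shows asterisks, and the raw value is only reachable via an explicit `get_secret_value()` call.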
## Configuration/Dependencies

- Dependencies:
  - `langchain_perplexity.ChatPerplexity`
  - `naas_abi_marketplace.ai.perplexity.ABIModule`
  - `naas_abi_core.models.Model.ChatModel`
  - `pydantic.SecretStr`
- Required configuration:
  - `ABIModule.get_instance().configuration.perplexity_api_key` must be set (the Perplexity API key).
## Usage

```python
from naas_abi_marketplace.ai.perplexity.models.sonar_reasoning_pro import model

# Use the configured ChatModel in your application.
# (Exact invocation depends on ChatModel's interface in naas_abi_core.)
print(model.model_id, model.provider)
```
## Caveats

- Importing this module initializes the underlying `ChatPerplexity` client immediately and reads the API key from `ABIModule` configuration.
- Temperature is fixed at `0` and timeout is fixed at `120` seconds in this definition.
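If the import-time initialization is undesirable (e.g. the API key is not configured yet when the module is first imported), a common workaround is to defer construction until first use. This pattern is not part of the module; `get_model` below is hypothetical, and a plain dict stands in for the `ChatPerplexity` client so the sketch stays self-contained.

```python
from functools import lru_cache


@lru_cache(maxsize=1)
def get_model() -> dict:
    # The real code would build ChatPerplexity here, reading the API key
    # from ABIModule configuration at call time rather than import time.
    return {"model": "sonar-reasoning-pro", "temperature": 0, "timeout": 120}


first = get_model()   # client is constructed on the first call
second = get_model()  # cached: the same object is returned, no rebuild
```

With `lru_cache(maxsize=1)` the client is built exactly once, on first access, so importing the wrapping module stays cheap and side-effect free.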