GET /api/news/v1/summarize-article-cache

GetSummarizeArticleCache
curl --request GET \
  --url 'https://api.example.com/api/news/v1/summarize-article-cache?cache_key=<cache_key>'
{
  "summary": "<string>",
  "model": "<string>",
  "provider": "<string>",
  "tokens": 123,
  "fallback": true,
  "error": "<string>",
  "errorType": "<string>",
  "status": "SUMMARIZE_STATUS_UNSPECIFIED",
  "statusDetail": "<string>"
}

Query Parameters

cache_key
string

Deterministic cache key computed by buildSummaryCacheKey().
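The internals of buildSummaryCacheKey() are not shown in this reference. As an illustration only, a deterministic key could be derived by hashing the request inputs; the function name, inputs, and hash below are assumptions, not the actual implementation:

```python
import hashlib

# Hypothetical sketch: the real buildSummaryCacheKey() may use different
# inputs or a different hash. Shown only to illustrate that the same
# inputs always yield the same cache key.
def build_summary_cache_key(article_url: str, model: str) -> str:
    payload = f"{article_url}|{model}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

key = build_summary_cache_key("https://example.com/article", "gpt-4o-mini")
```

Because the key is deterministic, any client that hashes the same inputs can look up a previously computed summary without coordinating with the writer.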

Response

Successful response

SummarizeArticleResponse contains the LLM summarization result.

summary
string

The generated summary text.

model
string

Model identifier used for generation.

provider
string

Provider that produced the result (or "cache").

tokens
integer<int32>

Token count from the LLM response.

fallback
boolean

Whether the client should try the next provider in the fallback chain.

error
string

Error message if the request failed.

errorType
string

Error type/name (e.g. "TypeError").

status
enum<string>

SummarizeStatus indicates the outcome of a summarization request.

Available options:
SUMMARIZE_STATUS_UNSPECIFIED,
SUMMARIZE_STATUS_SUCCESS,
SUMMARIZE_STATUS_CACHED,
SUMMARIZE_STATUS_SKIPPED,
SUMMARIZE_STATUS_ERROR
statusDetail
string

Human-readable detail for non-success statuses (skip reason, etc.).
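Taken together, status and fallback suggest a simple client-side dispatch. A minimal sketch, using only the field names documented above (the handler itself and its return tags are illustrative, not part of the API):

```python
import json

def handle_summary_response(raw: str) -> str:
    """Dispatch on the SummarizeArticleResponse fields documented above.

    Returns a short action tag; a real client would invoke provider logic.
    """
    resp = json.loads(raw)
    status = resp.get("status", "SUMMARIZE_STATUS_UNSPECIFIED")
    if status in ("SUMMARIZE_STATUS_SUCCESS", "SUMMARIZE_STATUS_CACHED"):
        return "use-summary"        # resp["summary"] holds the text
    if resp.get("fallback"):
        return "try-next-provider"  # advance the fallback chain
    if status == "SUMMARIZE_STATUS_SKIPPED":
        return "skipped"            # see resp["statusDetail"] for the reason
    return "error"                  # surface resp["error"] / resp["errorType"]

handle_summary_response('{"status": "SUMMARIZE_STATUS_CACHED", "summary": "..."}')
# → "use-summary"
```

Note that fallback is checked before treating an error as terminal: a failed response with fallback set to true asks the client to retry with the next provider rather than report the error.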