Build on the grammar and Shenute AI APIs
The grammar API exposes a read-only, versioned dataset for lessons, concepts, examples, exercises, footnotes, and sources, while /api/shenute powers Shenute AI interactions with provider selection and OCR-backed image context.
Start here
Most integrations should begin with the API index, which documents the available resource families and the current dataset version.
Typical workflow
- Call /api/v1/grammar to discover the current endpoints and dataset version.
- Fetch /api/v1/grammar/lessons for the published lesson index.
- Load /api/v1/grammar/lessons/[slug] for full lesson payloads.
- Use /api/openapi.json when generating clients or importing the schema into tooling.
- Send POST /api/shenute requests for Shenute AI responses (default provider: openrouter).
- Send image OCR requests to POST /api/ocr so Coptic Compass forwards them to OCR_SERVICE_URL.
High-value endpoints
/api/v1/grammar
Discovery index for the public grammar API.
/api/v1/grammar/lessons?status=published
Published lesson index for public integrations.
/api/v1/grammar/manifest
Manifest with dataset-level metadata and counts.
/api/openapi.json
Machine-readable OpenAPI document.
/api/shenute
Shenute AI endpoint with provider routing and fallback handling.
/api/ocr
OCR proxy endpoint that forwards image uploads to OCR_SERVICE_URL.
Integration notes
- Responses are read-only and versioned with schemaVersion, datasetVersion, and generatedAt metadata.
- The public dataset only exposes published lessons and their related concepts, examples, exercises, footnotes, and sources.
- The lesson filter accepts either a lesson slug or a canonical lesson id.
- For browser apps on another origin, a backend proxy is the safest default.
- Shenute AI supports provider values: openrouter, gemini, and hf.
- Image upload and camera capture flows run OCR first and append extracted text under [Image OCR Context] before calling /api/shenute.
- Set OCR_SERVICE_URL and optionally OCR_UPLOAD_FIELD when your OCR backend requires a specific multipart field name.
- The /api/ocr endpoint proxies multipart OCR uploads and returns upstream OCR responses to the client.
Example request
A minimal server-side fetch that lists published lesson titles.
const response = await fetch(
  "https://kyrilloswannes.com/api/v1/grammar/lessons",
);
if (!response.ok) throw new Error(`Grammar API error: ${response.status}`);
const payload = await response.json();
const lessonTitles = payload.data.map((lesson) => lesson.title.en);
Related resources
Swagger UI
Interactive reference for exploring every endpoint.
OpenAPI JSON
Import into Postman, SDK generators, or internal tooling.
API index
Read the current API capabilities and example routes.
Grammar hub
See the public content the API is exposing.
Shenute AI
Reference UI for provider selection plus OCR-backed image and camera messaging.
OCR proxy endpoint
Send multipart OCR requests without exposing your upstream OCR service URL.
Shenute AI request example
A minimal POST request to /api/shenute using OpenRouter as provider.
const response = await fetch("https://kyrilloswannes.com/api/shenute", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    inferenceProvider: "openrouter",
    messages: [
      {
        id: "u1",
        role: "user",
        parts: [{ type: "text", text: "Translate this Coptic sentence." }],
      },
    ],
  }),
});
const streamOrText = await response.text();
OCR integration notes
Clients call /api/ocr; Coptic Compass forwards the upload to OCR_SERVICE_URL and returns the upstream OCR response.
# .env.local
OCR_SERVICE_URL=https://your-ocr-service/upload
# Optional for strict OCR backends:
OCR_UPLOAD_FIELD=file
curl -X POST "https://kyrilloswannes.com/api/ocr?lang=cop" -F "file=@/path/to/coptic-image.jpg"
# Proxy OCR flow
# 1) client POSTs to /api/ocr
# 2) server forwards to OCR_SERVICE_URL
# 3) upstream OCR response is returned to the client