Models
Islamic Embeddings
Convert Arabic Islamic text into semantic vector embeddings.
Overview
The Kawn Embeddings API converts text into numerical vectors that represent its semantic meaning. It provides a single, streamlined endpoint for generating embeddings.
The flagship model — tbyaan/islamic-embedding-tbyaan-v1 — is fine-tuned on Arabic Islamic data including Hadiths, Fatawy (legal rulings), and classical Islamic books. It understands nuanced Sharia and Fiqh terminology, making it ideal for:
- Semantic Search — Find documents that mean the same thing as a query even without shared keywords.
- RAG (Retrieval Augmented Generation) — Retrieve relevant context for LLMs.
- Clustering — Group similar texts by meaning.
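The use cases above all reduce to comparing embedding vectors. A minimal sketch of semantic-search-style ranking with cosine similarity follows; the vectors are made-up toy values, not real output of tbyaan/islamic-embedding-tbyaan-v1 (real embeddings have many more dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# In practice these vectors would come from the embeddings endpoint.
query = [0.9, 0.1, 0.0]
docs = {
    "doc_a": [0.8, 0.2, 0.1],  # points in a similar direction to the query
    "doc_b": [0.0, 0.1, 0.9],  # points elsewhere
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
print(ranked)
```

Because cosine similarity compares vector directions rather than surface keywords, documents that share meaning with the query rank highly even with no word overlap.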
Endpoint
POST /v1/embeddings

Generate embeddings for text input. This endpoint is OpenAI-compatible — it uses the same request and response format as the OpenAI Embeddings API, making it easy to integrate with existing tooling built for OpenAI.
Requires an API Key passed via the x-api-key header.
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The embedding model to use. See Models. |
| input | string \| string[] \| number[] \| number[][] | Yes | Text or token IDs to embed. |
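The accepted `input` shapes can be sketched as JSON payloads. The Arabic strings and token IDs below are placeholder values for illustration, not real tokenizer output:

```python
import json

MODEL = "tbyaan/islamic-embedding-tbyaan-v1"

# A single string, a batch of strings, or pre-tokenized token IDs.
single = {"model": MODEL, "input": "ما حكم صلاة الجماعة؟"}
batch = {"model": MODEL, "input": ["النص الأول", "النص الثاني"]}
token_ids = {"model": MODEL, "input": [101, 2054, 102]}  # hypothetical token IDs

for payload in (single, batch, token_ids):
    # ensure_ascii=False keeps Arabic text readable in the serialized body.
    print(json.dumps(payload, ensure_ascii=False))
```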
Example Request
curl -X POST https://api-dev.kawn.io/v1/embeddings \
-H "x-api-key: <YOUR_API_KEY>" \
-H "Content-Type: application/json" \
-d '{
"model": "tbyaan/islamic-embedding-tbyaan-v1",
"input": "ما حكم الصلاة في الأوقات المنهي عنها؟"
}'

Models
| Model ID | Provider | Description |
|---|---|---|
| tbyaan/islamic-embedding-tbyaan-v1 | Tbyaan | Fine-tuned on Arabic Islamic corpus (Hadiths, Fatawy, classical books). Recommended for Islamic domain tasks. |
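The curl request shown earlier can also be issued from Python's standard library. This is a sketch, not an official client: the KAWN_API_KEY environment variable name is an assumption, and the request is only sent when a key is actually configured:

```python
import json
import os
import urllib.request

API_URL = "https://api-dev.kawn.io/v1/embeddings"

payload = {
    "model": "tbyaan/islamic-embedding-tbyaan-v1",
    "input": "ما حكم الصلاة في الأوقات المنهي عنها؟",
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload, ensure_ascii=False).encode("utf-8"),
    headers={
        # KAWN_API_KEY is a hypothetical variable name; substitute your own key.
        "x-api-key": os.environ.get("KAWN_API_KEY", "<YOUR_API_KEY>"),
        "Content-Type": "application/json",
    },
    method="POST",
)

# Only hit the network when a real key is present.
if os.environ.get("KAWN_API_KEY"):
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
        print(len(result["data"]["embedding"]))
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries pointed at this base URL should also work, though authentication here uses the x-api-key header rather than a Bearer token.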
Response
{
"data": {
"object": "embedding",
"index": 0,
"embedding": [0.021, -0.003, 0.118, "..."]
},
"model": "tbyaan/islamic-embedding-tbyaan-v1",
"usage": {
"promptTokens": 12,
"totalTokens": 12
}
}

| Field | Type | Description |
|---|---|---|
| data.embedding | number[] | The vector representation of the input. |
| data.object | string | Always "embedding". |
| data.index | number | Position of this embedding in a batch request. |
| model | string | The model used to generate the embedding. |
| usage.promptTokens | number | Number of tokens in the input. |
| usage.totalTokens | number | Total tokens processed. |
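Consuming the response is straightforward. The sketch below mirrors the documented shape with the illustrative values from the example above (a real embedding vector is much longer than the truncated three values shown):

```python
import math

# Sample response mirroring the documented shape; values are illustrative.
response = {
    "data": {
        "object": "embedding",
        "index": 0,
        "embedding": [0.021, -0.003, 0.118],
    },
    "model": "tbyaan/islamic-embedding-tbyaan-v1",
    "usage": {"promptTokens": 12, "totalTokens": 12},
}

# Pull out the vector and inspect it, e.g. compute its Euclidean norm
# before storing it in a vector database or similarity index.
vector = response["data"]["embedding"]
norm = math.sqrt(sum(x * x for x in vector))
print(response["usage"]["totalTokens"], len(vector), round(norm, 4))
```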