
Server-side Integration

Build secure backend integrations using Node.js, Python, Go, and PHP.

Server-side integration is the only secure way to use Addis AI in production. By routing requests through your own server, you keep your API keys hidden from users and gain control over rate limiting and logging.

Architecture Overview

Your server acts as a secure intermediary: the client talks to your server, and your server talks to Addis AI.

[Flow diagram] Client App → sends data → Your Backend (secure zone, adds the API key) → Addis AI

Client Request: The user's device sends a simple payload (e.g., { "message": "Hello" }) to your backend. It does not send the API key.

Server Authentication: Your server receives the request, validates the user's session, and injects the X-API-Key header from your server-side environment variables.

AI Processing: Your server forwards the authenticated request to Addis AI. The AI processes it and sends the response back to your server.

Response: Your server sends the final answer back to the client app.
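For contrast, here is what the client-side call might look like (a minimal sketch; the /api/chat route matches the server examples below, and your-backend.example.com is a placeholder). Notice that no API key appears anywhere in this code:

// Client-side (browser): the API key never appears here.
const res = await fetch('https://your-backend.example.com/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello' })
});
const data = await res.json();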


Why Server-Side?

Direct client-side calls (from a browser or mobile app) expose your credentials. Routing requests through your own server offers critical advantages:

Security

Keep your API keys secure on your server. Never expose sk_ keys in frontend code.

CORS & Network

Avoid Cross-Origin (CORS) errors that occur when calling APIs directly from a browser.

Request Validation

Validate and sanitize user inputs on your backend before they ever reach the Addis AI API.

Caching & Cost

Cache common responses (e.g. FAQs) in Redis/Database to reduce API calls and save money.
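As a sketch of that caching idea (assuming the ioredis package; the key scheme, TTL, and the callAddisAI helper are illustrative, not part of the Addis AI API):

const Redis = require('ioredis');
const crypto = require('crypto');
const redis = new Redis();

// Wrap any Addis AI call with a Redis cache keyed on a hash of the message.
async function cachedChat(message, callAddisAI) {
  const key = 'chat:' + crypto.createHash('sha256').update(message).digest('hex');
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit);                         // Cache hit: no API call, no cost
  const data = await callAddisAI(message);                 // Cache miss: call the API
  await redis.set(key, JSON.stringify(data), 'EX', 3600);  // Expire after 1 hour
  return data;
}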


Security Essentials

Why a Proxy?

If you put your API key in a mobile app or website, anyone can extract it from the shipped code. A server-side proxy injects the key on the server, where clients can never see it.


Integration Patterns

Choose the pattern that matches your data requirements.

Standard Implementation (Text)

Use this for chatbots, translation, or simple text generation where waiting 2-3 seconds for a response is acceptable.

Node.js (Express):

const express = require('express');
const fetch = require('node-fetch'); // Node 18+ ships a global fetch; this package is only needed on older versions
require('dotenv').config();

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  const { message } = req.body;

  try {
    const response = await fetch('https://api.addisassistant.com/api/v1/chat_generate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': process.env.ADDIS_AI_KEY
      },
      body: JSON.stringify({
        model: "Addis-፩-አሌፍ",
        prompt: message,
        target_language: "am",
        generation_config: { temperature: 0.7 }
      })
    });

    const data = await response.json();
    res.status(response.status).json(data);

  } catch (error) {
    res.status(500).json({ error: "Internal Server Error" });
  }
});

app.listen(3000, () => console.log('Server running on port 3000'));

Python (FastAPI):

import os
import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
ADDIS_KEY = os.getenv("ADDIS_AI_KEY")

class ChatRequest(BaseModel):
    message: str

@app.post("/api/chat")
async def chat_handler(req: ChatRequest):
    url = "https://api.addisassistant.com/api/v1/chat_generate"
    payload = {
        "model": "Addis-፩-አሌፍ",
        "prompt": req.message,
        "target_language": "am"
    }
    headers = {"X-API-Key": ADDIS_KEY}

    # Allow up to 60 seconds for generation (httpx defaults to 5 seconds).
    async with httpx.AsyncClient(timeout=60) as client:
        resp = await client.post(url, json=payload, headers=headers)
        if resp.status_code != 200:
            raise HTTPException(status_code=resp.status_code, detail=resp.text)
        return resp.json()

Go (Gin):

package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"os"
	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	apiKey := os.Getenv("ADDIS_AI_KEY")

	r.POST("/api/chat", func(c *gin.Context) {
		var input struct {
			Message string `json:"message"`
		}
		if err := c.BindJSON(&input); err != nil {
			c.JSON(http.StatusBadRequest, gin.H{"error": "invalid request body"})
			return
		}

		payload := map[string]interface{}{
			"model": "Addis-፩-አሌፍ",
			"prompt": input.Message,
			"target_language": "am",
		}
		jsonValue, _ := json.Marshal(payload)

		req, _ := http.NewRequest("POST", "https://api.addisassistant.com/api/v1/chat_generate", bytes.NewBuffer(jsonValue))
		req.Header.Set("Content-Type", "application/json")
		req.Header.Set("X-API-Key", apiKey)

		client := &http.Client{}
		resp, err := client.Do(req)
		if err != nil {
			c.JSON(http.StatusBadGateway, gin.H{"error": "upstream request failed"})
			return
		}
		defer resp.Body.Close()

		var result map[string]interface{}
		json.NewDecoder(resp.Body).Decode(&result)
		c.JSON(resp.StatusCode, result)
	})
	r.Run()
}

PHP (Laravel):

use Illuminate\Support\Facades\Http;
use Illuminate\Http\Request;

Route::post('/api/chat', function (Request $request) {
    $response = Http::withHeaders([
        'X-API-Key' => env('ADDIS_AI_KEY'), // env() returns null once config is cached; prefer a config() value in production
        'Content-Type' => 'application/json',
    ])->post('https://api.addisassistant.com/api/v1/chat_generate', [
        'model' => 'Addis-፩-አሌፍ',
        'prompt' => $request->input('message'),
        'target_language' => 'am'
    ]);

    return response()->json($response->json(), $response->status());
});

Streaming Implementation

When using stream: true, the API sends the response in newline-delimited JSON chunks. Your server must forward each chunk to the client as it arrives; buffering the full response would add the very latency streaming is meant to avoid.

Node.js (Express):

app.post('/api/chat/stream', async (req, res) => {
  const { message } = req.body;

  try {
    const response = await fetch('https://api.addisassistant.com/api/v1/chat_generate', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-API-Key': process.env.ADDIS_AI_KEY
      },
      body: JSON.stringify({
        prompt: message,
        target_language: "am",
        generation_config: { stream: true } // Enable streaming
      })
    });

    // Set headers for streaming
    res.setHeader('Content-Type', 'application/x-ndjson');
    res.setHeader('Transfer-Encoding', 'chunked');

    // Pipe the AI stream directly to the client response.
    // Note: node-fetch v2 returns a Node stream; with native fetch (Node 18+),
    // convert first: require('stream').Readable.fromWeb(response.body).pipe(res)
    response.body.pipe(res);

  } catch (error) {
    res.status(500).end();
  }
});

Python (FastAPI):

from fastapi.responses import StreamingResponse

@app.post("/api/chat/stream")
async def chat_stream(req: ChatRequest):
    url = "https://api.addisassistant.com/api/v1/chat_generate"
    payload = {
        "prompt": req.message,
        "target_language": "am",
        "generation_config": {"stream": True}
    }
    headers = {"X-API-Key": ADDIS_KEY}

    async def iter_stream():
        async with httpx.AsyncClient() as client:
            async with client.stream("POST", url, json=payload, headers=headers) as r:
                async for chunk in r.aiter_bytes():
                    yield chunk

    return StreamingResponse(iter_stream(), media_type="application/x-ndjson")
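
On the client, the NDJSON stream from either proxy can be consumed incrementally. A browser-side sketch (the schema of each JSON object depends on the API's streaming response format):

const res = await fetch('/api/chat/stream', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello' })
});
const reader = res.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // Keep any partial line for the next chunk
  for (const line of lines) {
    if (line.trim()) console.log(JSON.parse(line)); // One JSON object per line
  }
}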

File Uploads (Multipart)

When using the Vision features (images) with the chat endpoint, your server must handle the file upload and forward it as multipart/form-data.

Node.js Example (using multer):

const multer = require('multer');
const FormData = require('form-data');
const upload = multer(); // Memory storage

// Route: /api/upload
app.post('/api/upload', upload.single('image'), async (req, res) => {
  try {
    const formData = new FormData();
    
    // 1. Append the file buffer from the request
    formData.append('image', req.file.buffer, {
      filename: req.file.originalname,
      contentType: req.file.mimetype
    });

    // 2. Append the JSON metadata
    formData.append('request_data', JSON.stringify({
      prompt: "Describe this image",
      target_language: "am"
    }));

    // 3. Send to the Addis AI chat endpoint (assumes node-fetch v2; with
    //    native fetch in Node 18+, use the built-in FormData and Blob instead)
    const response = await fetch("https://api.addisassistant.com/api/v1/chat_generate", {
      method: "POST",
      headers: {
        "X-API-Key": process.env.ADDIS_AI_KEY,
        ...formData.getHeaders() // Important: Sets boundary headers
      },
      body: formData
    });

    const data = await response.json();
    res.json(data);

  } catch (error) {
    res.status(500).json({ error: "Upload failed" });
  }
});

Production Readiness

Moving from localhost to production requires handling security, scalability, and deployment.

Security & Architecture

API Key Security

Environment Variables: Store keys in ENV vars, never in code.

Access Control: Limit which servers/processes can access the keys and rotate them regularly.
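
A common pattern is to fail fast at startup if the key is missing, rather than discovering it on the first request (a sketch, using dotenv as in the examples above):

require('dotenv').config();

const ADDIS_AI_KEY = process.env.ADDIS_AI_KEY;
if (!ADDIS_AI_KEY) {
  // Crash immediately with a clear message instead of sending unauthenticated requests.
  throw new Error('ADDIS_AI_KEY environment variable is not set');
}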

Input Validation

Sanitization: Validate all user inputs on your backend to prevent malicious prompts or excessively long inputs before forwarding.
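
For example, a small Express middleware that rejects empty or oversized prompts before they reach the API (the 2,000-character limit is illustrative):

// Reject invalid messages early; adjust limits to your use case.
function validateMessage(req, res, next) {
  const { message } = req.body;
  if (typeof message !== 'string' || message.trim().length === 0) {
    return res.status(400).json({ error: 'message must be a non-empty string' });
  }
  if (message.length > 2000) {
    return res.status(400).json({ error: 'message exceeds the 2000-character limit' });
  }
  next();
}

app.post('/api/chat', validateMessage, async (req, res) => { /* ... */ });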

Rate Limiting

Throttling: Implement per-user rate limiting (e.g., using Redis) to prevent abuse and control your billing costs.
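
A minimal per-client throttle, assuming the express-rate-limit package (this uses an in-memory store; for multi-instance deployments, pair it with a Redis store):

const rateLimit = require('express-rate-limit');

// Allow 20 requests per minute per IP on all proxy routes.
app.use('/api/', rateLimit({
  windowMs: 60 * 1000,
  max: 20,
  standardHeaders: true, // Report limits via RateLimit-* response headers
}));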

Error Handling

Graceful Failures: Log errors internally for debugging but return generic, safe error messages to the client to avoid leaking stack traces.
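
In Express, a centralized error handler covers this (sketch):

// Log full details server-side; send only a generic message to the client.
app.use((err, req, res, next) => {
  console.error(err); // Stack traces stay in your logs
  res.status(500).json({ error: 'Something went wrong' });
});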

Deployment & Containerization

To scale beyond a single machine, containerize your proxy and run multiple instances behind a load balancer.

Example Dockerfile for Node.js Proxy:

FROM node:18-alpine

WORKDIR /app

# Install dependencies first so Docker caches this layer
COPY package*.json ./
RUN npm ci --omit=dev

# Copy source code
COPY . .

# Environment variables should be injected at runtime, not build time
ENV PORT=3000
EXPOSE 3000

CMD ["node", "app.js"]

Best Practices Checklist

Go-Live Checklist

  • Security: API Key is stored in ENV, not Git.
  • Validation: Inputs are checked for length and content.
  • Auth: Your proxy endpoint is protected (JWT/Session).
  • Reliability: Timeout is set to 60s+ to handle AI generation time (see the sketch after this list).
  • HTTPS: All traffic is encrypted via TLS/SSL.
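
For the timeout item, one approach is to wrap fetch with an AbortController (a sketch; the 90-second budget is illustrative):

// Abort the upstream request if it exceeds the time budget.
async function fetchWithTimeout(url, options = {}, ms = 90_000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}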

Next Steps

Now that your server is secure, expand your integration:
