SDK Documentation

Akioudai Safety SDK

Comprehensive documentation for implementing robust AI safety measures with our enterprise SDK

Introduction

The Akioudai Safety SDK provides enterprise-grade tools for implementing robust AI safety measures in machine learning models. It helps you build safer AI systems with tooling for alignment, monitoring, validation, and deployment.

Agent Alignment

Fine-tune LLMs with robust safety and alignment techniques to ensure reliable and appropriate behavior.

Safety Guardrails

Implement robust safety boundaries and monitoring to prevent misuse and unsafe outputs.

Runtime Validation

Validate model behavior during execution to ensure compliance with safety parameters.

Behavioral Testing

Comprehensive testing of model behaviors across a wide range of scenarios and edge cases.

Deployment Safety

Secure deployment with continuous monitoring and safety checks throughout the model lifecycle.

Installation

The Akioudai Safety SDK is available as a private npm package. To request access, please contact our sales team or use the request access button below.

Prerequisites

  • Node.js 14 or higher
  • An active Akioudai subscription
  • API credentials from your dashboard

Once you have access, install the SDK using npm or yarn:

Installation
# Using npm
npm install @akioudai/safety-sdk

# Using yarn
yarn add @akioudai/safety-sdk

Quick Start

Get started quickly with the Safety SDK by implementing basic safety guardrails for your AI model:

Basic Usage
import { SafetyGuard } from '@akioudai/safety-sdk'

// Initialize with API key from your dashboard
const guard = new SafetyGuard({
  apiKey: 'your_api_key_here',
  boundaries: ['ethics', 'bias', 'toxicity'],
  threshold: 0.95
});

// Monitor model outputs (modelOutput is the response text produced by your model)
guard.monitor(modelOutput, {
  realtime: true,
  callbacks: {
    onViolation: (violation) => {
      console.log('Safety violation detected:', violation)
    }
  }
});

This example initializes a SafetyGuard instance and monitors model outputs for potential safety violations in real time. When a violation is detected, the onViolation callback is invoked, allowing you to take appropriate action.
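
Beyond logging, the onViolation callback is a natural place to attach your own remediation, such as forwarding the violation to an alerting pipeline or withholding the flagged output. The sketch below is illustrative rather than definitive: sendAlert is an application-level helper (shown here as a stub), and the exact shape of the violation object is an assumption, not a documented contract.

Handling Violations
// Application-level alert helper (stub); replace with your alerting integration
const sendAlert = (event) =>
  fetch('https://alerts.example.com/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event)
  });

guard.monitor(modelOutput, {
  realtime: true,
  callbacks: {
    onViolation: (violation) => {
      // Log for debugging, then forward to your on-call channel
      console.warn('Safety violation detected:', violation);
      sendAlert({ source: 'akioudai-safety-sdk', detail: violation });
    }
  }
});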

Usage with Agent Alignment

For more advanced use cases, you can use the AgentAlignment module to fine-tune LLMs with robust safety and alignment techniques:

Agent Alignment Example
import { AgentAlignment } from '@akioudai/safety-sdk'

const aligner = new AgentAlignment({
  objectives: ['task_completion', 'safety', 'truthfulness'],
  frameworks: ['hipaa', 'gdpr', 'nist'],
  threshold: 0.98
});

// Apply alignment to agent model
aligner.alignAgent(agentModel, {
  realtime: true,
  callbacks: {
    onDrift: (drift) => {
      console.log('Alignment drift detected:', drift)
    }
  }
});
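
As with SafetyGuard, the onDrift callback is where you decide how to react. The sketch below extends the example above with one possible response: taking the agent out of rotation and queuing a fresh alignment pass. pauseAgent and scheduleRealignment are hypothetical application-level hooks, not SDK calls.

Responding to Drift
// Hypothetical application hooks (stubs); replace with your own orchestration
const pauseAgent = (model) => console.log('Agent paused pending review');
const scheduleRealignment = (model, opts) =>
  console.log('Re-alignment queued:', opts.reason);

aligner.alignAgent(agentModel, {
  realtime: true,
  callbacks: {
    onDrift: (drift) => {
      console.warn('Alignment drift detected:', drift);
      pauseAgent(agentModel);
      scheduleRealignment(agentModel, { reason: drift });
    }
  }
});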

Authentication

The SDK requires API credentials to authenticate with the Akioudai platform. You can obtain these credentials from your dashboard.

Important Security Note

Never hardcode your API credentials in your source code or commit them to version control. Use environment variables or a secure configuration management system.

Setting up Environment Variables

We recommend using environment variables to securely store your API credentials:

Environment Variables
# .env file
AKIOUDAI_API_KEY=your_api_key_here
AKIOUDAI_API_SECRET=your_api_secret_here

// In your code
import { SafetySDK } from '@akioudai/safety-sdk'

const sdk = new SafetySDK({
  apiKey: process.env.AKIOUDAI_API_KEY,
  apiSecret: process.env.AKIOUDAI_API_SECRET
});
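
Node.js does not load .env files automatically. If your deployment environment does not inject these variables for you, one common approach is the dotenv package, imported before the SDK client is constructed:

Loading the .env File
// Install first: npm install dotenv
import 'dotenv/config'
import { SafetySDK } from '@akioudai/safety-sdk'

const sdk = new SafetySDK({
  apiKey: process.env.AKIOUDAI_API_KEY,
  apiSecret: process.env.AKIOUDAI_API_SECRET
});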

Agent Alignment

The Agent Alignment module provides tools to align AI agents with specific objectives, safety guidelines, and regulatory frameworks.