Features

AI Controller offers a comprehensive set of features designed to enhance your organization's use of Large Language Models (LLMs) while maintaining security, control, and visibility.

For a deeper understanding of how these features work together, see the Concepts section, which covers the underlying architecture, data flow, security model, and governance framework.

Core Features

AI Controller provides several key capabilities to help you manage LLM usage in your organization:

Access Control

AI Controller implements a multi-layered access control system that provides fine-grained control over administrative functions, LLM access, feature access, and data access:

  • Role-based access control for administrative functions
  • User and group management
  • Permission management for different system capabilities
  • Integration with the Rules Engine for request-level access control
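
As a rough illustration of how these layers compose, the sketch below pairs role-based checks for administrative functions with request-level checks delegated to rule evaluation. The role names, permissions, and functions are invented for this example and do not reflect AI Controller's actual schema.

    # Illustrative only: models layered access control, not AI Controller's API.
    ROLES = {
        "admin": {"manage_keys", "manage_rules", "view_logs"},
        "analyst": {"view_logs"},
    }

    def can_perform(role: str, permission: str) -> bool:
        """Role-based check for administrative functions."""
        return permission in ROLES.get(role, set())

    def can_send_request(user_groups: set[str], model: str, rules) -> bool:
        """Request-level check, delegated to rule evaluation (see Rules Engine)."""
        return all(rule(user_groups, model) for rule in rules)

    # An analyst can view logs but cannot manage provider keys.
    assert can_perform("analyst", "view_logs")
    assert not can_perform("analyst", "manage_keys")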

Learn more about Access Control

API Key Management

AI Controller provides centralized management for two types of API keys:

  • Provider API Keys: Secure credentials for accessing external LLM services
  • AI Controller API Keys: Application-specific keys for authenticating with the AI Controller platform

This dual-key architecture creates an abstraction layer between your applications and LLM providers: clients authenticate with AI Controller keys while provider credentials stay on the platform, which improves security, simplifies key management, and gives you granular control over your LLM interactions.
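
As a minimal sketch of this pattern from the client's side, the example below authenticates with an AI Controller key only; the provider credential is attached server-side. The endpoint URL, header, model name, and environment variable are assumptions for illustration, not AI Controller's documented API.

    import os
    import requests

    # The application holds only an AI Controller key (hypothetical variable name).
    aictrl_key = os.environ["AICTRL_KEY"]

    # AI Controller attaches the provider API key server-side; the client never sees it.
    resp = requests.post(
        "https://aicontroller.example.com/v1/chat",  # placeholder URL
        headers={"Authorization": f"Bearer {aictrl_key}"},
        json={"model": "gpt-4o", "messages": [{"role": "user", "content": "Hi"}]},
        timeout=30,
    )
    print(resp.status_code, resp.json())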

Learn more about API Key Management

Cost Management

AI Controller helps you control and optimize your LLM expenditures through:

  • Usage tracking and detailed monitoring of requests
  • Basic usage metrics, including request counts, providers and models used, and request lengths
  • Integration with caching and rules to enforce cost-efficient policies
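
As a worked example, metrics like these can be turned into a spend estimate. The field names and per-1,000-token prices below are placeholders; substitute your providers' actual pricing.

    # Back-of-the-envelope cost estimate from aggregated usage metrics.
    usage = [
        {"model": "model-a", "prompt_tokens": 120_000, "completion_tokens": 40_000},
        {"model": "model-b", "prompt_tokens": 30_000, "completion_tokens": 9_000},
    ]
    price_per_1k = {  # (prompt, completion) USD per 1,000 tokens - placeholders
        "model-a": (0.005, 0.015),
        "model-b": (0.0005, 0.0015),
    }

    total = 0.0
    for row in usage:
        prompt_price, completion_price = price_per_1k[row["model"]]
        total += row["prompt_tokens"] / 1000 * prompt_price
        total += row["completion_tokens"] / 1000 * completion_price
    print(f"Estimated spend: ${total:.2f}")  # $1.23 for the figures above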

Learn more about Cost Management

Logging and Monitoring

AI Controller captures comprehensive logs and provides monitoring capabilities for system activities:

  • Track all LLM requests and responses
  • Access logs through web interface or API
  • View system operations and performance metrics
  • Use logs for security auditing, performance optimization, and troubleshooting
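
A minimal sketch of pulling logs programmatically might look like the following; the endpoint path, query parameters, and response shape are assumptions for illustration, not AI Controller's documented API.

    import os
    import requests

    resp = requests.get(
        "https://aicontroller.example.com/api/logs",  # placeholder URL
        headers={"Authorization": f"Bearer {os.environ['AICTRL_KEY']}"},
        params={"since": "2025-05-01T00:00:00Z", "limit": 100},
        timeout=30,
    )
    for entry in resp.json().get("entries", []):  # assumed response shape
        print(entry["timestamp"], entry["user"], entry["model"], entry["status"])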

Learn more about Logging and Monitoring

Performance Optimization

AI Controller includes features to optimize performance and reduce latency:

  • Response caching for faster results
  • Configuration guidance for different scenarios
  • Database and network optimization options
  • Performance monitoring and troubleshooting tools
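
One simple way to verify the effect of optimizations such as caching is to time the same request twice. The probe below assumes a placeholder endpoint and payload; point it at your own deployment.

    import os
    import time
    import requests

    url = "https://aicontroller.example.com/v1/chat"  # placeholder URL
    payload = {"model": "model-a", "messages": [{"role": "user", "content": "ping"}]}
    headers = {"Authorization": f"Bearer {os.environ['AICTRL_KEY']}"}

    # With response caching enabled, the second attempt should be much faster.
    for attempt in ("cold", "warm"):
        start = time.perf_counter()
        requests.post(url, headers=headers, json=payload, timeout=60)
        print(f"{attempt}: {time.perf_counter() - start:.3f}s")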

Learn more about Performance Optimization

Response Caching

AI Controller's caching system helps optimize costs and performance:

  • Store responses from LLM providers and reuse them for identical requests
  • Reduce costs and latency by eliminating repeat provider calls
  • Ensure consistency in LLM responses
  • Enhance system reliability during provider outages
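
Conceptually, a response cache keys on the request content so that identical requests resolve to the same entry. The sketch below models that idea in miniature; it is not AI Controller's internal implementation.

    import hashlib
    import json

    cache: dict[str, str] = {}

    def cache_key(request: dict) -> str:
        # Canonical JSON so identical requests always hash to the same key.
        return hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()

    def call_provider(request: dict) -> str:
        return "stubbed provider response"  # stand-in for the real LLM call

    def complete(request: dict) -> str:
        key = cache_key(request)
        if key not in cache:
            cache[key] = call_provider(request)  # miss: pay for one provider call
        return cache[key]                        # hit: no cost, low latency

    complete({"model": "model-a", "prompt": "hello"})  # miss
    complete({"model": "model-a", "prompt": "hello"})  # identical request: hit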

Learn more about Response Caching

Rules Engine

The Rules Engine allows administrators to create and enforce rules governing LLM interactions:

  • Apply fine-grained model-access controls by user or group
  • Define allowed providers
  • Implement regex-based model filtering (see the sketch below)
  • Create complex access policies
  • Evaluate rules in real time for each request
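
To make regex-based filtering concrete, the sketch below evaluates invented rules that map groups to permitted model-name patterns. The rule structure and model names are illustrative; real rules are defined through AI Controller's administrative interface.

    import re

    rules = [
        {"group": "engineering", "allow_models": re.compile(r"^gpt-4.*")},
        {"group": "support", "allow_models": re.compile(r"^gpt-3\.5.*")},
    ]

    def is_allowed(user_groups: set[str], model: str) -> bool:
        """A request passes if a rule for one of the user's groups matches the model."""
        return any(
            rule["group"] in user_groups and rule["allow_models"].match(model)
            for rule in rules
        )

    assert is_allowed({"engineering"}, "gpt-4o")
    assert not is_allowed({"support"}, "gpt-4o")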

Learn more about the Rules Engine

Prompt an LLM

The "Prompt an LLM" feature provides a direct interface to test your configured language models and verify API key functionality:

  • Quickly verify that provider API keys are working correctly
  • Test specific prompts against configured language models
  • View raw API responses from language model providers
  • Troubleshoot connectivity or configuration issues
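
The same check can be scripted against the API. The sketch below sends a single prompt and prints the raw response for troubleshooting; the endpoint path and payload shape are assumptions, not AI Controller's documented API.

    import os
    import requests

    resp = requests.post(
        "https://aicontroller.example.com/v1/chat",  # placeholder URL
        headers={"Authorization": f"Bearer {os.environ['AICTRL_KEY']}"},
        json={"model": "model-a", "messages": [{"role": "user", "content": "Say hello."}]},
        timeout=60,
    )
    print(resp.status_code)
    print(resp.text)  # raw provider response, useful when diagnosing failures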

Learn more about Prompt an LLM


Updated: 2025-05-15