# Macha - AI-Powered Autonomous System Administrator
Macha is an AI-powered autonomous system administrator for NixOS. It monitors system health, diagnoses issues, and can take corrective actions through configurable approval workflows.
## Features
- **Autonomous Monitoring**: Continuous health checks with configurable intervals
- **Multi-Host Management**: SSH-based management of multiple NixOS hosts
- **Tool Calling**: Comprehensive system administration tools via Ollama LLM
- **Queue-Based Architecture**: Serialized LLM requests to prevent resource contention
- **Knowledge Base**: ChromaDB-backed learning system for operational wisdom
- **Approval Workflows**: Safety-first approach with configurable autonomy levels
- **Notification System**: Gotify integration for alerts
## Quick Start
### As a NixOS Flake Input
Add to your `flake.nix`:
```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    macha-autonomous.url = "git+https://git.coven.systems/lily/macha-autonomous";
  };

  outputs = { self, nixpkgs, macha-autonomous }: {
    nixosConfigurations.yourhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        macha-autonomous.nixosModules.default
        {
          services.macha-autonomous = {
            enable = true;
            autonomyLevel = "suggest"; # observe, suggest, auto-safe, auto-full
            checkInterval = 300;
            ollamaHost = "http://localhost:11434";
            model = "gpt-oss:latest";
          };
        }
      ];
    };
  };
}
```
## Configuration Options
See `module.nix` for full configuration options including:
- Autonomy levels (observe, suggest, auto-safe, auto-full)
- Check intervals
- Ollama host and model settings
- Git repository monitoring
- Service user/group configuration
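The autonomy levels form an escalating scale of permitted behavior. A minimal sketch of how a gate over these levels might work (the `AutonomyLevel` enum and `may_execute` helper are illustrative, not the actual module implementation):

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Escalating autonomy levels, mirroring the module's option values."""
    OBSERVE = 0    # report findings only, never act
    SUGGEST = 1    # propose actions, all of them require approval
    AUTO_SAFE = 2  # execute actions classified as safe, ask otherwise
    AUTO_FULL = 3  # execute any action without approval

def may_execute(level: AutonomyLevel, action_is_safe: bool) -> bool:
    """Return True if an action may run without human approval."""
    if level == AutonomyLevel.AUTO_FULL:
        return True
    if level == AutonomyLevel.AUTO_SAFE:
        return action_is_safe
    return False  # observe and suggest never act autonomously

# At "auto-safe", a safe service restart runs; a risky action waits for approval.
print(may_execute(AutonomyLevel.AUTO_SAFE, action_is_safe=True))   # True
print(may_execute(AutonomyLevel.AUTO_SAFE, action_is_safe=False))  # False
```

Anything the gate rejects would fall through to the approval workflow (see `macha-approve` below).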
## CLI Tools
- `macha-chat` - Interactive chat interface
- `macha-ask` - Single-question interface
- `macha-check` - Trigger immediate health check
- `macha-approve` - Approve pending actions
- `macha-logs` - View service logs
- `macha-issues` - Query issue database
- `macha-knowledge` - Query knowledge base
- `macha-systems` - List managed systems
- `macha-notify` - Send Gotify notification
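A `macha-ask`-style single question reduces to one chat request against the Ollama HTTP API. A hedged sketch of that call, using only the standard library (the real tools use `requests`; the endpoint and payload shape follow Ollama's documented `/api/chat` format, while `build_payload`, `ask`, and the host/model values are illustrative):

```python
import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # matches the ollamaHost option above
MODEL = "gpt-oss:latest"

def build_payload(question: str) -> dict:
    """Build a non-streaming chat request for Ollama's /api/chat endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": question}],
        "stream": False,  # return one complete answer instead of chunks
    }

def ask(question: str) -> str:
    """Send a single question and return the assistant's reply text."""
    req = urllib.request.Request(
        f"{OLLAMA_HOST}/api/chat",
        data=json.dumps(build_payload(question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

# Example (requires a running Ollama service):
#   print(ask("Is disk usage on /var above 90%?"))
```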
## Architecture
- **Agent**: Core AI logic with tool calling
- **Orchestrator**: Main monitoring loop
- **Executor**: Safe action execution
- **Queue System**: Serialized Ollama requests with priorities
- **Context DB**: ChromaDB for system context and learning
- **Tools**: System administration capabilities
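The queue system's core idea, funneling every LLM call through a single worker so concurrent health checks and chat sessions never contend for the model, can be sketched with a standard priority queue (the `LLMQueue` class and priority constants are illustrative, not the actual implementation):

```python
import itertools
import queue
import threading

HIGH, NORMAL, LOW = 0, 1, 2  # lower number = served first

class LLMQueue:
    """Serialize LLM requests: one worker thread drains a priority queue."""

    def __init__(self):
        self._q = queue.PriorityQueue()
        self._seq = itertools.count()  # tie-breaker keeps FIFO order per priority
        threading.Thread(target=self._worker, daemon=True).start()

    def submit(self, priority, prompt, on_done):
        self._q.put((priority, next(self._seq), prompt, on_done))

    def join(self):
        self._q.join()

    def _worker(self):
        while True:
            _priority, _, prompt, on_done = self._q.get()
            # In the real system this is where the single blocking Ollama call happens.
            on_done(f"answer to: {prompt}")
            self._q.task_done()

# Interactive chat outranks background health checks in the queue.
llmq = LLMQueue()
results = []
llmq.submit(LOW, "routine health check", results.append)
llmq.submit(HIGH, "user chat message", results.append)
llmq.join()  # blocks until both requests have been answered
```

Because the worker is the only consumer, at most one model request is in flight at a time, which is the serialization property the Features section describes.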
## Requirements
- NixOS with flakes enabled
- Ollama service running
- Python 3 with requests, psutil, chromadb
## Documentation
See `DESIGN.md` for comprehensive architecture documentation.
## License
[Add your license here]
## Author
Lily Miller