Architecture FAQ
Design decisions and rationale behind LevitateOS components. Each answer explains why we chose a particular approach over alternatives.
Core Philosophy
What is LevitateOS and why does it exist?
LevitateOS is a daily-driver Linux distribution competing directly with Arch Linux. It borrows Arch's philosophy (users maintain their own packages, minimal base that users build up) but replaces hours of compilation with minutes of extraction from pre-built Rocky Linux packages.
Why not just use Arch?
Arch requires compiling AUR packages. That means build dependencies, compiler toolchains, and waiting. LevitateOS extracts enterprise-tested Rocky RPMs in minutes. Same philosophy, less waiting.
If you want to compile everything yourself, use Arch. If you want the same control without the compilation overhead, use LevitateOS.
Why not Fedora?
Fedora is GNOME-focused and opinionated. It makes choices for you. LevitateOS is a base system - you build what you want on top of it.
Fedora ships a complete desktop experience. LevitateOS ships a foundation. Different goals, different users.
What is LevitateOS NOT?
- An embedded OS (too small, missing capabilities)
- A container base image
- A server-only distro
- A resource-constrained system
LevitateOS targets modern desktop and workstation hardware with full capabilities.
What are the system requirements?
LevitateOS ships with local LLM capabilities. The recommended specs enable running 7B-13B parameter models for AI-assisted workflows. CPU-only inference works with 32GB+ RAM.
Is this just a hobby project?
Boot the ISO. Run the E2E tests. Read the commit history. Code speaks.
The test suite runs full installation workflows in QEMU VMs. The ISO boots on real hardware. Every commit is tested. If something doesn't work, open an issue with specifics.
leviso (ISO Builder)
Why use Rocky Linux 10 instead of building from source?
- Enterprise-grade packages with security patches and stability
- glibc-based (not musl) for maximum compatibility
- No compilation required - binaries are pre-built
- Builds in minutes instead of hours
- Direct RPM extraction using `rpm --root --nodeps`
Rocky packages are tested in enterprise deployments. You get RHEL stability without maintaining a build farm. This decision is non-negotiable.
How do I know the RPMs are safe?
Same way you trust any distro - they're GPG-signed by Rocky Linux. Run `rpm -K /path/to/package.rpm` yourself.
We don't recompile packages. We don't inject code. We don't modify binaries. We extract what Rocky ships and verify signatures match their public keys.
See SUPPLY_CHAIN.md in the repository root for full verification steps.
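The verification steps can be sketched roughly as follows. The key file name and package file name below are illustrative, not the exact artifacts LevitateOS ships; use the key actually published by Rocky Linux.

```shell
# Sketch of manual signature verification (file names are illustrative)
rpm --import RPM-GPG-KEY-Rocky-10            # trust Rocky's public key
rpm -K bash-5.2.26-1.el10.x86_64.rpm         # check digests and signature
# For a correctly signed package, rpm -K reports the digests
# and signatures as OK
```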
Why use EROFS instead of a large initramfs?
The old approach loaded a ~250MB initramfs entirely into RAM. The EROFS architecture provides:
- RAM savings: ~400MB → ~50MB at boot
- Single source of truth: Live environment = Installed system (no duplication)
- Simpler installation: Extract to disk instead of complex copy logic
- Scalable: Easy to add packages without bloating initramfs
The tiny initramfs (~5MB) contains just busybox and mount logic. The EROFS image (~350MB) is mounted read-only with a tmpfs overlay for live modifications.
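The boot-time mount sequence can be sketched roughly as follows. Mount points and the image path are illustrative, not the exact ones the initramfs uses:

```shell
# Rough sketch of what the tiny initramfs does at boot
# (paths are illustrative)
mkdir -p /run/lower /run/overlay
mount -t erofs -o ro,loop /run/levitate.erofs /run/lower
mount -t tmpfs tmpfs /run/overlay
mkdir -p /run/overlay/upper /run/overlay/work
mount -t overlay overlay \
    -o lowerdir=/run/lower,upperdir=/run/overlay/upper,workdir=/run/overlay/work \
    /sysroot
# /sysroot is now writable; changes land in tmpfs and vanish on reboot
```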
Why UKI (Unified Kernel Images) instead of separate vmlinuz + initramfs?
- Single signed artifact - Kernel, initramfs, and cmdline in one PE binary
- Secure Boot ready - Single file to sign and verify
- Auto-discovery - systemd-boot auto-detects UKIs in /EFI/Linux/
- Simpler installation - Copy one file, no boot entry configuration needed
- Modern standard - Used by Fedora, Arch, and systemd-first distros
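One way to assemble such an artifact is systemd's `ukify` (systemd >= 253); leviso's actual build step may differ, and the kernel command line shown is illustrative:

```shell
# Bundle kernel + initramfs + cmdline into a single PE binary (sketch)
ukify build \
    --linux=vmlinuz \
    --initrd=initramfs.img \
    --cmdline="quiet" \
    --output=ESP/EFI/Linux/levitate.efi
# systemd-boot auto-discovers UKIs placed under ESP/EFI/Linux/
```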
Why UEFI-only (no BIOS support)?
LevitateOS targets modern hardware (2013+). UEFI has been standard for over a decade. Dropping BIOS support allows:
- Clean boot stack - systemd-boot + UKI only, no GRUB or isolinux complexity
- Secure Boot path - UKIs are Secure Boot compatible (signing support planned)
- Simpler code - One boot path to test and maintain
If you have hardware from before 2013 that lacks UEFI, LevitateOS is not the right distribution for you.
Why modprobe instead of insmod?
Originally used insmod with manual dependency ordering, which was fragile:
```shell
# Before (fragile - order matters!)
load_module mbcache
load_module jbd2   # Must come after mbcache
load_module ext4   # Must come after jbd2

# After (robust - modprobe handles dependencies)
load_module ext4   # modprobe loads mbcache and jbd2 automatically
```

Benefits: No manual ordering, kernel updates won't break boot, easy to add new modules.
Why readelf instead of ldd for library detection?
ldd executes binaries with the host dynamic linker - broken for cross-compilation:
```rust
// Before (broken for cross-compilation)
let ldd_output = Command::new("ldd").arg(&bin_path).output();
// If host is musl and target is glibc → wrong libraries

// After (works everywhere)
let libs = get_all_dependencies(&ctx.rootfs, &bin_path)?;
// Reads ELF headers directly - no execution
```

readelf reads ELF NEEDED entries directly from the file. No execution, no dynamic linker involvement. Works for any architecture on any host.
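The same information can be inspected by hand with readelf (the target binary here is just an example):

```shell
# List the shared libraries a binary declares, without executing it
readelf -d /bin/ls | grep NEEDED
# Prints one (NEEDED) line per shared library, e.g. libc.so.6
```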
recipe (Package Manager)
Why Rhai instead of Python, Lua, or YAML?
YAML isn't a programming language - you can't write conditionals or loops. Python has dependency hell and isn't embeddable without significant overhead. Lua lacks a standard library. Rhai is Rust-native, sandboxed, and actually programmable.
With Rhai, recipes ARE CODE:
```rhai
let pkg = #{
    name: "ripgrep",
    version: "14.1.0",
};

fn install() {
    let url = `https://github.com/.../ripgrep-${pkg.version}-x86_64-unknown-linux-musl.tar.gz`;
    download(url);
    extract("tar.gz");
    install_bin("rg");
}
```

- Real programming - Variables, conditionals, loops, functions
- Infinite extensibility - Recipes define logic, not just data
- Minimal implementation - Executor just provides helpers + calls acquire(), build(), install()
- Sandboxed - Runs in isolated Rhai VM
Why does state live in recipe files instead of a database?
State lives IN the recipe file itself:
```rhai
let installed = false;       // Set to true after install
let installed_version = ();  // Version that was installed
let installed_at = ();       // Unix timestamp
let installed_files = [];    // List of installed file paths
```

- No database needed - recipe file IS the source of truth
- Self-contained - Everything about a package in one file
- Easy to sync - Just copy recipe files between machines
- Simple to debug - Edit recipe directly to test
- Self-modifying - Engine updates state variables directly in files
To see what's installed: `grep -l "installed = true" recipes/*.rhai`
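A quick way to sanity-check this query, using throwaway recipe files (names and contents are illustrative, not real LevitateOS recipes):

```shell
# Demonstrate the state query on throwaway recipe files
tmp=$(mktemp -d)
printf 'let installed = true;\n'  > "$tmp/ripgrep.rhai"
printf 'let installed = false;\n' > "$tmp/fd.rhai"
grep -l "installed = true" "$tmp"/*.rhai   # prints only ripgrep.rhai
rm -rf "$tmp"
```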
What is the recipe lifecycle?
```
Engine
├── Context (ExecutionContext)
├── Phases (first-class)
│   ├── acquire() - download, copy, verify
│   ├── build()   - extract, compile, configure
│   └── install() - place files in filesystem
└── Utilities (filesystem, io, env, command)
```

Execution order: is_installed() check → acquire() → build() (optional) → install()
How does recipe handle dependencies?
- Source checksums: verify_sha256, verify_sha512, verify_blake3
- Reverse dependency tracking: Find what depends on a package
- Dry run mode: `--dry-run` shows what would be installed
- Version constraints: Semver support (e.g., "openssl >= 1.1.0")
- Atomic installation: Staging + rollback on failure
- Orphan detection: Finds packages installed only as dependencies
- Lock file: TOML-based lockfile with `--locked` flag
Are recipes trusted code? What about security?
Yes, recipes are trusted code. They can execute shell commands, download files, and modify your system. This is intentional - recipes need full access to install software.
- Recipes are code, not config - they can do anything
- Only use recipes from official LevitateOS sources
- You write your own recipes - you control what they do
- Never run recipes downloaded from untrusted sources
This is the same trust model as Arch's PKGBUILD or Gentoo's ebuilds. The user is responsible for reviewing what they install. LevitateOS doesn't sandbox recipes because that would break legitimate functionality.
Testing Philosophy
Why end-to-end (E2E) testing instead of unit tests?
LevitateOS uses 6-phase E2E testing to validate the full installation flow. Unit tests can't catch system integration issues; installation is inherently end-to-end. Tests run in a real QEMU VM with real hardware simulation.
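An E2E boot test invocation might look roughly like this. The firmware path varies by distro, and the image names are examples, not the harness's actual arguments:

```shell
# Illustrative UEFI boot of the ISO under QEMU (names/paths are examples)
qemu-system-x86_64 \
    -m 4G -enable-kvm \
    -bios /usr/share/OVMF/OVMF.fd \
    -cdrom levitateos.iso \
    -drive file=disk.img,format=raw,if=virtio \
    -serial stdio
```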
How does LevitateOS prevent test cheating?
Created cheat-test proc-macro to prevent false-positive tests:
```rust
#[cheat_aware(
    protects = "users can boot installed system",
    severity = "CRITICAL",
    cheats = ["skip verification", "hardcode paths", "ignore errors"],
    consequence = "Users get broken system, can't boot"
)]
fn test_bootloader_install() { ... }
```

Key principle: If users need it → test fails when missing. No "optional" trash bin. Never move missing items from CRITICAL to OPTIONAL just to pass tests.
Lessons Learned
What is "STOP. READ. THEN ACT"?
The most important rule in the codebase:
- STOP - Don't assume you know where something goes
- READ - Read what already exists first
- ACT - Then make informed decisions
This exists because a previous development session broke tests, created code in the wrong location, and deleted it without checking - wasting significant time and resources. Five minutes reading saves hours of cleanup.
What happened with the bootstrap tarball approach?
A costly mistake. The original plan:
```
leviso → bootstrap.tar.xz (~5MB) → Extract → recipe install base → Updates
```

What was built: 34 recipe files, a bootstrap module, a busybox download, static binary compilation - all unnecessary.
The simple solution: The live ISO already contains a complete system in an EROFS image. Just extract it:
```
Boot ISO → Mount /mnt → recstrap /mnt (extracts EROFS) → Done
```

Lesson: Always ask "is this necessary?" before building. The simplest solution is usually correct.
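Spelled out as a manual session, the flow might look like this. Device names and the partition layout are examples; only `recstrap /mnt` itself comes from the documented flow:

```shell
# Manual install flow from the live ISO (device names are examples)
mkfs.vfat -F32 /dev/vda1    # EFI system partition
mkfs.ext4 /dev/vda2         # root partition
mount /dev/vda2 /mnt
mkdir -p /mnt/boot
mount /dev/vda1 /mnt/boot
recstrap /mnt               # extract the EROFS image to disk
# then chroot in and finish configuration (hostname, users, bootloader)
```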
How should architecture decisions be made?
Ask before implementing. Don't silently add workarounds.
- Document the problem
- Propose solution in a team file
- Ask for feedback
- Don't silently implement
Silent implementation can waste resources, add complexity, and create technical debt.
How does LevitateOS stay aligned with Arch's approach?
When unsure about UX decisions, ask "What does archiso do?"
- Autologin: archiso has autologin → LevitateOS has autologin
- Root shell: archiso boots to root shell → LevitateOS boots to root shell
- Installer: archiso requires manual install → LevitateOS uses recstrap (like pacstrap) + manual config
LevitateOS competes with Arch. The live ISO experience should match archiso's behavior where applicable.
Technical Decisions Summary
| Decision | Alternative | Why Chosen |
|---|---|---|
| Rocky Linux 10 | Compile from source | Minutes vs hours, stable packages |
| EROFS | Large initramfs | RAM savings, scalability, single source |
| Rhai recipes | S-expressions/YAML | Real programming, infinite extensibility |
| State in files | Database | Self-contained, easy to sync, simple |
| E2E tests | Unit tests | Catches integration issues |
| modprobe | insmod | No manual ordering, robust |
| readelf | ldd | Cross-compilation safe |
| TEAM files | Comments in code | Historical record, decision rationale |