cache_backend
Specify a named cache storage backend.
Syntax
result = Module(args) with cache: <duration>, cache_backend: "<name>"
Type: String (quoted backend name)
Description
The cache_backend option selects a specific cache storage backend by name. This allows different modules to use different caching strategies (in-memory, Memcached, Redis, etc.) based on their requirements.
This option requires cache to be specified. Without cache, specifying a backend generates a compiler warning.
In multi-instance deployments, use memcached or a custom Redis backend so all instances share the same cache. The default memory backend is per-instance and causes cache misses when requests hit different servers.
If the named backend is not registered, the runtime creates a new in-memory cache as a fallback. Check your startup logs to ensure the expected backend is loaded.
Examples
Memcached Backend
session = LoadSession(token) with cache: 1h, cache_backend: "memcached"
Use a distributed Memcached cache for session data.
In-Memory Backend (Explicit)
config = GetConfig(key) with cache: 5min, cache_backend: "memory"
Explicitly use the default in-memory cache.
Redis Backend
lookup = ExpensiveLookup(id) with cache: 30min, cache_backend: "redis"
Use a Redis backend for shared caching across instances.
Available Backends
| Name | Source | Description | Use Case |
|---|---|---|---|
| memory | Built-in | In-memory with TTL + LRU eviction (default) | Development, single instance |
| memcached | Optional module | Distributed Memcached via spymemcached | High-throughput distributed caching |
| redis | Custom SPI | Distributed Redis (implement yourself) | Multi-instance with rich data structures |
| caffeine | Custom SPI | High-performance local cache (implement yourself) | Production single instance |
The memory backend ships with the core runtime. The memcached backend is available as a first-party optional module. For Redis and Caffeine, implement the CacheBackend SPI — see the integration guide for complete examples.
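The real interface lives in the CacheBackend SPI documented in the integration guide; purely as an illustration, a Redis-backed implementation might look roughly like the sketch below. The method names (get/put/remove), the byte-array value type, and the use of the Jedis client are assumptions for this sketch, not the documented SPI surface.

```scala
// Hypothetical sketch only: the CacheBackend trait shape and the Jedis
// client choice are assumptions; consult the CacheBackend SPI guide for
// the actual interface to implement.
import redis.clients.jedis.JedisPooled
import scala.concurrent.duration.FiniteDuration

class RedisCacheBackend(client: JedisPooled, keyPrefix: String) /* extends CacheBackend */ {
  // Namespace keys so tenants sharing a Redis cluster do not collide
  private def k(key: String): Array[Byte] = s"$keyPrefix:$key".getBytes

  def get(key: String): Option[Array[Byte]] =
    Option(client.get(k(key)))

  def put(key: String, value: Array[Byte], ttl: FiniteDuration): Unit =
    // PSETEX stores the value with a millisecond TTL, matching cache: durations
    client.psetex(k(key), ttl.toMillis, value)

  def remove(key: String): Unit =
    client.del(k(key))
}
```

Once registered under the name "redis" (see Backend Configuration below), programs can select it with cache_backend: "redis".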
Backend Configuration
Backends are registered at application startup via CacheRegistry:
```scala
import io.constellation.cache.{CacheRegistry, InMemoryCacheBackend}
import io.constellation.cache.memcached.{MemcachedCacheBackend, MemcachedConfig}

MemcachedCacheBackend.resource(MemcachedConfig.single()).use { memcached =>
  for {
    registry <- CacheRegistry.withBackends(
      "memory"    -> InMemoryCacheBackend(),
      "memcached" -> memcached
    )
    // constellation-lang programs can now use:
    //   cache_backend: "memory"
    //   cache_backend: "memcached"
  } yield ()
}
```
You can also set a global default via ConstellationBuilder.withCache():
```scala
ConstellationImpl.builder()
  .withCache(memcachedBackend) // All modules use Memcached by default
  .build()
```
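A builder-level default and named backends can coexist. Assuming an explicit cache_backend takes precedence over the withCache() default (this precedence is an assumption here, not stated above), a program could mix the two:

```
session = LoadSession(token) with cache: 1h
config = GetConfig(key) with cache: 5min, cache_backend: "memory"
```

The first line would use the global Memcached default; the second explicitly names the registered memory backend.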
Behavior
- Look up the named backend in the cache registry
- If found, use that backend for cache operations
- If not found, create a new InMemoryCacheBackend as a fallback
- Proceed with normal cache behavior (check, store, return)
Related Options
- cache — Required to enable caching with TTL
Related Pages
- CacheBackend SPI — Implement a custom backend
- Memcached Module — First-party Memcached backend
- Optional Modules — All available first-party modules
Diagnostics
| Warning | Cause |
|---|---|
| cache_backend without cache | Backend requires the cache option to be set |
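The warning above is triggered by naming a backend without enabling caching; adding a cache duration resolves it:

```
lookup = ExpensiveLookup(id) with cache_backend: "redis"
lookup = ExpensiveLookup(id) with cache: 30min, cache_backend: "redis"
```

The first line produces the compiler warning because no TTL is set; the second is the corrected form.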
Best Practices
- Use memory for local, single-instance caches during development
- Use memcached or a custom Redis backend for distributed production deployments
- Configure backends at application startup before running any pipelines
- Match backend choice to data consistency and latency requirements
- Use keyPrefix (Memcached) or key namespacing to isolate tenants sharing a cache cluster