cache_backend

Specify a named cache storage backend.

Syntax

result = Module(args) with cache: <duration>, cache_backend: "<name>"

Type: String (quoted backend name)

Description

The cache_backend option selects a specific cache storage backend by name. This allows different modules to use different caching strategies (in-memory, Memcached, Redis, etc.) based on their requirements.

This option requires cache to be specified. Without cache, specifying a backend generates a compiler warning.
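For example, the first line below triggers the warning because no TTL is set, while the second caches correctly (`LoadUser` is an illustrative module name):

```
user = LoadUser(id) with cache_backend: "memcached"
user = LoadUser(id) with cache: 10min, cache_backend: "memcached"
```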

Use distributed caching in production

In multi-instance deployments, use memcached or a custom Redis backend so all instances share the same cache. The default memory backend is per-instance and causes cache misses when requests hit different servers.

Fallback behavior

If the named backend is not registered, the runtime creates a new in-memory cache as a fallback. Check your startup logs to ensure the expected backend is loaded.
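The fallback can be pictured as a simple registry lookup. This is a sketch: the `CacheBackend` trait and registry shape here are assumptions for illustration, not the real runtime SPI.

```scala
// Sketch of the documented lookup-with-fallback behavior. The CacheBackend
// trait and the Map-based registry are illustrative, not the real SPI.
trait CacheBackend { def name: String }

final case class InMemoryCacheBackend(name: String = "memory") extends CacheBackend

object BackendResolver {
  // Look up the named backend; fall back to a fresh in-memory
  // cache if it was never registered.
  def resolve(registry: Map[String, CacheBackend], requested: String): CacheBackend =
    registry.getOrElse(requested, InMemoryCacheBackend())
}
```

In practice this means a typo in the backend name silently degrades to per-instance caching, which is why checking startup logs matters.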

Examples

Memcached Backend

session = LoadSession(token) with cache: 1h, cache_backend: "memcached"

Use a distributed Memcached cache for session data.

In-Memory Backend (Explicit)

config = GetConfig(key) with cache: 5min, cache_backend: "memory"

Explicitly use the default in-memory cache.

Redis Backend

lookup = ExpensiveLookup(id) with cache: 30min, cache_backend: "redis"

Use a Redis backend for shared caching across instances.

Available Backends

| Name | Source | Description | Use Case |
| --- | --- | --- | --- |
| memory | Built-in | In-memory with TTL + LRU eviction (default) | Development, single instance |
| memcached | Optional module | Distributed Memcached via spymemcached | High-throughput distributed caching |
| redis | Custom SPI | Distributed Redis (implement yourself) | Multi-instance with rich data structures |
| caffeine | Custom SPI | High-performance local cache (implement yourself) | Production single instance |

The memory backend ships with the core runtime. The memcached backend is available as a first-party optional module. For Redis and Caffeine, implement the CacheBackend SPI — see the integration guide for complete examples.
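A custom backend might look like the following sketch. The `get`/`put` trait shape shown here is an assumption for illustration; the actual `CacheBackend` SPI may differ, so consult the integration guide before implementing.

```scala
import scala.collection.concurrent.TrieMap
import scala.concurrent.duration._

// Assumed shape of the CacheBackend SPI (illustrative only).
trait CacheBackend {
  def get(key: String): Option[Array[Byte]]
  def put(key: String, value: Array[Byte], ttl: FiniteDuration): Unit
}

// Minimal local backend with per-entry TTL, in the spirit of a
// Caffeine-style cache (no LRU eviction, for brevity).
final class SimpleTtlBackend extends CacheBackend {
  // key -> (value, expiry deadline in nanos)
  private val store = TrieMap.empty[String, (Array[Byte], Long)]

  def get(key: String): Option[Array[Byte]] =
    store.get(key).collect {
      case (bytes, deadline) if System.nanoTime() < deadline => bytes
    }

  def put(key: String, value: Array[Byte], ttl: FiniteDuration): Unit =
    store.put(key, (value, System.nanoTime() + ttl.toNanos))
}
```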

Backend Configuration

Backends are registered at application startup via CacheRegistry:

import io.constellation.cache.{CacheRegistry, InMemoryCacheBackend}
import io.constellation.cache.memcached.{MemcachedCacheBackend, MemcachedConfig}

MemcachedCacheBackend.resource(MemcachedConfig.single()).use { memcached =>
  for {
    registry <- CacheRegistry.withBackends(
      "memory"    -> InMemoryCacheBackend(),
      "memcached" -> memcached
    )
    // constellation-lang programs can now use:
    //   cache_backend: "memory"
    //   cache_backend: "memcached"
  } yield ()
}

You can also set a global default via ConstellationBuilder.withCache():

ConstellationImpl.builder()
  .withCache(memcachedBackend) // All modules use Memcached by default
  .build()

Behavior

  1. Look up the named backend in the cache registry
  2. If found, use that backend for cache operations
  3. If not found, create a new InMemoryCacheBackend as fallback
  4. Proceed with normal cache behavior (check, store, return)

Related Options

  • cache — Required to enable caching with TTL

Diagnostics

| Warning | Cause |
| --- | --- |
| cache_backend without cache | Backend requires the cache option to be set |

Best Practices

  • Use memory for local, single-instance caches during development
  • Use memcached or a custom Redis backend for distributed production deployments
  • Configure backends at application startup before running any pipelines
  • Match backend choice to data consistency and latency requirements
  • Use keyPrefix (Memcached) or key namespacing to isolate tenants sharing a cache cluster
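Key namespacing from the last point can be as simple as prefixing every key with a tenant identifier before it reaches the shared cluster (a sketch; `CacheKeys` and the `tenant:key` convention are hypothetical, not a runtime feature):

```scala
// Hypothetical helper that scopes cache keys per tenant so tenants
// sharing one cache cluster cannot collide on key names.
object CacheKeys {
  def namespaced(tenant: String, key: String): String = s"$tenant:$key"
}
```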