VFS Cache settings

Am I right in thinking that it’s impossible to move the VFS cache to a different drive using RcloneView?

My cache is defaulting to my Linux system drive, and despite being set to its default size limit it is filling up without limit - a test run saw it consume 85 GB of the system drive and grind the system to a halt.

  1. Is it possible to move the cache using RcloneView, and if so, how (I can’t find any entry to change it)?
  2. If not, why is it growing without limit despite settings saying it shouldn’t?

Thanks

Hi Pegasusphil — thanks for the detailed report.

1) Moving the VFS cache to a different drive

Right now, RcloneView doesn’t expose a UI control to choose the VFS cache folder, so you can’t change it from the Settings dialog yet.

We’ll add “Select / Set VFS cache folder” in the next RcloneView release (so you can point it to another disk easily). In the meantime, there are two workarounds:

A. Use XDG_CACHE_HOME to relocate rclone’s default cache root (recommended)

rclone’s default cache location on Unix is $XDG_CACHE_HOME/rclone (if XDG_CACHE_HOME is set), otherwise $HOME/.cache/rclone.

So if you launch RcloneView with XDG_CACHE_HOME pointing to another drive, rclone’s VFS cache will follow.

export XDG_CACHE_HOME=/mnt/bigdisk/xdg-cache
/path/to/RcloneView
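One caveat: if you normally start RcloneView from a desktop launcher rather than a terminal, the export above won’t apply to it. A sketch of a workaround is to wrap the launch with `env` (the .desktop path and Exec line here are assumptions about your setup; `/path/to/RcloneView` is a placeholder as above):

```shell
# one-off launch with the cache root overridden:
env XDG_CACHE_HOME=/mnt/bigdisk/xdg-cache /path/to/RcloneView

# or bake it into the launcher's Exec line, e.g. in
# ~/.local/share/applications/rcloneview.desktop:
#   Exec=env XDG_CACHE_HOME=/mnt/bigdisk/xdg-cache /path/to/RcloneView

# quick check that the variable propagates to child processes:
env XDG_CACHE_HOME=/mnt/bigdisk/xdg-cache sh -c 'echo "$XDG_CACHE_HOME/rclone"'
```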

B. Symlink the default cache folder to another drive

If your cache is currently under ~/.cache/rclone, you can move it and symlink it:


# stop/unmount first
mv ~/.cache/rclone /mnt/bigdisk/rclone-cache
ln -s /mnt/bigdisk/rclone-cache ~/.cache/rclone

(If XDG_CACHE_HOME is already set on your system, the directory to move/symlink would be $XDG_CACHE_HOME/rclone instead.)
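After moving it, it’s worth sanity-checking that the old path really resolves to the new drive before mounting again (paths follow the example above; recent rclone versions can also report the cache dir they’ll use via `rclone config paths`):

```shell
# the old path should now be a symlink into the big disk
ls -ld ~/.cache/rclone
readlink -f ~/.cache/rclone    # should print /mnt/bigdisk/rclone-cache
```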

2) Why it can grow “without limit” even if a limit is set

In rclone, --vfs-cache-max-size is not a strict hard cap. The docs (and rclone maintainers) explain that the cache can exceed the quota mainly because:

  1. The limit is only checked periodically at --vfs-cache-poll-interval (default ~1 minute).
  2. Open files cannot be evicted from the cache.
  3. Once exceeded, rclone evicts least-recently-accessed files first (oldest access time first).
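One way to observe those eviction passes is to watch the cache directory while a transfer runs. This assumes the default cache location; the cached file data lives under a vfs/ subdirectory:

```shell
# rough snapshot of current VFS cache usage
du -sh ~/.cache/rclone/vfs

# or poll it every 10 seconds to watch growth and eviction:
# watch -n 10 du -sh ~/.cache/rclone/vfs
```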

That combination can absolutely lead to scenarios like “it hit 85 GB and froze the system,” especially during large scans/reads or workloads that keep files open.
Best,
RcloneView Team

Thanks for the update. Glad to hear there’ll be an option to move the cache in a future release.

On the ‘cache filling the disk’ issue, I’m not sure I’m reading your reply correctly. Does it imply that if I’m doing a big backup job, say 1 TB or more, there’s a good chance it will fill the cache disk regardless of any limit I put in the settings, even if I put the cache on a big disk? Does the cache disk need to have more capacity than the total size of the job being done?

Thanks!

No, that’s not quite what it implies. Let me clarify:

The soft limit issue is about temporary overshoot, not unbounded growth. For a 1 TB backup job, the behavior depends on which --vfs-cache-mode you’re using:

For backup/upload scenarios, you likely don’t need full cache mode at all. With writes mode, only files being written are cached temporarily, then uploaded and evicted. The cache acts as a staging area, not a mirror of everything.

With full mode, files are cached as they’re accessed. But rclone does evict files to stay near the limit — it just does so on a poll interval rather than instantly. For a streaming 1 TB backup, it would continuously evict old cached chunks as new ones come in, so the cache shouldn’t balloon to 1 TB.

The real risk is more specific: if you’re reading/writing many large files simultaneously and the poll interval is too slow to keep up with the inflow, you can temporarily exceed your limit. But for a sequential backup job, this is rarely a problem.

Practical advice for large backup jobs:

  • You don’t need a cache disk larger than the job size
  • Set --vfs-cache-max-size to something reasonable (e.g., 10–50 GB as a buffer)
  • Lower --vfs-cache-poll-interval to something like 10s or 15s so eviction happens more aggressively
  • Consider whether you even need VFS cache for your backup use case — if you’re just uploading, it may be unnecessary overhead
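Putting that together, here is a sketch of what the underlying rclone invocation for a big upload job could look like. The remote name, mount point, and cache path are placeholders; note that when driving rclone directly from the command line, the global --cache-dir flag is the supported way to relocate the cache:

```shell
rclone mount backup-remote: /mnt/backup \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 20G \
  --vfs-cache-poll-interval 15s \
  --cache-dir /mnt/bigdisk/rclone-cache   # optional: put the cache on the big disk
```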

Ah, got it - that makes sense. Thanks for the speedy reply!