Your microscopy files might be on a local SSD, a network share, or an S3 bucket on another continent. Find Nuclei Viewer works in all three cases. What you experience, though, will be very different.
The question that comes up most often when people first try the Viewer: “Can I open files from our network drive?” Yes. The follow-up: “Will it be fast?” That depends entirely on how the network drive is set up. Before you commit to a storage strategy for your lab, core facility, or imaging pipeline, it’s worth understanding why.
Same Image, Two Very Different Experiences
To make this concrete, watch what happens when we open the same large OME-ZARR dataset in two browser tabs: one reading from local disk, one streaming from a remote server.
Side-by-side: the same HCS plate opened from local disk (left) and streamed from IDR over HTTPS (right). Notice how tiles fill in from coarse to fine on the remote side as each pyramid level is fetched, while local tiles appear near-instantly.
What you’re seeing isn’t a flaw. It’s OME-ZARR working exactly as designed. The viewer fires off small HTTP-style requests for only the chunks inside your current viewport. Local or remote, the request pattern is the same. What changes is the cost of satisfying those requests.
Why OME-ZARR Can Be Streamed at All
Traditional microscopy formats like OME-TIFF, CZI, and LIF are largely monolithic. Displaying a single field of view can mean reading metadata scattered across a large file and decompressing contiguous chunks you don’t actually need.
OME-ZARR flips this model. Data is stored as a chunked, multi-resolution pyramid: each zoom level is a separate set of small tile files, and the viewer only fetches the chunks that fall inside your current viewport. Zoom out to see a 96-well plate overview and you’re loading tiny low-resolution tiles. Zoom into a single nucleus and only the high-resolution chunks for that region are requested. Nothing else.
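The viewport-to-chunks mapping described above is simple to sketch. This is an illustrative toy, not Find Nuclei Viewer's actual implementation; the function name and chunk sizes are made up:

```python
# Hypothetical sketch of the chunk-selection step a chunked viewer performs.
# Function name and chunk sizes are illustrative, not the Viewer's internals.

def chunks_in_viewport(viewport, chunk_size):
    """Return (row, col) indices of chunks intersecting a pixel viewport.

    viewport: (x0, y0, x1, y1) in pixels at the chosen pyramid level.
    chunk_size: (height, width) of one chunk at that level.
    """
    x0, y0, x1, y1 = viewport
    ch, cw = chunk_size
    rows = range(y0 // ch, (y1 - 1) // ch + 1)
    cols = range(x0 // cw, (x1 - 1) // cw + 1)
    return [(r, c) for r in rows for c in cols]

# A 1024x1024 viewport over 256x256 chunks touches only 16 chunks,
# no matter how large the full image is.
print(len(chunks_in_viewport((0, 0, 1024, 1024), (256, 256))))  # 16
```

The key property: the number of requests depends on the viewport, not on the dataset size. A 10 TB plate and a 10 GB plate cost the same to pan across at a given zoom level.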
Find Nuclei Viewer is built entirely around this access pattern. It speaks directly to OME-ZARR stores over the File System Access API (for local files) or standard HTTP range requests (for remote URLs), with GPU-accelerated tile rendering in the browser. No server-side processing, no upload step.
The Storage Setup Is the Variable
When you drag a local folder into the viewer, your browser reads chunks directly from the filesystem. An NVMe SSD returns those in microseconds. A spinning hard drive is slower, but still local: no network latency, no contention with other users.
When you paste a remote URL, every chunk becomes an HTTP range request. This is where your infrastructure starts to matter.
HTTP and S3-Compatible Object Storage
These are purpose-built for this pattern. Range requests are first-class citizens. A well-configured object store like AWS S3, Azure Blob Storage, or a self-hosted MinIO instance will serve hundreds of small chunk requests efficiently, and browser caching makes frequently accessed tiles feel near-instant on subsequent views.
MinIO is worth knowing about: it’s an open-source, S3-compatible object store you can run on-premises on top of existing hardware. A practical choice for pharma or academic environments that can’t move sensitive data to public cloud.
If you’d rather skip the MinIO setup entirely, Find Nuclei Data Server is a single Docker container that does exactly this: point it at a directory of OME-ZARR files and it serves them over HTTP, ready for the Viewer to stream. No configuration files, no object store to manage. One command and your data is accessible to anyone on your network.
NFS and SMB Network Shares
These protocols were designed for file-level access by local network clients, not for the burst-of-small-requests pattern that a chunked viewer generates. Each chunk request incurs round-trip overhead that compounds quickly, especially over VPN or when the NFS server is under load from other users.
A dataset that opens smoothly from a local SSD may feel sluggish when accessed from an NFS mount, even on a fast internal network. You may find that streaming the same file from a properly configured HTTP endpoint is noticeably faster than reading it from an SMB share presented as a mapped drive.
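A back-of-envelope calculation shows why the overhead compounds. The latencies below are illustrative order-of-magnitude figures, not benchmarks, and the concurrency model is deliberately simplified:

```python
# Back-of-envelope: per-chunk round-trip overhead compounds quickly.
# Latencies are illustrative order-of-magnitude figures, not measurements.

def total_seconds(n_chunks, rtt_ms, concurrency=1):
    """Seconds to fetch n_chunks when each costs one round trip of rtt_ms,
    with `concurrency` requests in flight at once (bandwidth ignored)."""
    rounds = -(-n_chunks // concurrency)  # ceiling division
    return rounds * rtt_ms / 1000

# 400 visible chunks fetched one at a time over a 5 ms link:
print(total_seconds(400, 5))       # 2.0 seconds of pure latency
# The same chunks with 16 requests in flight:
print(total_seconds(400, 5, 16))   # 0.125 seconds
```

This is why protocol choice matters more than raw bandwidth here: HTTP clients pipeline many range requests concurrently, while file-share protocols accessed through a mounted drive often serialize much more of the work.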
What This Means in Practice
Pharma and biotech labs running high-content screening produce terabytes of imaging data per run. The instinct is often to mount central storage as a network drive and open files from there. That works, but if performance is poor, the fix isn’t necessarily to copy data to local disk. Putting an HTTP layer in front of your existing storage is often enough. MinIO can sit in front of a NAS or object store and expose it as an S3-compatible endpoint. nginx can serve a directory of ZARR files over HTTP with a few lines of configuration. Your data stays in one place, multiple users can access it simultaneously, and the viewer’s chunked request pattern works as intended.
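As a sketch of the nginx option: something like the following serves a directory of ZARR stores as static files (nginx handles range requests on static files out of the box). The port, path, and open CORS policy are placeholders you would adapt to your environment:

```nginx
server {
    listen 8080;

    location / {
        # Placeholder path to the directory containing your .zarr stores.
        root /data/zarr;
        autoindex off;
        # Allow a browser-based viewer on another origin to fetch chunks.
        add_header Access-Control-Allow-Origin "*" always;
    }
}
```

The CORS header matters for browser-based viewers: without it, a page served from one origin cannot fetch chunks from another.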
Academic labs and core facilities get the same performance benefit, plus Find Nuclei Viewer’s shareable deep links. Once your data is on HTTP-accessible storage, you can share a URL that opens to your exact view: same position, zoom level, channel settings, and colors. Your collaborator clicks a link and sees exactly what you see: no zipping, no waiting for a 50 GB upload to finish, no “did you get the file?” back and forth. Publicly available datasets on IDR are already served this way. Open Find Nuclei Viewer and use the “Open Example” button to stream a real dataset from IDR, or watch the demo videos to see it in action.
For sensitive or pre-publication data, you have two good options. The first is local drag-and-drop: your data never leaves the machine, no upload, no account, no server in the loop. The viewer runs entirely in the browser.
The second is serving data from a secured source. Find Nuclei Viewer supports token-based authentication, so you can host data centrally and share access only with the people who need it. Find Nuclei Data Server supports this out of the box. A practical middle ground between fully local and fully open.
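On the client side, token-based access typically means one extra header per chunk request. A minimal sketch of the standard Bearer-token pattern; the URL and token are placeholders, and the exact scheme your server expects may differ:

```python
# Sketch of attaching a token to chunk requests against a secured endpoint.
# URL and token are placeholders; check your server's expected auth scheme.
from urllib.request import Request

def chunk_request(url, token):
    # Standard Bearer-token pattern: one Authorization header per request.
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = chunk_request("https://data.example.org/plate.zarr/0/0.0.0", "s3cr3t")
print(req.get_header("Authorization"))  # Bearer s3cr3t
```

Because the token travels with each request, nothing about the chunked access pattern changes; the server simply rejects requests that lack a valid token.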
A Quick Reference for Common Setups
| Storage Setup | How to Open | Chunk Request Behavior |
|---|---|---|
| Local SSD / HDD | Drag & drop folder | File System Access API, no network involved |
| NFS / SMB mounted as local drive | Drag & drop folder | File System Access API, but network-backed: latency varies |
| Find Nuclei Data Server | Paste remote URL | HTTP range requests, zero config, optional Bearer token auth |
| nginx serving a directory over HTTP | Paste remote URL | HTTP range requests, efficient |
| MinIO or S3-compatible store | Paste remote URL | HTTP range requests, efficient, scales to many users |
| Public repositories (IDR) | Paste remote URL | HTTP, publicly accessible, no setup required |
The OME-ZARR format is the same in every case. What changes is the infrastructure underneath.
Try It With Real Data
Open Find Nuclei Viewer and click “Open Example” to pick from a curated list of public IDR datasets — no download, no account. Then open a local OME-ZARR folder in a second tab and compare the two side by side.
Want to see it first? The demo videos show the viewer in action across different dataset types.
The goal isn’t to declare one approach universally faster. It’s to understand what’s happening under the hood so you can make a deliberate choice about your storage setup, rather than discovering that your NFS mount wasn’t designed for this access pattern after you’ve already built your pipeline around it.
TL;DR
- Find Nuclei Viewer is free, browser-based, no install, no upload, no account.
- Supports local files via drag-and-drop and remote files via HTTP/HTTPS URL.
- HTTP and S3-compatible storage work great. NFS/SMB mounts work but may be slow due to per-chunk latency.
- Wrapping NFS/SMB with nginx or MinIO is usually enough to fix performance.
- For sensitive data, local mode keeps everything on the machine.