Need to do something about this, from itk-viewer/packages/remote-image/itk_viewer_remote_image/remote_zarr.py (lines 62 to 68 in 9a60667):

```python
# TODO: code below does not work yet
image = cli_input_to_ngff_image(backend, [mapped_path])

multiscales = to_multiscales(image, method=Methods.DASK_IMAGE_GAUSSIAN)

store, chunk_store = _make_multiscale_store()

to_ngff_zarr(store, multiscales, chunk_store=chunk_store)

return store
```
That code errors when trying to load an ome.tif file. Need to fix that.
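One direction worth trying (a sketch only, not a verified fix): if the failure is in how the conversion backend gets chosen for .ome.tif inputs, letting ngff-zarr's detect_cli_io_backend pick the backend from the file name might sidestep it. The `_make_multiscale_store` below is just a stand-in for the existing helper in remote_zarr.py, `make_store` is a made-up wrapper name, and the import paths for the CLI helpers should be double-checked against the installed ngff-zarr version.

```python
# Sketch only, not a verified fix. Hypothesis: the .ome.tif failure comes
# from the conversion backend handed to cli_input_to_ngff_image, so let
# ngff-zarr pick the backend from the file name instead of hard-coding it.
import zarr
from ngff_zarr import (
    Methods,
    cli_input_to_ngff_image,
    detect_cli_io_backend,
    to_multiscales,
    to_ngff_zarr,
)


def _make_multiscale_store():
    # Stand-in for the existing helper in remote_zarr.py.
    return zarr.storage.MemoryStore(), zarr.storage.MemoryStore()


def make_store(mapped_path: str):
    # Let ngff-zarr infer the conversion backend from the file extension.
    backend = detect_cli_io_backend([mapped_path])
    image = cli_input_to_ngff_image(backend, [mapped_path])

    multiscales = to_multiscales(image, method=Methods.DASK_IMAGE_GAUSSIAN)

    store, chunk_store = _make_multiscale_store()
    to_ngff_zarr(store, multiscales, chunk_store=chunk_store)
    return store
```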
Then, later optimization possibilities:
Right now, the remote-renderer package does not actually request chunks from the returned Zarr store. remote-renderer really only wants the image metadata (spacing, array size, etc.) to puppeteer agave-renderer. It would be sweet if this chunk of code simply generated NGFF-Zarr metadata for the single-scale raw input, without calling to_multiscales, whose output will never be used. A rough sketch of that lighter-weight path follows.
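Something like this could work (a sketch, assuming to_multiscales accepts scale_factors=[] and then only carries the raw resolution; the single_scale_metadata name is made up):

```python
# Sketch of the metadata-only idea. Assumption: to_multiscales(...,
# scale_factors=[]) keeps just the original resolution, so no Gaussian
# pyramid is computed; verify against the installed ngff-zarr version.
from dataclasses import asdict

from ngff_zarr import cli_input_to_ngff_image, to_multiscales


def single_scale_metadata(backend, mapped_path: str) -> dict:
    image = cli_input_to_ngff_image(backend, [mapped_path])

    # Only the raw input scale; nothing expensive is computed for data
    # that remote-renderer will never request.
    multiscales = to_multiscales(image, scale_factors=[])

    return {
        # What remote-renderer actually needs to puppeteer agave-renderer.
        "dims": image.dims,
        "shape": tuple(image.data.shape),
        "spacing": image.scale,  # per-dimension physical spacing
        "translation": image.translation,
        # Full NGFF multiscales metadata, if the renderer wants it verbatim.
        "ngff": asdict(multiscales.metadata),
    }
```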
Or we take the bull by the horns and feed the NGFF-Zarr output of remote-image::remote_zarr.py to the agave-renderer service.
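If we go that route, the consuming half could be as simple as reading the store back with from_ngff_zarr; how the result actually reaches agave-renderer is left open (publish_to_agave below and the payload shape are hypothetical):

```python
# Sketch of the consuming side of that hand-off. from_ngff_zarr is real
# ngff-zarr API; everything about how agave-renderer is driven (the
# publish_to_agave name, the payload shape) is an assumption.
from ngff_zarr import from_ngff_zarr


def publish_to_agave(store) -> dict:
    # The store written by remote_zarr.py opens straight back into a
    # Multiscales, so a downstream service needs no extra conversion step.
    multiscales = from_ngff_zarr(store)
    scale0 = multiscales.images[0]
    return {
        "dims": scale0.dims,
        "shape": tuple(scale0.data.shape),
        "spacing": scale0.scale,
        "translation": scale0.translation,
    }
```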