16 comments

  • carderne 43 minutes ago
    I did something similar once for a mining technique called “core logging”. It’s a single photo about 1000 pixels wide and several million “deep”: what the earth looks like for a few km down.

    Existing solutions are all complicated and clunky, so I put something together with S3 and a bastardised Cloud-Optimized GeoTIFF (COG); it gives an instant view of any part of the image.

    Wish I knew how to commercialise it…
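
    Roughly the idea, sketched in Python with rasterio (bucket, key and window are made up, not the actual implementation):

      import rasterio
      from rasterio.windows import Window

      # GDAL issues HTTP range requests under the hood when reading a
      # Cloud-Optimized GeoTIFF straight from S3, so only the requested
      # window (plus the header) is transferred.
      with rasterio.open("s3://my-bucket/core-log.tif") as src:  # hypothetical object
          # read a 1000x1000 window a couple of million rows "down" the image
          strip = src.read(1, window=Window(0, 2_000_000, 1000, 1000))
          print(strip.shape)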

    • el_pa_b 22 minutes ago
      I'm curious about the "core logging" photo. Where can I find one? Do you have an implementation of your solution? I would be curious to have a look at it.
  • tomnicholas1 3 hours ago
    The generalized form of this range-request-based streaming approach looks something like my project VirtualiZarr [0].

    Many of these scientific file formats (HDF5, netCDF, TIFF/COG, FITS, GRIB, JPEG and more) are essentially just contiguous multidimensional array(/"tensor") chunks embedded alongside metadata about what's in the chunks. Efficiently fetching these from object storage is just about efficiently fetching the metadata up front so you know where the chunks you want are [1].

    The data model of Zarr [2] generalizes this pattern pretty well, so that when backed by Icechunk [3], you can store a "datacube" of "virtual chunk references" that point at chunks anywhere inside the original files on S3.

    This allows you to stream data out as fast as the S3 network connection allows [4], and then you're free to pull that directly, or build tile servers on top of it [5].

    In the Pangeo project and at Earthmover we do all this for Weather and Climate science data. But the underlying OSS stack is domain-agnostic, so works for all sorts of multidimensional array data, and VirtualiZarr has a plugin system for parsing different scientific file formats.

    I would love to see if someone could create a virtual Zarr store pointing at this WSI data!

    [0]: https://virtualizarr.readthedocs.io/en/stable/

    [1]: https://earthmover.io/blog/fundamentals-what-is-cloud-optimi...

    [2]: https://earthmover.io/blog/what-is-zarr

    [3]: https://earthmover.io/blog/icechunk-1-0-production-grade-clo...

    [4]: https://earthmover.io/blog/i-o-maxing-tensors-in-the-cloud

    [5]: https://earthmover.io/blog/announcing-flux
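
    To make that concrete, here's a rough sketch of the workflow in Python (function names follow the VirtualiZarr docs, but details vary by version, and the path is made up):

      from virtualizarr import open_virtual_dataset

      # Scan the file's metadata and build "virtual chunk references"
      # (byte ranges into the original object) without copying any array data.
      vds = open_virtual_dataset("s3://my-bucket/data.nc")

      # Persist the references (here as a Kerchunk JSON sidecar); they can then
      # be opened lazily with xarray/zarr, fetching only the chunks you touch.
      vds.virtualize.to_kerchunk("refs.json", format="json")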

    • derefr 1 hour ago
      Sounds like an approach that would also work for ML model weights files — just another kind of multidimensional array with metadata.

      I wonder what exactly the big multi-model AI companies are doing to optimize model cold-start latency, and how much it just looks like Zarr on top of on-prem object storage.

      • tomnicholas1 52 minutes ago
        People have literally used Zarr for this - at one point Gemini used Zarr for checkpointing model weights. Not sure what the current fashion in that space is though.

        It's definitely one of many fields that see convergent evolution towards something that just looks like Zarr. In fact you can use VirtualiZarr to parse HuggingFace's "SafeTensors" format [0].

        [0]: https://github.com/zarr-developers/VirtualiZarr/pull/555

    • el_pa_b 2 hours ago
      Thanks for sharing! I agree that newer scientific formats will need to think carefully about how they can be read directly from cloud storage.
      • tomnicholas1 1 hour ago
        IMO Zarr is that newer format. It abstracts over the features of all these other formats so neatly that it can literally subsume them.

        I feel that we no longer really need TIFF etc. - for scientific use cases in the cloud Zarr is all that's needed going forwards. The other file formats become just archival blobs that either are converted to Zarr or pointed at by virtual Zarr stores.

        • bwfan123 1 hour ago
          Thanks for sharing!
  • mlhpdx 2 hours ago
    A while back I worked on a project where s3 held giant zip files containing zip files (turtles all the way down) and also made good use of range requests. I came up with seekable-s3-stream[1] to generalize working with them via an idiomatic C# stream.

    [1] https://github.com/mlhpdx/seekable-s3-stream
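
    The same trick is easy to sketch in Python: s3fs gives you a seekable file object over the S3 object, so zipfile only range-reads what it needs (bucket/key made up):

      import zipfile
      import s3fs

      fs = s3fs.S3FileSystem()  # credentials come from the environment
      with fs.open("my-bucket/huge-archive.zip", "rb") as f:  # seekable, range-request backed
          with zipfile.ZipFile(f) as outer:
              # Only the central directory (at the end of the zip) is fetched here,
              # not the whole multi-GB archive.
              print(outer.namelist())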

  • rwmj 8 hours ago
    https://dicom.nema.org/dicom/dicomwsi/

    Interesting guide to the Whole Slide Image (WSI) format. The surprising thing for me is that compression is used, and they note it does not affect use in diagnostics.

    Back in the day we used TIFF for a similar application (X-ray detector images).

    • yread 4 hours ago
      Digital pathology images are just a lot bigger than radiology images; we regularly see slides of 500k x 500k pixels.
      • el_pa_b 2 hours ago
        Yes, they can be huge, and for modalities like multiplex immunofluorescence with up to 20 channels, you're often dealing with very faint proteomic signals. Preserving that signal is critical, and compression can destroy it quickly.
        • yread 2 hours ago
          CODEX can do up to 120 channels I think. They are also 16/32-bit, and usually just deflate-compressed.
  • matthberg 9 hours ago
    Seems very similar to how maps work on the web these days, in particular Protomaps files [0]. I wonder if you could view the medical images in Leaflet or another frontend map library with the addition of a shim layer? Cool work!

    0: https://protomaps.com/

    • el_pa_b 9 hours ago
      Thanks! Indeed, digital pathology, satellite imaging and geospatial data share a lot of computational problems: efficient storage, fast spatial retrieval/indexing. I think this could be doable.

      As for digital pathology, the field is very much tied to scanner-vendor proprietary formats (SVS, NDPI, MRXS, etc).

  • tokyovigilante 8 hours ago
    This is really a job for JPEG-XL, which supports decoding portions of larger images and has recently been added to the DICOM standard.
    • iberator 3 hours ago
      No. JPEG compression sucks. Medical data should not be compressed lossily. PNG and TIFF for the win.
      • vrighter 3 hours ago
        unlike jpeg, jpeg-xl supports lossless compression too.
      • nszceta 2 hours ago
        The original JPEG supports a lossless mode.

        JPEG-LL refers to the lossless mode of the original JPEG standard (ISO/IEC 10918-1 / ITU-T T.81), also known as JPEG Lossless. It is not to be confused with JPEG-LS (ISO/IEC 14495-1, DICOM transfer syntax 1.2.840.10008.1.2.4.80), which offers better ratios and speed via the LOCO-I algorithm. JPEG-LL is older and less efficient, yet more widely implemented in legacy systems.

        The lossless mode in JPEG-XL is superior to all of those.

    • dmd 6 hours ago
      Or IIIF.
  • Sleaker 3 hours ago
    Maybe a bit pedantic, but if you're streaming it, then you're still downloading portions of it, yah? Just not persisting the whole thing locally before viewing it.

    Edit: Looks like this is a slight discrepancy between the HN title and the GitHub description.

    • el_pa_b 2 hours ago
      Yes, I agree. I'm not persisting the WSI locally, which creates a smoother user experience. But I do need to transfer tiles from server to client. They are stored in an LRU cache and evicted if not used.
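
      Roughly the shape of the caching idea (a simplified sketch, not the actual WSIStreamer code; bucket, key and offsets are illustrative):

        import boto3
        from functools import lru_cache

        s3 = boto3.client("s3")

        @lru_cache(maxsize=256)  # least-recently-used tiles get evicted automatically
        def get_tile(offset: int, length: int) -> bytes:
            # one HTTP range request per uncached tile; offsets come from the
            # slide's tile index, parsed up front from the file header
            resp = s3.get_object(
                Bucket="my-slides",  # hypothetical bucket
                Key="slide.svs",     # hypothetical key
                Range=f"bytes={offset}-{offset + length - 1}",
            )
            return resp["Body"].read()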
  • invaderJ1m 5 hours ago
    How does this compare to things like COGs (Cloud Optimised GeoTIFFs) or other binary blob + index raster pyramid formats?

    Was there a requirement to work with these formats directly without converting?

    • el_pa_b 4 hours ago
      Yes there is a requirement to work with the vendor format. For instance, TCGA (The Cancer Genome Atlas - a large dataset of 12k+ human tumor cases) has mostly .svs files (scanned with an Aperio scanner). We tend to work with these formats as they contain all the metadata we need.

      Sometimes we re-write the image in a pyramidal TIFF format (it has happened to me a few times, when NDPI images had only the highest-resolution level and no pyramid), in which case COGs could work.
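
      For that re-writing step, something like pyvips can generate the missing pyramid (a sketch with made-up paths; the parameters are standard libvips tiffsave options):

        import pyvips

        # Read the single-resolution image and write a tiled, pyramidal TIFF
        # so viewers (or COG-style readers) can fetch low-zoom levels cheaply.
        image = pyvips.Image.new_from_file("slide_full_res.tif")
        image.tiffsave(
            "slide_pyramidal.tif",
            tile=True,
            pyramid=True,
            compression="jpeg",
            tile_width=256,
            tile_height=256,
        )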

  • lametti 8 hours ago
    Interesting - I'm not so familiar with S3, but I wonder if this would work for WSI stored on-premises. Lower network requirements and a lightweight web viewer are very advantageous in this use case. I'll have to try it out!
    • el_pa_b 8 hours ago
      When WSIs are stored on-premises, they typically sit on hard drives with a filesystem. If you have a filesystem, you can use OpenSlide with a viewer like OpenSeadragon to visualize the slide.

      WSIStreamer is relevant for storage systems without a filesystem. In this case, OpenSlide cannot work (it needs to seek and open the file).
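
      For comparison, the filesystem/OpenSlide path mentioned above looks roughly like this (paths are illustrative):

        import openslide

        slide = openslide.OpenSlide("/data/slides/example.svs")
        print(slide.level_count, slide.dimensions)

        # read_region seeks within the local file; this is the part that breaks
        # when the "file" is only reachable through an object-storage API.
        tile = slide.read_region(location=(0, 0), level=2, size=(1024, 1024))
        tile.save("tile.png")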

  • yread 4 hours ago
    You could probably do it completely client-side. I have a parser for 12 scanner formats in JS. It doesn't read the pixels, it just parses metadata, but JPEG is easy and the most common anyway.
  • isuckatcoding 2 hours ago
    Is there a visual demo of this?
  • tonymet 1 hour ago
    If only we had NFS to begin with
  • Nora23 8 hours ago
    How does this handle images with different compression formats?
    • el_pa_b 3 hours ago
      Currently we only support TIFF and SVS with JPEG and JPEG2000 compression formats. I plan on supporting more file extensions (e.g. NDPI, MRXS) in the future, each with their own compression formats.
  • andrewstuart 4 hours ago
    Please don't use AWS S3; there are vast numbers of much cheaper compatible choices.
    • el_pa_b 4 hours ago
      As data scientists, we usually don't get to choose. It's usually up to the hospital or digital lab's CISO to decide where the digitized slides are stored, and S3 is a fairly common option.

      That being said, I plan to support more cloud platforms in the future, starting with GCP.

    • lijok 3 hours ago
      I guess by "compatible" you mean the data plane.

      There are choices that speak the S3 data plane API (GetObject, ListBucket, etc).

      There are no alternatives that support most of the AWS S3 functionality, such as replication and event notifications.
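
      In other words, the compatible services accept the same data-plane calls against a different endpoint, something like (endpoint and bucket made up):

        import boto3

        # Same GetObject / ListObjectsV2 calls, just pointed at a non-AWS endpoint.
        s3 = boto3.client("s3", endpoint_url="https://s3.example-provider.com")

        for obj in s3.list_objects_v2(Bucket="slides").get("Contents", []):
            print(obj["Key"], obj["Size"])

        header = s3.get_object(Bucket="slides", Key="slide.svs", Range="bytes=0-65535")["Body"].read()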

    • kube-system 3 hours ago
      “Cheap” is not always the #1 requirement for a project.
    • thenaturalist 4 hours ago
      Pretty bold claim to make without backing it up with a single data point. :D
      • PunchyHamster 56 minutes ago
        It's trivial to find, and there are many alternatives.

        The main problem is that most only support a subset of the more advanced S3 features, and often not all that big a subset. But if you just want to dump some backups in the cloud, Backblaze and other alternatives are cheaper.

      • imhoguy 45 minutes ago
        Especially when you have to account for HIPAA/GDPR/legalese, and some serious SecOps behind that.
  • tonyhart7 8 hours ago
    hey, I need this