View in #cornerstone3d on Slack
@Rod_Fitzsimmons_Frey: Sorry to be peppering the channel with questions, please let me know if I’m overstepping. Having had some success using Stack viewports, I’m trying to switch to VolumeViewports. I’m loading a DICOM that has no image orientation information in it - I’m getting an error and I don’t know if it’s a Cornerstone bug (0.1%) or I’m misusing the library (99.9%).
I set up the viewport thus:
const viewportInput = {
  viewportId: DATAMINT_VIEWPORT,
  element: element.current,
  // type: ViewportType.STACK,
  type: ViewportType.ORTHOGRAPHIC,
  defaultOptions: {
    orientation: Enums.OrientationAxis.SAGITTAL,
    background: [0.2, 0, 0.2] as Types.Point3,
  },
};
and eventually call imageLoader.loadImage(imageId), where imageId is a wadouri URL. The image loads and is cached successfully, and generateVolumePropsFromImageIds gets called. That eventually reaches extractPositioningFromDataset, which looks for tag x00200037 (Image Orientation (Patient)) but comes up empty. That’s fine until we return up the stack to generateVolumePropsFromImageIds, where the next lines assume ImageOrientationPatient is present:
const { ImageOrientationPatient, PixelSpacing, Columns, Rows } = volumeMetadata;
const rowCosineVec = vec3.fromValues(ImageOrientationPatient[0], ImageOrientationPatient[1], ImageOrientationPatient[2]);
const colCosineVec = vec3.fromValues(ImageOrientationPatient[3], ImageOrientationPatient[4], ImageOrientationPatient[5]);
This throws an error since ImageOrientationPatient is undefined.
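For context, a minimal sketch of the failing computation, with a hypothetical guard added: `getCosines` is not part of Cornerstone3D, it just shows how one could fall back to an axial identity orientation ([1,0,0, 0,1,0]) in application code when tag (0020,0037) is missing, instead of indexing into `undefined`:

```typescript
type Vec3 = [number, number, number];

// Hypothetical helper: derive row/column direction cosines from
// ImageOrientationPatient, defaulting to an axial identity orientation
// when the tag is absent (as in 2D ultrasound).
function getCosines(iop?: number[]): { row: Vec3; col: Vec3 } {
  const values = iop && iop.length === 6 ? iop : [1, 0, 0, 0, 1, 0];
  return {
    row: [values[0], values[1], values[2]],
    col: [values[3], values[4], values[5]],
  };
}
```

Whether a fabricated orientation is acceptable depends on the use case; it makes measurements across frames geometrically meaningless, which is the underlying issue discussed below.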
Am I missing a step, or is VolumeViewport not supported for modalities like ultrasound that might not have patient image orientation?
@Jason_Hostetter: yes, your conclusion that volumes are not supported for 2D modalities like ultrasound is correct. The concept of a volume has no meaning for standard 2D ultrasound (and XR, etc). There are 3D ultrasound acquisitions, but these are not the norm. I would use a regular stack for 2D images, and only use a volume viewport for modalities with 3D data
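A sketch of that advice as application-side logic. The metadata shape and the `chooseViewportType` helper are stand-ins, not Cornerstone3D API; the string values mirror the library's ViewportType enum values:

```typescript
// Simplified stand-in for per-frame DICOM metadata.
interface FrameMeta {
  imageOrientationPatient?: number[]; // tag (0020,0037), six direction cosines
}

// Only build a volume when every frame carries orientation info;
// otherwise fall back to a stack viewport.
function chooseViewportType(frames: FrameMeta[]): 'STACK' | 'ORTHOGRAPHIC' {
  const allOriented =
    frames.length > 0 &&
    frames.every((f) => f.imageOrientationPatient?.length === 6);
  return allOriented ? 'ORTHOGRAPHIC' : 'STACK';
}
```

In a real app you would query the metaData provider for the imagePlaneModule of each imageId rather than a local interface like this.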
@Rod_Fitzsimmons_Frey: OK, thanks - the reason I was exploring this was for managing segmentations across image frames. When using a stack for a multi-frame 2D image, there seems to be a lot of setup and teardown of the segmentation representations, seg images, and so on every time I change frames. It seemed like the labelmap volume might handle that.
@Jason_Hostetter: Ah yes I see what you mean. I think the issue is that any volume requires a regular 3D spatial relationship between voxels within the volume, in order for things like measurements to make sense across slices, whereas in a multiframe ultrasound, there is no normalized spatial relationship across adjacent frames.