View in #cornerstone3d on Slack
@Aryan_Morady: Hello everyone, I hope you're doing well. I'm a newbie to Cornerstone.js, and I'm facing an issue trying to implement basic batch loading; I'm using a simple FastAPI backend with S3 storage for my project. My main issue is that I'm trying to replicate the batch loading from the OHIF viewer for CT modalities, where initially I only load 10 slices into the viewport, and when the user's scroll hits the upper bound, the scroll tool's event listener triggers another range of GET requests to fetch the next 10 slices. What I'm not sure about is how I can dynamically append the new slices to the volume or the viewport without having to re-render it every time. Would greatly appreciate any help.
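(Not from the thread: the "next 10 slices" bookkeeping in the question can be sketched as a small pure helper. The function name and the default batch size of 10 are just illustrative; a scroll-out-of-bounds handler would call this to decide which index range to request next.)

```typescript
// Hypothetical helper: given how many slices are already loaded, the total
// slice count, and a batch size, compute the next index range to request.
// Returns null once everything has been loaded.
function nextBatch(
  loaded: number,
  total: number,
  batchSize = 10
): { start: number; end: number } | null {
  if (loaded >= total) return null;
  return { start: loaded, end: Math.min(loaded + batchSize, total) };
}
```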
@Syed_Pasha: It would be helpful if you could share a snippet of your code.
@Aryan_Morady: Well, this is the main part of my code. I have been trying to experiment with progressive loading, but I can't get it working with my setup, probably because I'm using FastAPI rather than dcm4chee. Below is the main function that does all the heavy lifting. I'm not sure if it's even necessary to do explicit batch loading of 10 slices at a time, since I have seen the progressive loading examples on Cornerstone and in the OHIF viewer but couldn't get those implementations to work with mine. I'll paste the snippet, but if you feel like the whole code file would be helpful I can upload that. I appreciate your help and insight.
const handleFileSelect = useCallback(
async (bucketName: string, path: string) => {
if (!renderingEngine || !toolGroup || !aggregatorData) return;
try {
setLoading(true);
setError(null);
setSelectedFile(path);
setSelectedBucket(bucketName);
// 1) Find the selected file's series_uid in aggregatorData
const bucketEntry = aggregatorData.buckets.find((b) => b.name === bucketName);
if (!bucketEntry) {
throw new Error(`Bucket ${bucketName} not found in aggregator data.`);
}
const selectedFileObj = bucketEntry.files.find((f) => f.filename === path);
if (!selectedFileObj) {
throw new Error(`File ${path} not found in bucket ${bucketName}.`);
}
const seriesUid = selectedFileObj.series_uid;
console.log(seriesUid);
console.log(selectedFileObj.modality);
const seriesFiles = bucketEntry.files.filter((f) => f.series_uid === seriesUid);
let allBlobs;
// Check if the selected file is a CT scan
if (selectedFileObj.modality === 'CT') {
const response = await fetch(`http://localhost:8152/series/${seriesUid}/sorted-positions`);
if (!response.ok) {
throw new Error(`Failed to fetch sorted positions: ${response.statusText}`);
}
const sortedPositions = await response.json();
console.log(sortedPositions);
// Optionally pick the first 10 positions (adjust as needed)
const selectedPositions = sortedPositions.slice(0, 10);
// 3) Fetch each file in sorted order
allBlobs = await Promise.all(
selectedPositions.map(async (posInfo) => {
const cacheKey = `${bucketName}/${posInfo.filepath}`;
if (dicomFileCache.has(cacheKey)) {
const cached = dicomFileCache.get(cacheKey);
if (cached) return cached;
throw new Error("Cached DICOM data is null.");
}
// Fetch DICOM from backend
const fileResponse = await fetch(
`http://localhost:8152/file/${bucketName}/${posInfo.filepath}`,
{
headers: { Accept: "application/dicom" },
}
);
if (!fileResponse.ok) {
throw new Error(`Failed to fetch DICOM file: ${fileResponse.statusText}`);
}
const blob = await fileResponse.blob();
dicomFileCache.set(cacheKey, blob);
return blob;
})
);
} else {
// --- Scenario: Normal file(s) selected ---
// 2) Fetch & process each file directly
allBlobs = await Promise.all(
seriesFiles.map(async (fileObj) => {
const cacheKey = `${bucketName}/${fileObj.filename}`;
if (dicomFileCache.has(cacheKey)) {
const cached = dicomFileCache.get(cacheKey);
if (cached) return cached;
throw new Error("Cached DICOM data is null.");
}
// Fetch DICOM from backend
const fileResponse = await fetch(
`http://localhost:8152/file/${bucketName}/${fileObj.filename}`,
{
headers: { Accept: "application/dicom" },
}
);
if (!fileResponse.ok) {
throw new Error(`Failed to fetch DICOM file: ${fileResponse.statusText}`);
}
const blob = await fileResponse.blob();
dicomFileCache.set(cacheKey, blob);
return blob;
})
);
}
// We'll accumulate all imageIds from each file's processDicomFile
const allImageIds: string[] = [];
for (const b of allBlobs) {
const ids = await processDicomFile(b);
allImageIds.push(...ids);
}
// Clean up existing viewports
cleanupViewports();
// 4) If multiple slices => treat as volume
// Or if there's only 1 slice => treat as single DICOM
if (allBlobs.length > 1) {
console.log('volume approach')
// Volume approach
const uniqueVolumeId = `cornerstoneStreamingImageVolume:${seriesUid}`;
if (isMultipleViewports) {
// MPR
const viewportInputArray = [
{
viewportId: threeViewportIds[0],
type: ViewportType.ORTHOGRAPHIC,
element: axialRef.current!,
defaultOptions: {
orientation: OrientationAxis.AXIAL,
background: [0, 0, 0] as Types.Point3,
},
},
{
viewportId: threeViewportIds[1],
type: ViewportType.ORTHOGRAPHIC,
element: sagittalRef.current!,
defaultOptions: {
orientation: OrientationAxis.SAGITTAL,
background: [0, 0, 0] as Types.Point3,
},
},
{
viewportId: threeViewportIds[2],
type: ViewportType.ORTHOGRAPHIC,
element: coronalRef.current!,
defaultOptions: {
orientation: OrientationAxis.CORONAL,
background: [0, 0, 0] as Types.Point3,
},
},
];
await renderingEngine.setViewports(viewportInputArray);
threeViewportIds.forEach((id) => {
toolGroup.addViewport(id, "dcmRenderingEngine");
});
} else {
// Single viewport volume
await renderingEngine.enableElement({
viewportId: "CT_AXIAL",
type: ViewportType.ORTHOGRAPHIC,
element: elementRef.current!,
defaultOptions: {
orientation: OrientationAxis.AXIAL,
background: [0, 0, 0] as Types.Point3,
},
});
toolGroup.addViewport("CT_AXIAL", "dcmRenderingEngine");
}
let eventNumber = 1;
eventTarget.addEventListener(VOLUME_VIEWPORT_SCROLL_OUT_OF_BOUNDS, (() => {
console.log(eventNumber);
eventNumber++;
}) as EventListener);
// Create volume
const volume = await volumeLoader.createAndCacheVolume(uniqueVolumeId, {
imageIds: allImageIds,
});
await volume.load();
// Render
const viewportIds = isMultipleViewports ? threeViewportIds : ["CT_AXIAL"];
await setVolumesForViewports(renderingEngine, [{ volumeId: uniqueVolumeId }], viewportIds);
renderingEngine.renderViewports(viewportIds);
} else {
// Single DICOM
console.log("single dicom")
await renderingEngine.enableElement({
viewportId: "DCM_STACK_SINGLE",
type: ViewportType.STACK,
element: elementRef.current!,
defaultOptions: {
background: [0, 0, 0] as Types.Point3,
},
});
toolGroup.addViewport("DCM_STACK_SINGLE", "dcmRenderingEngine");
const stackViewport = renderingEngine.getViewport("DCM_STACK_SINGLE") as Types.IStackViewport;
await stackViewport.setStack(allImageIds);
await stackViewport.setImageIdIndex(0);
stackViewport.render();
let eventNumber = 1;
eventTarget.addEventListener(STACK_SCROLL_OUT_OF_BOUNDS, (() => {
console.log(eventNumber);
eventNumber++;
}) as EventListener);
}
} catch (err: any) {
console.error("Error loading DICOM:", err);
setError(`Error loading DICOM: ${err.message}`);
} finally {
setLoading(false);
}
},
[
renderingEngine,
toolGroup,
aggregatorData,
cleanupViewports,
isMultipleViewports,
]
);
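(A hedged note on the original question of appending without a full re-render: a Cornerstone3D streaming volume fixes its geometry when the volume is created, so for a volume viewport the usual route is to create it with the full imageId list up front and let it stream, rather than growing it afterwards. For a stack viewport you can extend the imageId list and call `setStack` again while preserving the current index. Below is a minimal sketch of just the list bookkeeping; the function name is illustrative, and the `setStack`/`getCurrentImageIdIndex` calls in the comment are the only Cornerstone APIs assumed.)

```typescript
// Sketch (not from the thread): merge a newly fetched batch of imageIds into
// the already-loaded list, preserving order and skipping duplicates.
// For a STACK viewport you could then do something like:
//   await stackViewport.setStack(merged, stackViewport.getCurrentImageIdIndex());
// which swaps in the longer stack without losing the user's scroll position.
// For a streaming volume, appending generally means recreating the volume
// with the full imageId list instead.
function appendImageIds(existing: string[], batch: string[]): string[] {
  const seen = new Set(existing);
  const merged = [...existing];
  for (const id of batch) {
    if (!seen.has(id)) {
      seen.add(id);
      merged.push(id);
    }
  }
  return merged;
}
```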
@Bill_Wallace: Have you tried running the interleaved example pointed at your data? https://www.cornerstonejs.org/live-examples/htj2kvolumebasic
Note that there are two types of progressive loading - the interleaved loading loads every 5th image, then various offsets of those images.
The data in the above example is currently missing on our public data server - I will see about getting that fixed, but you should be able to just point it at another back end by changing the URL and data endpoints.
Note that interleaved does NOT work with the JSON model unless you specify a full data set.
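(The "every 5th image, then the various offsets" ordering Bill describes can be illustrated with a tiny helper. This is only to show the idea; in practice the retrieve order is handled by Cornerstone's progressive loading machinery, not hand-built like this, and the stride of 5 simply matches the description above.)

```typescript
// Illustrative only: produce an interleaved request order for `total` slices.
// First pass requests indices 0, 5, 10, ... so a coarse volume appears
// quickly; later passes fill in offsets 1..4 to refine it.
function interleavedOrder(total: number, stride = 5): number[] {
  const order: number[] = [];
  for (let offset = 0; offset < stride; offset++) {
    for (let i = offset; i < total; i += stride) {
      order.push(i);
    }
  }
  return order;
}
```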