- General FAQ
- Technical FAQ
- Why do I keep seeing a Cross Origin Isolation warning?
- What if my setup does not support the Shared Array Buffers API?
- Viewer opens but does not show any thumbnails
- What is the list of required metadata for the OHIF Viewer to work?
- How do I handle large volumes for MPR and Volume Rendering?
How do I report a bug?
How can I request a new feature?
At the moment we are in the process of defining our roadmap and will do our best to communicate this to the community. If your requested feature is on the roadmap, then it will most likely be built at some point. If it is not, you are welcome to build it yourself and contribute it. If you have resources and would like to fund the development of a feature, please contact us.
Who should I contact about Academic Collaborations?
Gordon J. Harris at Massachusetts General Hospital is the primary contact for any academic collaborators. We are always happy to hear about new groups interested in using the OHIF framework, and may be able to provide development support if the proposed collaboration has an impact on cancer research.
Does OHIF offer support?
Yes, you can contact us for more information here.
Does The OHIF Viewer have 510(k) Clearance from the U.S. F.D.A or CE Marking from the European Commission?
NO. The OHIF Viewer is NOT F.D.A. cleared or CE Marked. It is the users' responsibility to ensure compliance with applicable rules and regulations. The License for the OHIF Platform does not prevent your company or group from seeking F.D.A. clearance for a product built using the platform.
If you have gone this route (or are going there), please let us know because we would be interested to hear about your experience.
Is The OHIF Viewer HIPAA Compliant?
NO. The OHIF Viewer DOES NOT fulfill all of the criteria to become HIPAA Compliant. It is the users' responsibility to ensure compliance with applicable rules and regulations.
Why do I keep seeing a Cross Origin Isolation warning?
If you encounter a warning while running OHIF indicating that your application is not cross-origin isolated, volume rendering features such as MPR will not function, since they depend on Shared Array Buffers. To resolve this issue, we recommend referring to our comprehensive guide on Cross Origin Isolation available on our dedicated CORS page.
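A page becomes cross-origin isolated when the server sends two specific response headers. A minimal sketch (this assumes a Node.js server purely for illustration; nginx, Apache, or any other web server can set the same headers):

```javascript
// The two response headers that make a page cross-origin isolated,
// which the SharedArrayBuffer API requires.
const crossOriginIsolationHeaders = {
  'Cross-Origin-Opener-Policy': 'same-origin',
  'Cross-Origin-Embedder-Policy': 'require-corp',
};

// Example: attach them to every response of a plain Node http server.
// const http = require('http');
// http.createServer((req, res) => {
//   for (const [name, value] of Object.entries(crossOriginIsolationHeaders)) {
//     res.setHeader(name, value);
//   }
//   // ...serve the app...
// }).listen(8080);
```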
What if my setup does not support the Shared Array Buffers API?
You can simply disable Shared Array Buffer usage by adding useSharedArrayBuffer: 'FALSE' (note the string 'FALSE') to your configuration file. Volumes will then use a regular array buffer, which is a bit slower but works in all browsers.
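In an OHIF configuration file this looks roughly like the following (a sketch; the file path and the surrounding keys depend on your deployment):

```javascript
// OHIF configuration fragment: disable Shared Array Buffers so volumes
// fall back to regular array buffers on browsers without cross-origin isolation.
window.config = {
  // ...rest of your configuration...
  useSharedArrayBuffer: 'FALSE', // the string 'FALSE', not the boolean false
};
```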
Viewer opens but does not show any thumbnails
Thumbnails may not appear in your DICOMweb application for various reasons. This guide focuses on one primary scenario: you are using
supportsWildcard: true in your configuration file while your server does not support wildcard matching.
For instance, for the following filtering in the worklist tab, we send this request,
which our server can respond to properly. If your server does not support this type of filtering, you can disable it by setting
supportsWildcard: false in your configuration file,
or edit your server code to support it, for instance with something like:
For each filter in filters:
    if filter.value contains "*":
        Convert "*" to SQL LIKE wildcard ("%")
        Add "metadataField LIKE ?" to query
    else:
        Add "metadataField = ?" to query
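The pseudocode above can be sketched as a small JavaScript function (the function and parameter names here are illustrative, not taken from any particular PACS implementation):

```javascript
// Turn a DICOMweb-style filter value, where "*" is a wildcard,
// into a parameterized SQL condition.
function buildCondition(field, value) {
  if (value.includes('*')) {
    // DICOMweb "*" maps to SQL LIKE's "%" wildcard
    return { sql: `${field} LIKE ?`, param: value.replace(/\*/g, '%') };
  }
  return { sql: `${field} = ?`, param: value };
}

// buildCondition('PatientName', '*doe*')
//   → { sql: 'PatientName LIKE ?', param: '%doe%' }
// buildCondition('Modality', 'CT')
//   → { sql: 'Modality = ?', param: 'CT' }
```

Using a parameterized query (the `?` placeholder) rather than string concatenation also avoids SQL injection from user-supplied filter values.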
What is the list of required metadata for the OHIF Viewer to work?
StudyInstanceUID, SeriesInstanceUID, SOPInstanceUID: Unique identifiers for the study, series, and object.
PhotometricInterpretation: Describes the color space of the image.
Rows, Columns: Image dimensions.
PixelRepresentation: Indicates how pixel data should be interpreted.
Modality: Type of modality (e.g., CT, MR, etc.).
PixelSpacing: Spacing between pixels.
BitsAllocated: Number of bits allocated for each pixel sample.
SOPClassUID: Specifies the DICOM service class of the object (though you might be able to render most regular image datasets without it, it is normally present).
In addition, the following tags are needed for the viewer to render the image properly; without them you will need to use the windowing tools to adjust the image to your liking:
RescaleIntercept, RescaleSlope: Values used for rescaling pixel values for visualization.
WindowCenter, WindowWidth: Windowing parameters for display.
InstanceNumber: Useful for sorting instances (without it, the instances might be out of order).
For MPR (Multi-Planar Reformatting) rendering and tools
ImagePositionPatient, ImageOrientationPatient: Position and orientation of the image in the patient.
SEG (Segmentation)
FrameOfReferenceUID: For handling segmentation layers.
RTSTRUCT (Radiotherapy Structure)
FrameOfReferenceUID: For handling segmentation layers.
US (Ultrasound)
NumberOfFrames: Number of frames in a multi-frame image.
SequenceOfUltrasoundRegions: For measurements.
FrameTime: Time between frames if specified.
SR (Structured Reporting)
- Various sequences for encoding the report content and template.
PT with SUV Correction (Positron Emission Tomography Standardized Uptake Value)
- Sequences and tags related to radiopharmaceuticals, units, corrections, and timing.
PDF
EncapsulatedDocument: Contains the PDF document.
Video
NumberOfFrames: Video frame count.
There are various other optional tags that will add to the viewer experience, but are not required for basic functionality. These include: Patient Information, Study Information, Series Information, Instance Information, and Frame Information.
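If you are debugging a dataset that will not render, the required-tag list above can be checked programmatically. A minimal sketch (all names here are illustrative, not part of the OHIF API): given a plain object of instance metadata keyed by tag keyword, report which of the tags required for basic rendering are missing.

```javascript
// Tags the viewer needs for basic rendering, per the list above.
const REQUIRED_RENDER_TAGS = [
  'StudyInstanceUID', 'SeriesInstanceUID', 'SOPInstanceUID',
  'PhotometricInterpretation', 'Rows', 'Columns',
  'PixelRepresentation', 'Modality', 'PixelSpacing', 'BitsAllocated',
];

// Return the required tags that are absent (undefined or null) from the
// given instance metadata object.
function missingRequiredTags(instanceMetadata) {
  return REQUIRED_RENDER_TAGS.filter(tag => instanceMetadata[tag] == null);
}
```

Note the `== null` check, which treats `undefined` and `null` as missing but accepts legitimate falsy values such as `PixelRepresentation: 0`.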
How do I handle large volumes for MPR and Volume Rendering?
Currently there are two ways to handle large volumes for MPR and Volume Rendering when the volume does not fit in the memory of the client machine.
WebGL officially supports only 8-bit and 32-bit data types. For most images, 8 bits are not enough, and 32 bits are too much. However, we have to use the 32-bit data type for volume rendering and MPR, which results in suboptimal memory consumption for the application.
useNorm16Texture
This is a flag that you can set in your configuration file to force usage of the 16-bit data type for volume rendering and MPR. This will reduce memory usage by half.
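In an OHIF configuration file this looks roughly like the following (a sketch; the surrounding keys depend on your deployment):

```javascript
// OHIF configuration fragment: opt in to 16-bit textures to halve
// memory usage for volume rendering and MPR (where supported).
window.config = {
  // ...rest of your configuration...
  useNorm16Texture: true,
};
```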
For instance, for a large PT/CT study:
Before (without the flag), the app shows 399 MB of memory usage.
After (with the flag, running locally), the app shows 249 MB of memory usage.
Using the 16-bit texture (if supported) will not have any effect on the rendering whatsoever, and the pixelData will be shown exactly as it is. For datasets that cannot be represented with the 16-bit data type, the flag will be ignored and the 32-bit data type will be used.
Read more about these discussions in our PRs
Although support for the 16-bit data type is available in WebGL, in some settings (e.g., Intel-based macOS) there still seem to be some issues with it. You can read about and track the bugs below.
preferSizeOverAccuracy
This is another flag that you can set in your configuration file to force usage of the
half_float data type for volume rendering and MPR. The main reason to choose this option over
useNorm16Texture is its broader support across hardware and browsers. However, it is less accurate than the 16-bit data type and may lead to some rendering artifacts.
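As with the previous flag, this is set in the configuration file (a sketch; the surrounding keys depend on your deployment):

```javascript
// OHIF configuration fragment: trade accuracy for memory by storing
// volume data as half floats (broadly supported, but less precise).
window.config = {
  // ...rest of your configuration...
  preferSizeOverAccuracy: true,
};
```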
Integers between 0 and 2048 can be exactly represented (and also between −2048 and 0)
Integers between 2048 and 4096 round to a multiple of 2 (even number)
Integers between 4096 and 8192 round to a multiple of 4
Integers between 8192 and 16384 round to a multiple of 8
Integers between 16384 and 32768 round to a multiple of 16
Integers between 32768 and 65519 round to a multiple of 32
As you can see, for values above 2048 there will be inaccuracies in the rendering.
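The rounding behavior in the table above can be demonstrated with a rough sketch (this is not OHIF code) of how half-precision storage quantizes a value: round to the nearest representable step for the value's magnitude, with exact ties going to the even neighbor.

```javascript
// Quantize a positive, normal-range value to IEEE 754 half precision.
// Illustrative only: it ignores negatives, infinities, and overflow.
function toHalfPrecision(value) {
  if (value === 0) return 0;
  const exponent = Math.floor(Math.log2(Math.abs(value)));
  // Spacing between adjacent half floats near `value` (10-bit mantissa).
  const step = Math.pow(2, Math.max(exponent - 10, -24));
  const scaled = value / step;
  let rounded = Math.round(scaled);
  // Math.round rounds halves up; adjust exact ties to the even neighbor.
  if (Math.abs(scaled - Math.trunc(scaled)) === 0.5 && rounded % 2 !== 0) {
    rounded -= 1;
  }
  return rounded * step;
}

// toHalfPrecision(1500)  === 1500   (exact below 2048)
// toHalfPrecision(5001)  === 5000   (rounds to a multiple of 4)
// toHalfPrecision(40010) === 40000  (rounds to a multiple of 32)
```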
Memory snapshot after enabling preferSizeOverAccuracy for the same study as above: