During the last week, the MNE-CPP team started work on the new 3D library, called disp3D. The library is based on the new Qt3D module, which has just been released as part of the current Qt version. The main benefit of using Qt3D is that we do not have to work with low-level OpenGL routines. This makes handling 3D rendering much easier, especially for non-OpenGL experts. The new module is also very future-proof, thanks to strong community support and the already announced upcoming Vulkan integration (Vulkan is the designated successor of OpenGL). Another great aspect of using Qt3D is that compiling MNE-CPP will become a lot easier, since the Qt3D module now ships as a fixed part of Qt. Please find below some screenshots of the first visualization results.
The three pictures above demonstrate the 3D visualization of FreeSurfer data with curvature and annotation information. The two figures on the left show pial-type surfaces, one rendered with curvature and the other with annotation information. The rightmost figure shows a sneak peek of our real-time source localization visualization. The source space is not clustered and therefore has a much higher spatial resolution. Each black dot in the figure represents one neuronal source. The sources change their vertex color according to the incoming source estimates. The left hemisphere shows an orig FreeSurfer surface with underlying annotation information. The right hemisphere shows a pial surface type with underlying curvature information. The main challenges in real-time source localization visualization are efficient “activation to RGB color” calculations, real-time spatial activity smoothing, and internal data handling.
The two videos below demonstrate the new real-time display in action. The first video shows the original (non-clustered) source space, different normalization factors, annotation/curvature support, and different colormaps. The stimulus was a left auditory stimulus. Activations are plotted only for the vertices which correspond to the previously chosen sources. The second video shows the new annotation-based visualization. In contrast to the first video, the annotation labels are used to display the activation. Since more than one source can lie within one label, we select the source with the highest activation and generate the color for the entire label from it. Again, we use a left auditory stimulus and a full (non-clustered) source space. Please note that the real-time smoothing is still under development.
Since 3D visualization is a key feature for current and upcoming MNE-CPP projects, we are putting a lot of effort into creating a highly flexible, efficient, and stable 3D library. Disp3D is therefore being developed with great care and passion by a subgroup of the MNE-CPP team. Check out the planned features you can expect in the final disp3D version:
- Control GUI with loaded data manager and general view options
- Selection tool for easy vertex and ROI selection
- New shader scripts (similar to the “Glass Brain Project”)
- Online sensor and source data visualization
- BEM, forward solutions, sensor location, DTI, MEG helmet and MRI visualization
- Convenient video and screenshot creation
- Multimodal data handling (i.e. EEG and MEG data visualization at the same time)
- Ray-Tracing/Volume rendering (long-term)