Technology-assisted surgical planning has become standard practice: today, doctors can immerse themselves in a virtual environment where they see and interact with the patient's anatomy.
How is this achieved?
Virtual reality offers a simple way to enter an environment where, working with 3D models, we can carry out both visual and interactive experiences.
From a Computed Tomography (CT) scan, we can extract a 3D model of the patient's anatomy by applying segmentation algorithms to the DICOM files the scanner generates.
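To make the idea concrete, here is a minimal sketch of the simplest form of CT segmentation, thresholding in Hounsfield units (HU), run on a small synthetic volume. The threshold value and the volume itself are illustrative assumptions; real planning platforms use far more sophisticated, AI-based algorithms, and the pixel data would come from the DICOM files rather than being generated in code.

```python
import numpy as np

# Synthetic 3D CT volume in Hounsfield units (HU): background soft tissue
# at ~40 HU, with a cubic "bone" region at ~700 HU. In practice the volume
# would be assembled from the CT scanner's DICOM slices.
volume = np.full((32, 32, 32), 40, dtype=np.int16)
volume[8:24, 8:24, 8:24] = 700  # simulated bone region

# Threshold segmentation: on CT, bone typically sits above ~300 HU
# (an illustrative cut-off, not a clinical constant).
BONE_THRESHOLD_HU = 300
bone_mask = volume > BONE_THRESHOLD_HU

print(int(bone_mask.sum()))  # number of voxels labelled as bone
```

The resulting binary mask marks which voxels belong to the structure of interest; later stages turn that mask into point clouds and 3D meshes for the virtual environment.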
What does the 3D model of the patient's anatomy allow us to achieve?
The 3D model lets us examine in detail the bone structure and organs of the patient to be operated on. The surgeon can move freely through the virtual environment, exploring the anatomy and detecting problems before performing the surgery.
These applications also let us add images and comments, and make cuts through the anatomy to see the position of organs and veins when searching for imperfections or foreign bodies.
They make it possible to create and rehearse different scenarios, helping to achieve the best results.
These applications can also include collaborative modules for training sessions or for consulting other surgeons during complex operations.
In short, this technology lets us explore the patient from the inside, at real scale, creating an environment as close as possible to what the surgeon will encounter when performing the actual operation.
Once the planning is done, how does mixed reality help in the intervention process?
Mixed reality makes it possible to combine the real environment with virtual elements and to interact with them.
With a mixed reality headset, the surgeon can bring up holograms of patient images and test results with a simple gesture and without having to look away from the patient, share the surgery in real time with other doctors, and consult experts.
One of the advantages of mixed reality in surgery is its impact on patients: the technology enables more precise interventions thanks to an expanded field of vision, reducing risks and improving patient recovery.
What does ARSOFT contribute in this field?
ARSOFT has its own surgical planning platform, NEXTMED, developed in conjunction with VisualMed System and the University of Salamanca. The platform includes all of the functionalities mentioned above, and more:
- It allows the upload of DICOM files produced by CT scans.
- It integrates with hospital servers, ensuring secure handling of patient information and medical images.
- Its automatic segmentation algorithms use computer vision and artificial intelligence.
- It automatically generates segmented point clouds.
- It supports the planning of advanced surgeries through Augmented Reality and Virtual Reality.
- It automatically generates 3D meshes of the segmented anatomical structures.
- It is multiplatform software that runs on mobile devices, PCs, Virtual Reality headsets and Mixed Reality headsets.
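To illustrate the point-cloud stage in this list, the following sketch (not the platform's actual code) turns a segmented binary mask into a point cloud by taking one point per labelled voxel and scaling by the scanner's voxel spacing; the mask and the spacing values are illustrative assumptions.

```python
import numpy as np

# Segmented binary mask (here a small synthetic sphere of radius 5 voxels);
# in practice this would be the output of the segmentation stage.
grid = np.indices((16, 16, 16)).transpose(1, 2, 3, 0)
center = np.array([8, 8, 8])
mask = ((grid - center) ** 2).sum(axis=-1) <= 5 ** 2

# Convert the mask to a point cloud: one (x, y, z) point per labelled voxel,
# scaled by the voxel spacing to obtain real-world millimetres. The spacing
# values below are illustrative; real ones are read from the DICOM headers.
voxel_spacing_mm = np.array([0.7, 0.7, 1.25])
points_mm = np.argwhere(mask) * voxel_spacing_mm

print(points_mm.shape)  # (number of segmented voxels, 3)
```

A surface-meshing step (for example, a marching-cubes style algorithm) can then turn such data into the 3D meshes mentioned above.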
In addition, NEXTMED is very simple to use, requiring just three steps:
- Upload the DICOM files generated by the CT scan.
- The system automatically segments the images and creates a 3D model.
- Study and manipulate the generated 3D model in Augmented Reality and Virtual Reality.
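The three steps above can be sketched as a simple pipeline. This is purely illustrative Python; none of these function names belong to the real NEXTMED API, and each stage is a stub standing in for the actual processing.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Model3D:
    vertices: List[Tuple[float, float, float]]


def upload_dicom(paths: List[str]) -> List[str]:
    # Step 1: receive the DICOM files produced by the CT scanner.
    return paths


def segment(slices: List[str]) -> Model3D:
    # Step 2: automatic segmentation would run here; we return a stub model.
    return Model3D(vertices=[(0.0, 0.0, 0.0)])


def open_in_viewer(model: Model3D) -> str:
    # Step 3: hand the generated model to the AR/VR viewer.
    return f"viewing model with {len(model.vertices)} vertices"


result = open_in_viewer(segment(upload_dicom(["scan_001.dcm"])))
print(result)
```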