A lab at the University of Alberta is setting up surgeons for success with augmented and virtual reality.
The University of Alberta Surgical Simulation Research Lab (SSRL) covers a wide range of techniques and technologies, bringing together knowledge from disciplines ranging from healthcare to computer programming.
“We do everything to create a simulation model to replace human beings being used as a training model,” said Bin Zheng, associate professor at U of A and director of the SSRL. “The surgeons need to practice the surgical skills. We like to create a situation for them to practice without harming the patient, without harming animals. That’s our goal.”
With virtual reality, surgeons can train on digital copies of anatomy to learn operating procedures. In reconstructive surgery, for example, a digital model can be projected onto a missing body part to guide the surgeon during the procedure. So far, the lab uses the technology in head and oral surgeries, but Zheng hopes it can be applied to other areas as well.
“I hope in the near future we are able to enhance our computer algorithms to allow us to have a more detailed description digitally of the internal organs to develop a strategy to guiding our strategy on internal organs too,” Zheng said.
Zheng says that in some cases this technology could save hours in the operating room, saving both time and money. It also allows the surgical team to better understand the procedure before surgery.
With augmented reality, the surgeon can overlay relevant information directly onto their view of the procedure as it happens.
The SSRL uses technology to track the patterns and movements of expert surgeons so that this information can be passed on to novices. One such technology tracks surgeons' eye movements to build a description of their coordination patterns and behaviours. Because pupils dilate under stress, measuring changes in the surgeon's pupil diameter can reveal the most stressful parts of a procedure. The data can also be used to compare expert surgeons with novices.
“We have solid data to describe eye-hand coordination in a very detailed way,” said Zheng.
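As a rough illustration of the idea (not the SSRL's actual analysis pipeline), pupil-diameter readings from an eye tracker could be compared against a resting baseline to flag the moments of a procedure where dilation, and thus stress, is elevated. The function name, sample values, and threshold below are all hypothetical:

```python
# Illustrative sketch only: flag "high-stress" moments in a pupil-diameter
# trace by comparing each sample against a resting baseline.
# The trace, baseline, and 0.15 mm threshold are made-up example values.

def stress_segments(pupil_mm, baseline_mm, threshold=0.15):
    """Return indices where pupil dilation exceeds the baseline by `threshold` mm."""
    return [i for i, d in enumerate(pupil_mm) if d - baseline_mm > threshold]

# Hypothetical trace sampled during a procedure (diameters in millimetres).
trace = [3.0, 3.1, 3.4, 3.5, 3.2, 3.0]
print(stress_segments(trace, baseline_mm=3.0))  # → [2, 3, 4]
```

A real system would smooth the signal and correct for lighting changes, since pupil size responds to brightness as well as cognitive load.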