WiFi detection makes it possible to track movement through walls and in the dark. A new technique for monitoring human activity in the metaverse was recently revealed by a team of researchers from Nanyang Technological University in Singapore.
Real-time representation of real-world items and people in a digital realm is a fundamental characteristic of the metaverse. For instance, users in virtual reality can adjust the digital environment by turning their heads to view things differently or by interacting with physical controllers in the actual world.
The current standard for recording human behavior in the metaverse is to use cameras, device-based sensors, or a mix of the two. However, as the researchers note in their preprint study, each of these modes has immediate limitations.
The researchers state that device-based sensing systems, including handheld controllers with motion sensors, “only capture the information at one point of the human body, they cannot model very complex activity.” Meanwhile, physical barriers and poor lighting conditions pose challenges for camera-based tracking systems.
Enter WiFi sensing
For years, researchers have tracked people’s movements using WiFi sensors. The radio waves used to transmit and receive WiFi data can be used to detect things in space, much like radar.
WiFi sensors are capable of tracking breathing and sleeping patterns, detecting heartbeats, and even detecting persons through walls.
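As an illustration only (not the researchers’ pipeline), WiFi-based motion detection is often framed as watching for fluctuation in received signal amplitudes: an empty room yields a stable signal, while a moving person perturbs the radio reflections. A minimal sketch of that idea, using hypothetical amplitude values and a made-up threshold:

```python
from statistics import pstdev

def detect_motion(amplitudes, threshold=0.5):
    """Flag motion when signal fluctuation exceeds a threshold.

    amplitudes: a window of WiFi signal-amplitude readings
    (hypothetical values, not real channel measurements).
    """
    return pstdev(amplitudes) > threshold

# A stable signal (empty room) vs. a fluctuating one (person moving).
still = [1.00, 1.02, 0.99, 1.01, 1.00, 0.98]
moving = [1.0, 2.3, 0.4, 1.9, 0.2, 2.8]

print(detect_motion(still))   # -> False
print(detect_motion(moving))  # -> True
```

Real systems work with far richer channel state information across many subcarriers, but the underlying signal-disturbance intuition is the same.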
Researchers studying the metaverse have already attempted, with differing degrees of success, to integrate WiFi sensing with conventional tracking techniques.
Enter artificial intelligence
AI models are required for WiFi tracking. Unfortunately, researchers have found it challenging to train these models.
According to the Singaporean team’s report, “current WiFi and vision solutions require large amounts of labeled data, which can be difficult to obtain. […] We offer MaskFi, a novel unsupervised multimodal HAR [human activity recognition] approach that trains models using just unlabeled video and WiFi activity data.”
To experiment with WiFi sensing for HAR, scientists must first build a library of training data for the necessary models. The data sets required to train AI can comprise thousands or even millions of data points, depending on the model’s goals. Often, labeling these data sets is the most time-consuming part of this research.
Enter MaskFi
To tackle this difficulty, the Nanyang Technological University team developed “MaskFi,” which employs AI models trained through a technique known as “unsupervised learning.”
In the unsupervised learning paradigm, an AI model is pre-trained on a much smaller data set before going through iterations until it can predict output states with a high level of accuracy. This allows researchers to concentrate their efforts on the models rather than the time-consuming task of creating solid training data sets.
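The system’s name hints at a common self-supervised objective: hide part of the raw input and score the model on recovering it, so the data supplies its own training targets and no human labeling is needed. A toy sketch of that masking objective (not MaskFi’s actual architecture, and with invented data):

```python
import random

def mask_sequence(seq, mask_ratio=0.3, seed=0):
    """Hide a fraction of a sequence; the model must later
    reconstruct the hidden values from the visible ones."""
    rng = random.Random(seed)
    masked = list(seq)
    hidden = {}
    for i in rng.sample(range(len(seq)), int(len(seq) * mask_ratio)):
        hidden[i] = masked[i]
        masked[i] = None  # None stands in for a "mask token"
    return masked, hidden

def reconstruct(masked):
    """Toy 'model': fill each gap with the average of its
    nearest visible neighbors (a real model would be learned)."""
    out = list(masked)
    for i, v in enumerate(out):
        if v is None:
            left = next((out[j] for j in range(i - 1, -1, -1)
                         if out[j] is not None), 0.0)
            right = next((out[j] for j in range(i + 1, len(out))
                          if out[j] is not None), left)
            out[i] = (left + right) / 2
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
masked, hidden = mask_sequence(signal)
guesses = reconstruct(masked)
# Training would compare guesses[i] to hidden[i] and update the model.
```

In a real system the reconstruction error would drive gradient updates over many iterations, which is what lets the model learn structure from unlabeled WiFi and video streams.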
Furthermore, the results provided by the researchers indicate that the MaskFi system achieved approximately 97% accuracy on two comparable benchmarks. This suggests that, with further development, this system might act as the spark for a completely new metaverse modality. Specifically, envision a metaverse capable of providing a 1:1 real-world representation in real time.