I’ve got to get my video game playing weight up.
My heyday was back in the ’90s, shuffling through clunky Nintendo game cartridges with a school friend as we weighed whether to play Duck Hunt for the umpteenth time, Donkey Kong or something else. I’d go on to spend some time with Sonic the Hedgehog (that fiery little rascal), throw some kicks around in Mortal Kombat, and make my way through an NBA basketball video game whose big gimmick was that all the players had outrageously sized heads.
As time went on, I played games less, but I’ve always marveled at how they continue to evolve, with the technology delivering this entertainment often pointing ahead to what’s on the horizon.
Among the many capabilities today’s gaming technologies afford players is the chance to play not just virtually but also with and against people who do not have to be sitting next to them. Gaming partners can be many, many miles away, and in these environments, players can pick a character to represent them or choose something like themselves.
To add another layer to the mix, it looks as though, someday, the likenesses people introduce to the playing field (their avatars) may show their players’ facial expressions in real time.
Happy, sad, disgusted or mad, real player emotions may be coming to a screen near you, and that could have some important implications for virtual learning. According to a new report published in the International Journal of Computational Vision and Robotics, a team of researchers has created a system that can read emotions from facial expressions with nearly 99 percent accuracy.
A team of researchers from Soongsil University’s School of Media and Vietnam National University developed the system, which uses a computer algorithm to measure features such as mouth shape, eyebrow position, the openness of the eyes and other factors, and to correlate them with human emotions like fear, surprise, joy, sadness and anger. Tested on thousands of facial images, the system proved highly accurate.
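The report doesn’t spell out the researchers’ actual algorithm, but the basic idea of mapping measured facial features to emotion labels can be sketched in a few lines. Everything below is illustrative: the feature names, value ranges and thresholds are my own assumptions, not details from the study.

```python
# Illustrative sketch only: hypothetical feature names and hand-picked
# thresholds, NOT the researchers' published method. Assumes each
# measurement has already been extracted from a face image and scaled:
#   mouth_curve  in [-1, 1]  (negative = downturned, positive = upturned)
#   brow_raise   in [-1, 1]  (negative = lowered/knitted, positive = raised)
#   eye_openness in [0, 1]   (0 = closed, 1 = wide open)

def classify_emotion(mouth_curve, brow_raise, eye_openness):
    """Map simple facial measurements to a coarse emotion label."""
    if mouth_curve > 0.5:
        return "joy"        # strongly upturned mouth
    if brow_raise > 0.7 and eye_openness > 0.8:
        return "surprise"   # raised brows and wide eyes
    if brow_raise < -0.5:
        return "anger"      # lowered, knitted brows
    if mouth_curve < -0.3 and eye_openness < 0.4:
        return "sadness"    # downturned mouth, drooping eyes
    if eye_openness > 0.9 and mouth_curve < 0:
        return "fear"       # wide eyes without a smile
    return "neutral"        # nothing distinctive measured
```

A real system would learn these boundaries from thousands of labeled images rather than hard-coding them, which is how the accuracy figures reported in the study become possible, but the input-to-label shape of the problem is the same.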
If intelligent gaming systems used technology like this to recognize users’ facial expressions and then conveyed them via avatar, it would make gaming experiences “more interactive, vivid and attractive,” the team said. I can understand how this feature could boost avatar attractiveness: a placid expression plastered across an avatar’s face can be a bit annoying when, for all your back-button pushing, you just can’t seem to escape a dead end, and your avatar, frustratingly, seems more serene than ever.
More to the point, this type of expression recognition technology will be extremely valuable in the virtual learning laboratories we can all anticipate seeing and hearing more about as time goes on. The human element in learning, even in pixelated form, is crucial. Without a raised hand, a brief expression of confusion is often all that stands between an important concept being absorbed for later use and being glossed over and forgotten.
Ensuring that the communication cues that make the physical classroom environment so valuable can be accurately conveyed in a virtual space could be a game changer.
Bravetta Hassell is a Chief Learning Officer associate editor. Comment below or email email@example.com