Training a Machine to Watch Soccer

Aug 31, 2017 | Alumni News

Identifying dynamic player roles proves critical for understanding coordinated team behavior

Though soccer players have assigned roles, it is routine for players to swap positions throughout the course of a game, or even during a single play. Players and fans recognize when this occurs and now, thanks to work led by engineers at Caltech, so can a computer.

The Caltech team, along with colleagues at Disney Research and STATS, a major supplier of sports data, has developed an algorithm that can automatically recognize team formations—how players arrange themselves on the field—when analyzing player tracking data. The algorithm can also imitate players’ behavior.

By understanding how players—in soccer, basketball, or other team sports—coordinate with one another to shift locations, a machine can better analyze the play of each individual athlete. When determining which player is going to run where, however, it helps to know a player’s role (e.g., left back or midfielder in soccer)—and that information is not included in the raw data.

“We’re training the algorithm to understand soccer at the same level that a fan would. It’s not just mindlessly watching faceless players move across a field; it’s watching strikers and right midfielders and forwards arrange themselves in specific formations,” says Yisong Yue, assistant professor of computing and mathematical sciences in the Division of Engineering and Applied Science at Caltech. Yue collaborated on the study with lead author and Caltech graduate student Hoang Le; Peter Carr of Disney Research; and Patrick Lucey of STATS.

“This new capability, however, has applications well beyond sports,” says Markus Gross, vice president for research at Disney Research. “These include the control of teams of robots for emergency response, autonomous vehicle planning, and modeling of collective animal behavior.”

The researchers presented their findings on August 8 at the International Conference on Machine Learning in Sydney, Australia.

In other soccer-related work presented earlier this year at the MIT Sloan Sports Analytics Conference, the researchers demonstrated that computers could review game footage and indicate where defending players ideally should have been—based on what the attacking team was doing—and identify occasions when defending players were out of position. This earlier work, however, relied on humans to identify player positions and team formations. Now, the algorithm can automatically discern player roles—and how they change during the course of the game.

To do so, the researchers combined supervised deep learning with unsupervised graphical models. Deep learning is a suite of powerful machine learning techniques that rely on brain-inspired programs called neural networks. In this case, the neural networks studied and then learned to imitate the demonstrated behaviors of professional soccer players—when, for example, a player would run to the left instead of to the right.
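To make the imitation part of this concrete, here is a minimal sketch of supervised imitation (behavior cloning) on tracking data. It is illustrative only: the names PlayerPolicy and imitation_step, the feed-forward architecture, and the use of PyTorch are assumptions for the sketch, not the models described in the paper, which are richer (e.g., recurrent).

```python
import torch
import torch.nn as nn

class PlayerPolicy(nn.Module):
    """Maps a player's tracking features (own position, teammates, ball, ...)
    to a predicted next displacement (dx, dy) on the field."""
    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # predicted (dx, dy) for the next step
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)

def imitation_step(policy, optimizer, features, demonstrated_moves):
    """One supervised update: nudge the policy toward the moves the
    professional players actually made in the tracking data."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(policy(features), demonstrated_moves)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Trained this way, a separate policy can be fit for each role—once the roles themselves are known, which is the problem the next step addresses.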

However, deep learning by itself cannot learn to imitate player behavior well, because the player roles are not annotated in the raw data. To resolve this issue, the researchers employed graphical models, which encoded basic domain knowledge about how different roles behave—for example, players in different roles tend to occupy different spaces on the playing field. This allowed the algorithm to infer roles from the raw, unannotated data.
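The idea that roles occupy characteristic regions of the field can be captured with a simple alternating procedure. The sketch below is a hedged stand-in for the paper's graphical model: it matches players to latent roles frame by frame using a minimum-cost assignment against per-role mean positions, then re-estimates those means, and repeats. The function name infer_roles and the mean-position "templates" are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def infer_roles(frames: np.ndarray, n_iters: int = 20) -> np.ndarray:
    """frames: (T, P, 2) float positions of P outfield players over T timesteps.
    Returns a (T, P) array giving each player's inferred role per frame."""
    T, P, _ = frames.shape
    role_means = frames[0].astype(float)         # seed role templates from the first frame
    assignments = np.zeros((T, P), dtype=int)
    for _ in range(n_iters):
        # Assignment step: in each frame, match players to roles so that the
        # total distance to the role templates is minimized.
        for t in range(T):
            cost = np.linalg.norm(frames[t][:, None, :] - role_means[None, :, :], axis=-1)
            players, roles = linear_sum_assignment(cost)
            assignments[t, players] = roles
        # Update step: re-estimate each role's template as the mean position
        # of whoever occupied that role across the match.
        for r in range(P):
            role_means[r] = frames[assignments == r].mean(axis=0)
    return assignments
```

In the published method, role inference and policy training are learned jointly rather than in this stripped-down alternation, but the sketch shows how unannotated positions alone can yield a consistent role labeling that the imitation networks can then use.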

In an experiment involving data from 45 games played by European professional soccer teams, the researchers used this approach to simultaneously learn both how to infer roles from the raw data and how to imitate each role (excluding the goalkeepers).

The researchers also ran experiments on a predator-prey simulation game, where four predators and one prey are positioned on a grid. The predators must coordinate their actions to capture the prey in the least possible time. The algorithm quickly approached human performance, Le noted.
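For readers who want a feel for that benchmark, here is a minimal sketch of a predator-prey grid world. It is an assumption-laden stand-in, not the exact environment from the paper: the grid size, the clamped boundaries, and the greedy predator rule (used here in place of the learned, coordinated policies) are all illustrative.

```python
import random

GRID = 10                                            # grid is GRID x GRID cells
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]   # up, down, right, left, stay

def clamp(v):
    return max(0, min(GRID - 1, v))

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step(predators, prey):
    """One timestep: the prey moves randomly; each predator moves greedily
    toward the prey (a stand-in for the learned, coordinated policies)."""
    m = random.choice(MOVES)
    prey = (clamp(prey[0] + m[0]), clamp(prey[1] + m[1]))
    new_predators = []
    for px, py in predators:
        best = min(MOVES, key=lambda mv: manhattan((clamp(px + mv[0]), clamp(py + mv[1])), prey))
        new_predators.append((clamp(px + best[0]), clamp(py + best[1])))
    captured = any(p == prey for p in new_predators)
    return new_predators, prey, captured
```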

As part of this publication, STATS has released the tracking data to fuel further research in multi-agent learning.

The paper is titled “Coordinated Multi-Agent Imitation Learning.” This research was supported by the National Science Foundation; the Jet Propulsion Laboratory, which is managed by Caltech for NASA; a Bloomberg Data Science Research Grant; and Northrop Grumman.

Written by Robert Perkins
