
Back in 2002, developer Monolith Productions was showing off a technical demo of the AI systems in its upcoming game, No One Lives Forever 2: A Spy in H.A.R.M.’s Way. The artificial intelligence in that game was a substantial leap forward compared to its contemporaries. Rather than simply standing around until the player character walked by, non-player characters behaved, seemingly, like real people. They’d step outside for a cigarette, engage in idle chatter, go to a nearby soda machine for a drink, and even use the bathroom (characters inside would use toilets; characters outside would find the nearest bush).

Monolith’s head of AI, Jeff Orkin, demonstrated how characters would adjust dynamically to changes in the environment. He pointed out a character heading toward the bathroom and, before the character arrived, dropped a grenade in the toilet and blew it up. The guard stepped into the bathroom, saw there was no toilet, and walked back out into the hallway. Then he dutifully walked up to a nearby potted plant and relieved himself there.

This outcome illustrated the unpredictability of this new type of artificial intelligence. The system was built around goals rather than predetermined scripts, and because of that, its outcomes couldn’t always be anticipated. In a 2006 New York Times profile, Orkin explained it like this:

“We used to manually lay out all of the steps that an agent would take: do this, then do that, and if this other thing happens then try this. Now we tell the agent: here are your goals, here are your basic tools, you figure out how to accomplish it.”
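That goal-oriented approach can be sketched in a few lines of code. The planner below is purely illustrative (the actions, facts, and depth limit are my own invention, not Monolith’s implementation), but it captures the core idea: hand the agent a goal and a toolbox of actions, and let it search for any sequence of actions whose effects satisfy the goal.

```python
# Illustrative goal-driven planner (not Monolith's actual system): the agent
# is given a goal and a set of actions, and searches for any sequence of
# actions whose combined effects satisfy the goal.
from typing import NamedTuple

class Action(NamedTuple):
    name: str
    preconditions: frozenset  # facts that must already be true
    effects: frozenset        # facts that become true afterwards

def plan(goal, state, actions, depth=5):
    """Depth-limited search for an action sequence that satisfies the goal."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for action in actions:
        if action.effects <= state:       # this action makes no progress
            continue
        if action.preconditions <= state:
            rest = plan(goal, state | action.effects, actions, depth - 1)
            if rest is not None:
                return [action.name] + rest
    return None

actions = [
    Action("walk_to_bathroom", frozenset(), frozenset({"at_bathroom"})),
    Action("use_toilet", frozenset({"at_bathroom", "toilet_exists"}),
           frozenset({"relieved"})),
    Action("use_potted_plant", frozenset({"plant_nearby"}),
           frozenset({"relieved"})),
]

# With the toilet intact, the planner reaches the goal the normal way...
print(plan(frozenset({"relieved"}), frozenset({"toilet_exists", "plant_nearby"}), actions))
# ...and with the toilet destroyed, the same goal falls back to the potted plant.
print(plan(frozenset({"relieved"}), frozenset({"plant_nearby"}), actions))
```

Nothing in the second run was scripted as a “toilet destroyed” case; the potted plant shows up only because it is another route to the same goal.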

A video game, at its most basic level, is a series of interlocking systems. Those systems repeat on a loop until an unexpected variable—human interaction—disrupts the loop, at which point the AI needs to adapt. In Pac-Man, Inky, Blinky, Pinky, and Clyde drift aimlessly through the play area, until that pesky yellow circle starts gobbling up all their precious dots, at which point their goal changes.

The notion that an AI system follows a set of repeatable functions until a disruption triggers action maps directly onto field service management. When systems are operating normally, service is generally not a consideration; in the event of a disruption, field service is often the solution. Applying artificial intelligence effectively to this system means the AI must adapt dynamically to fulfill its intended goal, weighing the options and choosing the best one quickly, without the baggage, slowdown, and scattered attention that carbon-based inputs suffer from.

We know that Artificial Intelligence for Field Service professionals is moving in this direction. According to Aberdeen’s recent AI perceptions study, the number-one area for AI implementation in field service is task automation, with 90% of firms indicating they are considering or actively implementing the technology. Video games have spent decades perfecting task automation within the controlled systems of a digital space. Field Service Managers could take a similar approach, considering how AI interacts with the far-less-controllable systems of the real world and how it adapts to obstacles in pursuit of its intended goals.

One of my favorite gaming examples of how goals are laid out to manage an experience comes from 2014’s Alien: Isolation. The game’s primary antagonist, the Alien, actually has two AI “brains” with different goals and capabilities. Brain A controls the Alien’s movement, searches for the player, and will hunt them down if it sees or hears them. Brain B doesn’t control anything, but it knows exactly where the player is. All it can do is provide hints about the player’s location to Brain A.

Let’s look at the goal hierarchy for these two brains. Brain A’s primary goal is to eliminate any moving objects. It has three input sensors to identify those objects: sight, sound, and hints from Brain B. If Brain A’s sensors identify a sound, any sound, it investigates. One step beyond that, the Alien’s AI has a built-in “jobs” system: a list of tasks to complete, the locations of those tasks, the steps needed to accomplish them, and their priority. These change in real time based on in-game stimuli and adjust to obstacles in the environment.
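For illustration, here is one way a priority-driven jobs list like that could be structured. The job names, fields, and priority values below are assumptions of mine, not the game’s actual code.

```python
# Illustrative "jobs" list: stimuli push new jobs in, and the agent always
# works on whatever currently carries the highest priority.
import heapq
import itertools

class JobSystem:
    def __init__(self):
        self._queue = []                 # min-heap of (negated priority, order, job)
        self._order = itertools.count()  # tie-breaker so equal priorities stay FIFO

    def add_job(self, name, location, steps, priority):
        job = {"name": name, "location": location, "steps": steps}
        heapq.heappush(self._queue, (-priority, next(self._order), job))

    def next_job(self):
        """Pop and return the highest-priority job, or None if the agent is idle."""
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]

jobs = JobSystem()
jobs.add_job("patrol_corridor", location=(12, 4), steps=["walk", "look"], priority=1)

# A sound stimulus arrives: investigating it outranks the routine patrol.
jobs.add_job("investigate_noise", location=(3, 9), steps=["walk", "search"], priority=5)

print(jobs.next_job()["name"])   # investigate_noise
```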

Brain B has one goal: customer experience. It may seem ludicrous that increasing the likelihood of a person being gobbled up by an Alien would be considered good CX, but keep in mind: the customer chose this product for a specific experience. Because of the randomness of the Alien’s AI, without some direction the player could complete the game without encountering the Alien at all. So if Brain A has drifted a certain distance away from the player character, Brain B gives it just enough information to guide the Alien closer to the player.
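A hedged sketch of that “director” role might look like the snippet below. The leash distance and the shape of the hint are invented for illustration; the game’s real tuning isn’t public here.

```python
# Illustrative "director" brain: it always knows where the player is, but all
# it can do is hand Brain A a rough search region when the Alien drifts too far.
import math

LEASH_DISTANCE = 50.0   # assumed threshold, purely for illustration

def director_hint(alien_pos, player_pos):
    """Return a broad area for Brain A to search, or None to stay hands-off."""
    distance = math.dist(alien_pos, player_pos)
    if distance <= LEASH_DISTANCE:
        return None   # close enough; let the hunt play out naturally
    # Reveal only a broad search region rather than a precise target,
    # so the encounter still feels like a hunt instead of a homing missile.
    return {"search_area": player_pos, "radius": distance * 0.5}

print(director_hint(alien_pos=(0, 0), player_pos=(120, 30)))
```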

I think this duality, with one AI instance focused on operations and the other on customer experience, serves as a valuable template for the development of service management solutions. Obviously, the implementation would look very different. Consider, for example, how AI-powered IoT could reduce the need for service appointments. An operationally focused system could use its sensors to identify anomalies and throttle down internal systems, in the same way the Alien investigates noises. Reacting to that, a secondary system could allocate the appropriate resources for repair. Or imagine intelligent routing, where one system’s goal is to maximize the number of jobs completed in a day while minimizing service hours worked, and the other focuses on proactive communications and small, accurate service windows based on historical data and empirical projections.
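As a hypothetical sketch, assuming invented class names, thresholds, and asset IDs, the split might look something like this: one brain watches the asset and raises a request, the other turns that request into a customer-facing appointment.

```python
# Hypothetical two-brain split for a connected asset (all names and
# thresholds here are invented for illustration, not a product design).

class OperationalBrain:
    """Watches sensor readings, throttles the asset, and raises a service request."""
    def __init__(self, temp_limit_c=90.0):
        self.temp_limit_c = temp_limit_c

    def check(self, asset_id, temperature_c):
        if temperature_c <= self.temp_limit_c:
            return None
        return {"asset": asset_id, "issue": "overheating",
                "action_taken": "throttled", "reading": temperature_c}

class ExperienceBrain:
    """Turns a service request into a narrow, customer-friendly appointment window."""
    def schedule(self, request, historical_repair_hours=1.5):
        window = max(1, round(historical_repair_hours * 2))   # pad the estimate
        return (f"Technician booked for asset {request['asset']}: "
                f"{request['issue']}, {window}-hour arrival window.")

ops, cx = OperationalBrain(), ExperienceBrain()
request = ops.check("pump-7", temperature_c=104.2)
if request:
    print(cx.schedule(request))
```

The point of the split is the same as in the game: the operational brain never has to think about the customer, and the experience brain never has to think about thermodynamics.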

There are dozens of possible use cases across disciplines that could take advantage of this type of AI, but let’s take it a step further than video games. AI-powered systems would be formidable on their own, but they have the potential to be more dynamic, more powerful, and more useful when coupled with advancements in machine learning. With machine learning in tandem, our service management system can not only meet its goals but improve on them: applying historical information, avoiding slowdown, learning to emulate human functions, and making positive habitual behaviors implicit in its decision-making.

Aberdeen’s research has shown that Field Service managers are prepared to take the leap to AI, with 72% indicating that within the next five years AI will be an invaluable tool in their arsenal. For your business to take the first step, think about the sensors AI would benefit most from using as internal triggers. For the Alien, those were sight, sound, and Brain B. For a connected asset, they could be temperature, output, and historical data or scheduled maintenance requests provided by a secondary system. The bottom line: even before you’re ready to implement an AI solution, start laying the groundwork to future-proof your business.
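A trigger inventory doesn’t need AI behind it on day one; it can start as a handful of named rules. Everything below (field names, thresholds, dates) is invented purely for illustration.

```python
# Hypothetical trigger inventory for a connected asset: enumerate the signals
# you would eventually want an AI to watch, even before any AI sits on top.
from datetime import date

TRIGGERS = {
    "temperature": lambda asset: asset["temperature_c"] > 85,
    "output":      lambda asset: asset["units_per_hour"] < 40,
    "maintenance": lambda asset: asset["next_service"] <= date.today(),
}

def fired_triggers(asset):
    """Return the names of every trigger the asset currently trips."""
    return [name for name, rule in TRIGGERS.items() if rule(asset)]

asset = {"temperature_c": 91.0, "units_per_hour": 55,
         "next_service": date(2030, 1, 1)}
print(fired_triggers(asset))   # ['temperature']
```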

Artificial intelligence in video games continues to let developers build problem-solvers within the controlled systems of the game world. We’re beginning to take the steps necessary to bring those problem-solvers to systems that interact with the physical world. Doing so intelligently, with respect for the infrastructure needed to make it successful, will be a key factor in the evolution of Field Service. Fingers crossed that we can work out any kinks with potted plants before then.

Tom Paquin is a research analyst at Aberdeen, specializing in Service Management. Follow Tom on Twitter to stay up to date on the technology and best practices Industry Leaders are using to maximize operational efficiency.
