To borrow a concept from a legendary car commercial, today’s artificial intelligence is “not your father’s” AI. Any technology (or family of technologies, in AI’s case) that does not advance becomes irrelevant. AI has not stood still: it is more powerful and better suited to many more commercial applications today than it was 10 years ago, and it continues to develop.
One of the primary reasons businesses flock to AI, and specifically to machine learning (ML), is that it works in a human-like manner, learning from experience as it processes hundreds of thousands or even millions of examples of whatever it has been tasked with figuring out. The classic example is how Google trained an AI to identify cats in YouTube videos without giving it explicit rules to follow.
As the use of AI has expanded into more use cases, though, the inability of humans to understand precisely how AI makes decisions has become problematic. If a car company doesn’t know how its AI operates an autonomous vehicle, how can the executives who are answerable for serious accidents prevent them? This lack of transparency is known today as AI’s “black box” problem.
‘Black Box’ Hampers AI’s Expansion
“At the moment, some machine learning models that underlie AI applications qualify as ‘black boxes,’” according to “What it means to open AI’s black box,” an article by two AI experts at consulting firm PwC. “That is, humans can’t always understand exactly how a given machine learning algorithm makes decisions … To reach the point where AI helps people work better and smarter, business leaders must take steps to help people understand how AI learns.”
Unlocking the black box is so vital for mission-critical uses of AI, such as next-generation weapons, that it has become a priority for the Defense Advanced Research Projects Agency (DARPA). The federal agency, which funded much of AI’s pioneering research, launched its Explainable AI (XAI) program in 2016 to address the issue.
“Continued advances promise to produce autonomous systems that will perceive, learn, decide and act on their own. However, the effectiveness of these systems is limited by the machine’s current inability to explain their decisions and actions to human users,” writes David Gunning, program manager for DARPA’s Information Innovation Office.
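One way to picture the gap Gunning describes is to contrast a model that returns only a verdict with one that also surfaces why it reached that verdict. The sketch below is purely illustrative (it is not DARPA’s or PwC’s method): a toy alerting model whose per-feature contributions make its decision inspectable. The feature names and weights are hypothetical.

```python
# Illustrative sketch of an 'explainable' decision: the model reports each
# feature's contribution to the final score, not just the alert/no-alert verdict.
# Feature names, weights, and threshold are hypothetical.

WEIGHTS = {"cpu_spike": 2.0, "error_rate": 3.5, "off_hours": 0.5}
THRESHOLD = 4.0

def explain_alert(features: dict) -> dict:
    """Score an event and report per-feature contributions."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    return {
        "alert": score >= THRESHOLD,
        "score": score,
        "contributions": contributions,  # the 'explanation' a black box omits
    }

result = explain_alert({"cpu_spike": 1.0, "error_rate": 1.0, "off_hours": 1.0})
# result["contributions"] shows *why* the model alerted, not just *that* it did.
```

A black-box model would return only `result["alert"]`; exposing the contributions is what lets a human user audit the decision.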
Assaf Resnick, CEO of autonomous operations platform BigPanda, spells out three downsides to black-box AIOps.
- “There’s no transparency, so your IT Ops and NOC teams don’t understand how and why their black-box AIOps solution is making decisions a certain way.”
- “There’s no testability because black-box machine learning systems don’t surface their logic and let users test and preview the results before deploying the logic to production.”
- “There’s no control. Users can’t incorporate their hard-won tribal and business knowledge into the logic created by the ‘machine.’”
“Taken together,” Resnick adds, “this means that it’s hard for IT Ops teams to trust black-box machine learning, feel confident in the results generated by black-box tools, and embrace and use these tools.”
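Resnick’s third point, control, can be sketched in code: letting operators inject their hard-won business knowledge as explicit, named rules that take precedence over whatever the machine learned. This is a hedged illustration of the idea, not BigPanda’s implementation; the rule structure and function names are hypothetical.

```python
# Hypothetical sketch of user 'control': explicit business rules layered
# over a machine-derived default. Not any vendor's actual implementation.

def learned_priority(alert: dict) -> int:
    # Stand-in for a priority score a machine learning model might produce.
    return 2 if alert.get("error_rate", 0) > 0.1 else 1

USER_RULES = [
    # Tribal knowledge the machine can't learn on its own:
    # the payments service is always business-critical.
    lambda alert: 3 if alert.get("service") == "payments" else None,
]

def priority(alert: dict) -> int:
    """User-defined rules override the learned default when they match."""
    for rule in USER_RULES:
        override = rule(alert)
        if override is not None:
            return override
    return learned_priority(alert)

print(priority({"service": "payments", "error_rate": 0.01}))  # prints 3, not 1
```

In a black-box system there is no seam like `USER_RULES` where this knowledge could be inserted; the learned logic is the only logic.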
Managing IT Ops Complexity with AI
Meanwhile, the use of AI and ML is expanding in many business functions, including IT operations, where the emerging trend is known as AIOps. Charley Rich, senior director analyst at Gartner, predicts that approximately half of all enterprises will actively use AIOps by 2020, five times the share that had adopted it a year earlier.
“IT operations is challenged by the opposing forces of cost reduction on the one hand and increasing operations complexity on the other,” according to the Market Guide for AIOps platforms by Gartner. “The complexity can be defined across the three dimensions of volume, variety and velocity.”
In a paper titled “The Future of IT Ops Is Autonomous,” Nancy Gohring, senior analyst, application and infrastructure performance at 451 Research, observes that “complex IT environments generate a volume of IT operations data so large that humans literally can’t effectively evaluate it … Tools that employ machine learning and enable automation, as opposed to traditional, rules-based tools, relieve challenges that occur when teams are inundated with alert storms and struggle to collaborate due to siloed data.”
Opening the Black Box for AIOps
To crack that black box open, BigPanda’s solution features Open Box Machine Learning.
“Open Box Machine Learning means that IT operations teams can visualize and understand the machine learning logic that drives its intelligent automation processes. In addition, users can control and customize its automation logic by adding the situational, historical and business knowledge that is unique to their organization,” says Elik Eizenberg, BigPanda’s cofounder and chief technology officer.
In addition to transparency and control, Open Box Machine Learning provides testability, meaning that users can preview the results the algorithm will produce after it is modified. Because ML models do not reason the way humans do, algorithm changes that make sense to experienced subject-matter experts might not return exactly the results they expect, and additional tweaks may be necessary.
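The preview-before-deploy idea can be made concrete with a small sketch: candidate correlation logic is run against historical alerts so operators can see what incidents it would have produced before it goes live. This is a minimal, hypothetical illustration of the testability concept, not BigPanda’s actual correlation algorithm; the five-minute window and the host-based grouping rule are assumptions for the example.

```python
# Minimal sketch of 'testability': run candidate correlation logic against
# historical alerts and inspect the result before deploying it.
# Window size and grouping rule are hypothetical, for illustration only.

from datetime import datetime, timedelta

def correlate(alerts, window=timedelta(minutes=5)):
    """Group alerts from the same host that occur within a time window."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        for incident in incidents:
            if (alert["host"] == incident[-1]["host"]
                    and alert["time"] - incident[-1]["time"] <= window):
                incident.append(alert)
                break
        else:
            incidents.append([alert])  # no match: open a new incident
    return incidents

# Preview: replay historical data through the candidate logic before deploying.
history = [
    {"host": "db1", "time": datetime(2019, 1, 1, 9, 0)},
    {"host": "db1", "time": datetime(2019, 1, 1, 9, 3)},
    {"host": "web7", "time": datetime(2019, 1, 1, 9, 1)},
]
preview = correlate(history)
# Two incidents: the two db1 alerts merge; the web7 alert stands alone.
```

If the preview merges alerts an expert knows are unrelated (or misses ones that belong together), the window or grouping rule can be tweaked and re-previewed, which is exactly the loop a black box forbids.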
“AI is not supposed to replace human decision-making; it is supposed to help humans make better decisions,” observes AJ Abdallat, CEO of Beyond Limits and a member of the Forbes Technology Council, in an article for Forbes. “We need to open the black box. If people do not trust the decision-making capabilities of an AI system, these systems will never achieve wide adoption. For humans to trust AI, systems must not lock all of their secrets inside a black box,” he says.
“A black-box solution may imply that IT staff will not need to apply their experience or knowledge,” agrees Stefan Apitz, an independent technical consultant and an advisor to BigPanda.
Even with open AI technology, it will take time for IT Ops teams to implement new practices and become comfortable with new ways of working — and this will require patience on the part of IT management.
Nonetheless, “automation in combination with advanced learning capabilities is essential because IT environments are getting more complex and more dynamic,” Apitz points out. “To that end, the sooner one implements frameworks to move in that direction, the better it is.”