What's going on underneath your AI?
From hiring to customer service, artificial intelligence rules many domains that once required the delicate human touch.
Is this person nervous? What's my bank account balance? Where do I return these shoes? Is Jane or Bob a better hire for our sales department?
Just a few years ago, it took a human to answer these queries. Today, machines do it faster and — more often than not — better.
From the phone in your pocket to the new voice in your company's boardroom, artificial intelligence touches many facets of personal and professional lives. The growth of human-facing, AI-based tools has given rise to a new set of niche, midmarket companies harnessing the advanced technology for B2B and B2C applications.
The average consumer interacts with AI-based chatbots and hiring tools frequently, often without realizing it. These tools may seem simple and straightforward to users, but the technology working underneath them is anything but.
Building and fixing the product
Getting a product ready for market can take a lot of time and data. But what happens when the AI needs to forget or correct?
The method for reconfiguring an algorithm depends on what type of AI is being used, according to Josh Sutton, CEO of Agorai, in an interview with CIO Dive. Deep learning, a subset of machine learning, is "pattern matching on steroids," taking different data points and understanding outcomes and answers, whereas symbolic AI is based more on a representation of the world in the way humans understand it.
Correcting symbolic AI systems is relatively straightforward: Developers modify or remove unwanted components. Mistakes in a deep learning algorithm, however, are similar to a bad behavior, demanding a more complicated fix.
Putting a chatbot on a website is easy, but building in security is hard.
Jeff Epstein, VP of product marketing and communications of Comm100
Just like a golfer who has developed a bad swing, an algorithm can't unlearn a bad behavior in one fell swoop, according to Sutton. Incremental reinforcement has to be provided so that the body, or algorithm, adapts to doing something different from what it was trained to do.
For consumer-facing AI products, such as chatbots, there are various avenues to assess AI output and improve it.
Comm100, a customer service and communication provider, helps customers train deep learning-based bots that interact with their customers, starting with anticipating what questions customers will ask, building out a question base and training a bot, according to Kevin Gao, founder and CEO, in an interview with CIO Dive.
The AI provides a confidence score for each answer, and if a score falls below a threshold, the question is lifted out and assessed, potentially turned into a new question or added to an existing one. Human agents can also signal if an answer is helpful or not, triggering an expert to look into it, Gao said.
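The triage Gao describes can be sketched in a few lines. This is an illustrative mock-up, not Comm100's API: the function name, threshold value and return shape are all assumptions.

```python
# Hypothetical sketch of confidence-threshold triage for a customer service bot.
CONFIDENCE_THRESHOLD = 0.75  # assumed cutoff; real deployments tune this value

def route_answer(question: str, answer: str, confidence: float) -> dict:
    """Return the bot's answer if confident, else flag the question for review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"handled_by": "bot", "answer": answer}
    # Low-confidence questions are lifted out for assessment: an expert may
    # turn them into a new canned question or merge them into an existing one.
    return {"handled_by": "review_queue", "question": question}

print(route_answer("Where do I return these shoes?", "Use the returns portal.", 0.92))
print(route_answer("Can I return shoes I've worn?", "Use the returns portal.", 0.41))
```

The second call falls below the threshold, so instead of guessing, the bot queues the question for a human expert, which is the feedback loop that gradually grows the question base.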
Properly trained bots start with good data, but even in an age where petabytes of data are created daily, acquiring good training data is a challenge. Data ownership and the lack of a comprehensive data marketplace means many companies have to cobble together solutions to train their systems, even if these solutions contain biases or incorrect information, according to Sutton.
This problem is compounded by the fact that a few companies, such as Facebook, Google and Amazon, hold more data than the average company could ever dream of, and they leverage it to maintain market and consumer dominance.
Today, chatbots are advanced enough to anticipate what a customer wants, react to questions in different forms and interact with multimedia. But even the best chatbot needs human reinforcement sometimes.
Digital tools today can scientifically measure dozens of characteristics about an individual's personality, from behavior traits and work styles to motivators and value systems.
Juan Betancourt, CEO of Humantelligence
If a customer is angry or triggers certain keywords, businesses may want a human agent to take over the interaction, though the customer typically won't realize the transition is happening, according to Gao. Chatbots are reliable, but AI is still just a tool to augment human capabilities, not operate independently of them.
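An escalation rule like the one Gao describes can be sketched as a simple check combining a sentiment signal with trigger phrases. The keyword list, anger score and threshold below are invented for illustration; no vendor's actual implementation is shown.

```python
# Illustrative sketch of keyword- and sentiment-based handoff to a human agent.
ESCALATION_KEYWORDS = {"cancel", "lawyer", "complaint", "unacceptable"}
ANGER_THRESHOLD = 0.8  # assumed score from an upstream sentiment model (0 to 1)

def should_escalate(message: str, anger_score: float) -> bool:
    """Route the conversation to a human agent if the customer appears angry
    or uses a trigger phrase; the customer need not see the transition."""
    if anger_score > ANGER_THRESHOLD:
        return True
    text = message.lower()
    return any(keyword in text for keyword in ESCALATION_KEYWORDS)

print(should_escalate("I want to speak to a lawyer", 0.2))   # keyword trigger
print(should_escalate("What's my account balance?", 0.1))    # stays with the bot
```

In practice the anger score would come from a sentiment or emotion model rather than being passed in directly, but the routing logic is the same.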
As AI tools become more ubiquitous, security and privacy are recurring concerns for executives. Putting a chatbot on a website is easy, but building in security is hard, according to Jeff Epstein, VP of product marketing and communications at Comm100, in an interview with CIO Dive.
Tools have to seamlessly integrate across systems, communicate well and be effortless for companies to deploy, he said. Nothing lives in isolation.
For example, a chatbot connected to a bank that is expected to access and pull up information about account balances needs connections to mission critical systems with strong firewalls and APIs. But security can't come at the expense of functionality, and the entire web of technology has to "play well in that technology sandbox."
Did a machine hire you?
AI applications are being integrated into more business tools and platforms, but many employees interact with the technology well before starting their first day of work. AI, ML and DL are steadily taking hold in the recruiting and hiring process — and doing far more than expediting the review of resumes.
Only 30% of predictive success is based on the traditional resume information a hiring manager sees, such as experience, GPA, education and references, according to Juan Betancourt, CEO of Humantelligence, in an interview with CIO Dive. At 60%, the highest indicator of success is "EQ," or emotional intelligence.
The idea of a machine quantifying and assessing the nuances of human character and personality unsettles many. How can a string of binary code assess the subjectivity of the human condition and the complexities of workplace culture better than a real person?
A lot better, it turns out.
Digital tools today can scientifically measure dozens of characteristics about an individual's personality, from behavior traits and work styles to motivators and value systems, according to Betancourt. With the right tools, a company can code its culture and use AI to fill the gaps and shortcomings.
The use of advanced analyses like voice and facial recognition brings up privacy questions, such as whether job candidates need to be notified upfront that the tools are being used.
Kurt Heikkinen, CEO of Montage
AI can help a business look at its top performers and pull out the characteristics that make them successful, then look at the bottom group of performers and figure out what areas need development, Betancourt said. When bringing in new talent, the algorithms can help break the pattern of "people hiring themselves" and improve diversity and objectivity.
AI can also make the process of hiring more efficient, according to Ankit Somani, co-founder of AllyO, in an interview with CIO Dive. For example, the hiring of an engineer has more human touch points than the hiring of a warehouse worker, and businesses need to make sure that value is being generated out of every touch point.
Low touch point positions have greater room for efficiency improvements, and if the amount of time per touch point can be reduced, applicants can apply to more jobs.
For positions like waitressing or sales floor reps, hiring often consists of coming in, meeting the manager and quickly receiving a "you're hired" decision. These jobs have some of the highest rates of turnover, somewhere between 50% and 60%, and the widespread introduction of AI-based candidate profiles would have a huge impact in that sector by finding the best fit and reducing turnover, according to Betancourt.
But AI in hiring can easily be misused or misaligned, whether intentionally or not. The "black box" problem of AI, where input goes in and a decision comes out without explanation, needs to be addressed, Somani said. Sure, humans often make decisions with fewer data points than an algorithm, but if a machine is going to narrow down candidate pools, algorithms need to show why some candidates make it and some don't.
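One minimal answer to the black-box concern is a scoring model whose per-feature contributions can be surfaced alongside the decision. The sketch below uses a linear model with invented feature names and weights; it illustrates the kind of explanation Somani calls for, not any vendor's actual screening system.

```python
# Hypothetical linear screening score whose breakdown doubles as an explanation.
WEIGHTS = {"years_experience": 0.4, "skills_match": 0.5, "referral": 0.1}

def score_with_explanation(candidate: dict) -> tuple:
    """Score a candidate and return each feature's contribution, so a
    recruiter can see *why* the candidate ranked where they did."""
    contributions = {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}
    total = sum(contributions.values())
    return total, contributions

total, why = score_with_explanation(
    {"years_experience": 0.6, "skills_match": 0.9, "referral": 1.0}
)
print(f"score={total:.2f}", why)
```

Deep learning screeners can't be decomposed this cleanly, which is why post-hoc explanation techniques (feature attribution, counterfactuals) exist; the point is that some account of the decision has to accompany the output.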
Just like a golfer who has developed a bad swing, AI can't unlearn a bad behavior in one fell swoop.
Josh Sutton, CEO of Agorai
As the interview process takes on more mediums, including automated questions via text, phone and video, there are more ways to assess candidates. For example, just as much can be gleaned in a conversation from how things are said as from what is said, according to Margaret Olsen, SVP of engineering at Cogito, in an interview with CIO Dive. AI applications can pinpoint the emotional contents of the voice, picking up on subtle clues that a human might miss.
Tools that test empathy and linguistic complexity in speech are still in their early days, though more applications will be introduced as the market matures. Facial recognition, however, is much further out: only 6% of candidates would feel comfortable if the visual tool were used during an interview, according to Kurt Heikkinen, president and CEO of Montage, in an interview with CIO Dive.
The use of advanced analyses like voice and facial recognition also brings up several privacy questions: whether candidates need to be notified upfront that the tools are being used, whether an automated decision could determine part of their selection, and what data is being collected, Heikkinen said.
Follow Alex Hickey on Twitter