Businesses are making their way through generative AI adoption plans, searching for value amid immense hype. Interest in the technology and the monthslong deluge of generative AI-powered tools have put implementation within reach.
However, most CIOs are taking a measured approach. Successful implementation will require a focus on ethics, privacy and security. Guardrails within services and tools as well as ground rules for acceptable use will separate enterprise success from low-level experimentation.
From the IT service desk to the software development pipeline and even outside of IT, generative AI is positioned to impact the way work gets done. By leaning on acceptable use policies and ethical frameworks, CIOs can confidently move forward even as questions linger related to regulatory action.
In this Trendline, CIO Dive explores where and how leaders are navigating generative AI adoption.
More companies are ramping up generative AI pilots
The share of companies piloting the technology has tripled in less than six months, Gartner data shows.
By: Lindsey Wilkinson • Published Oct. 3, 2023
More companies are entering the initial stages of generative AI adoption, according to Gartner research published Tuesday.
More than 2 in 5 executives say their organization is currently piloting generative AI, according to the September survey of more than 1,400 leaders. Another 10% of organizations are in the production stage.
The survey marks a significant increase from Gartner data collected between March and April this year, which showed only 15% of organizations piloting generative AI and 4% in the production stage.
It has been less than a year since ChatGPT debuted, sparking a generative AI wave that has engulfed 2023.
Vendors responded quickly, adding generative AI features to popular tools and introducing new services. But overall, enterprises have taken a measured approach to adoption.
Several practical concerns drove enterprise caution. Many generative AI tools initially lacked security guardrails and safety measures. Cost and infrastructure requirements posed further hurdles, which vendors have tried to mitigate through partnerships and investments in their own capabilities. Hallucinations and incorrect outputs are another deterrent.
As vendors improve performance and safety, 78% of executives say the benefits of generative AI outweigh its risks, according to Gartner’s data.
“I see a pretty equally rapid evolution on the vendor side of things,” Jeff Wong, global chief innovation officer at EY, said. “Even if I reached back six months ago, they recognized, first of all, that privacy was a problem… and that was not an acceptable solution to enterprises, but it was almost immediately recognized and people addressed that whether it was through segregation of servers or data assets.”
For enterprise use, technology leaders are tasked with finding valuable use cases, identifying which tools to use, running pilots and making sure employees are armed with knowledge and resources to leverage the technology.
CIOs can lean on technology teams and those outside traditional IT functions to identify use cases, an implicit acknowledgment that generative AI implementation traverses traditional business lines.
Enterprises are looking to implement the technology in multiple areas. Nearly half of organizations surveyed said teams are scaling generative AI across several business functions, with 1 in 5 organizations scaling across more than three different areas.
Software development, marketing and customer service are currently the most common functions enterprises are piloting for adoption.
Don’t rush ethics in generative AI adoption plans
When leaders feel the pressure to adopt generative AI quickly, ethical frameworks and use case policies should guide their plans.
By: Lindsey Wilkinson • Published Sept. 5, 2023
Generative AI adoption is picking up steam, but enterprises are not doing enough to mitigate risk.
Just one-third of enterprises are working to curb generative AI-induced security risks and even fewer are addressing inaccuracies, according to a McKinsey survey of 913 respondents whose organizations have adopted AI in at least one use case.
Businesses that choose to steamroll ahead in hopes of gaining a competitive edge are likely to find a host of unintended consequences waiting for them.
“People that have never had AI strategies before are jumping in,” said Bill Wong, principal research director of AI and data analytics at Info-Tech Research Group, during a July live event. “The presidents and the CEOs are asking technology folks, ‘When are we going to get our AI app?’”
Whether generative AI solutions are interacting with customers or internal employees, enterprises have the responsibility to make sure tools deliver on the purpose that’s been communicated to the staff. Less than one-third of employees feel their employer has been transparent about its use of AI, according to an Asana survey of more than 4,500 knowledge workers in the U.S. and United Kingdom in July.
Even amid the rush to adopt, leaders have to commit to letting ethical frameworks and generative AI policies guide their plans. As a baseline, technology leaders should craft policies and frameworks that cover both the intentions and consequences of any given use case, according to Frank Buytendijk, distinguished VP analyst at Gartner.
“Ethics, most importantly, is contextual, and that means a policy is never complete,” Buytendijk said. “Don’t treat this as a policy, but as a process.”
Define, try, assess
To create ethical guidelines for generative AI use, leaders should start by clearly defining principles and values, such as how the organization believes the technology should work and what it should do, Buytendijk said.
This is the step that Justin Skelton, SVP and CIO at Dine Brands Global, is currently executing at his organization.
“Before we lean into a new technology, we want to have a better understanding of how to use it, [and] what the various use cases would be for the company,” Skelton said. “A lot of it is around the compliance, controls and things that we want to have in place."
Dine Brands Global, which operates Applebee’s, IHOP and Fuzzy’s Taco Shop, is thinking critically about how data would be stored and retained before opening the floodgates for generative AI use.
After establishing the guiding principles, the second step is to operationalize them through use case reviews, according to Buytendijk.
Once organizations set that foundation, technology teams need to monitor for unintended consequences, such as changes in the model’s behavior.
Even over a short period of time, large language model behavior can change substantially. Researchers from Stanford and the University of California, Berkeley found OpenAI’s GPT-3.5 and GPT-4 models became significantly less accurate in some areas.
How teams respond to the deteriorating performance of an AI model can depend on the use case and the severity of the change.
If a retailer observed a model’s behavior slightly declining in a use case where generative AI offered customers suggestions on what to buy, the company wouldn’t necessarily need to take the model down, said Adnan Masood, chief AI architect at UST.
“That’s where the precision doesn’t have to be as high,” Masood said. “Whatever the tolerance the business has for that requirement varies. In healthcare, you cannot have that much of an error measurement.”
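The monitoring step described above can be reduced to a simple check: periodically score the model against a fixed evaluation set and compare the accuracy drop to a tolerance set per use case, reflecting Masood's point that acceptable error varies by domain. A minimal sketch, with hypothetical function names and thresholds:

```python
# Minimal sketch of use-case-aware drift monitoring for a deployed model.
# Function names and thresholds are illustrative assumptions, not any
# vendor's actual API.

def accuracy(predictions, labels):
    """Fraction of predictions matching the reference labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Tolerance for accuracy loss varies by use case, as Masood notes:
# retail suggestions can tolerate more drift than healthcare answers.
DRIFT_TOLERANCE = {
    "product_suggestions": 0.10,  # a 10-point drop before action
    "healthcare_triage": 0.01,    # near-zero tolerance for error
}

def has_drifted(use_case, baseline_acc, current_acc):
    """True if accuracy on a fixed evaluation set fell past tolerance."""
    return (baseline_acc - current_acc) > DRIFT_TOLERANCE[use_case]

# The same 5-point accuracy drop is acceptable for product suggestions
# but triggers review in a healthcare setting.
assert not has_drifted("product_suggestions", 0.92, 0.87)
assert has_drifted("healthcare_triage", 0.92, 0.87)
```

The point of keying tolerance to the use case, rather than using a single global threshold, is that it encodes the business's risk appetite directly into the monitoring process.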
Some of the largest companies are entering phases of generative AI adoption, from retailers to consulting firms and commercial real estate companies. Many others are dipping their toes in the water behind the scenes. Since ChatGPT’s public launch, more than 80% of the Fortune 500 companies have registered ChatGPT accounts, according to OpenAI.
With the amount of money and attention directed toward generative AI, leaders have to ground experiments and adoption plans in ethical guidelines and guardrails. To avoid creating an echo chamber, CIOs can call on other trusted leaders or create an advisory board to oversee experimentation and implementation plans, Buytendijk said.
Enterprises failing to address risks still have the option, for now, to stay the course. But as regulatory pressure continues, industry watchers expect some kind of mandatory requirements ensuring ethical, responsible use of generative AI.
Gartner predicts that by 2025, regulations will force enterprises to focus on AI ethics, transparency and privacy, according to research published in July. Buytendijk expects regulation to cut two ways: responsible AI use will no longer be optional, but regulatory compliance represents a fairly low level of ethical thinking.
“It’s good if it becomes a mandatory discussion, but it’s challenging because it becomes a more technical thing where people want to get it over with, instead of being intrinsically motivated to do the right thing,” Buytendijk said.
Of course, it doesn’t have quite the power of “Leave the gun, take the cannoli,” the brilliantly improvised line delivered by Richard Castellano as capo Peter Clemenza in The Godfather. But this simple phrase could be the most important advice you’ll receive when it comes to an investment in an AI/ML/BI project.
Look past the magic; don’t get caught up in FOMO; and, as with any other technology investment, only buy the solution that best solves a specific business problem. Stop being dazzled by 30-second video loops on LinkedIn and start looking for use cases that match one of your specific business problems. Like any other technology, AI has the power to be a transformative solution – but only when applied correctly. Like every other technology, it’s also going to look different in just a few years. And like every other rapidly growing sector, the crowd of providers in this space will thin out dramatically.
To say there’s a great deal of hype surrounding AI is beyond an understatement: right now, many providers in the sector likely have ‘hype’ as a line item in their profit plan. Buyers in today’s market are met by a perfect storm of forces – all aligned in their effort to push product, and not at all aligned with the ultimate success of the implementation.
Consultants great and small, pundits, talking heads, and all manner of advisors are admonishing business leaders to ‘get AI’. Read their articles, posts, whitepapers and columns carefully and you’ll find they all have an urgency, a level of alarmism, an admonishment to get something implemented. But what you won’t find are any specifics. No suggestions about building a business case; no advice on problem identification; no mention of success criteria. Simply select a provider, buy a platform, plug it in, and behold the magic of AI.
Not surprisingly, providers are completely on board with this mindset. Typically, there aren’t a lot of qualifying questions asked of potential buyers (What are you trying to accomplish? Is your organization ready for this technology? What will success look like?). But there’s no shortage of ‘low code’, ‘no code’, and ‘plug-and-play’ promises – and that sits well with the large number of non-technical buyers who have bypassed the typical IT-driven project approach. Project plans and implementation schedules are old school; besides, who needs ‘agile’ when you have ‘magic’?
We can’t forget the remaining component of this triumvirate: early-round investors determined to validate the sizeable valuations placed on AI start-ups that are often only one generation out of the laboratory. The primary lever for those valuations is revenue, so the often inexperienced management teams of their portfolio companies receive frequent reminders to focus on sales. Hitting a subscription target can greatly influence the value of those founders’ shares. But in this very short-term view, evaluating potential clients for readiness, ensuring all data is ingested, tracking model drift and other post-sales activities just don’t hold the same kind of magic.
This technology troika is self-supporting, all held aloft on the hot air of hype. Pundits and consultants feel safe urging every company onto the AI bandwagon, with vague threats of being branded a technophobe should anyone consider not complying; vendors aren’t turning away clients whose only qualification may be a demo from a self-serve website; and investors cite all the positive press as they pressure their fledgling companies to produce.
So it should come as no surprise that a large portion of companies taking on an AI/ML/BI project end up disappointed with the results. Many didn’t begin the project with a specific business problem to be solved. Others didn’t define what success would look like, other than in vague terms like ‘more revenue’ or ‘lower expenses’ or ‘greater insights’. Some companies installed large language models and simply assumed that their middle managers and front-line staff would instinctively know what to do with these powerful tools, as if handing them scalpels would turn them all into surgeons.
Perhaps the single largest contributor to this buyer’s remorse is the failure to include unstructured data (images, videos, chat logs, call recordings, scanned documents, etc.) in most installations. Some 80% of the new data produced every day is unstructured, yet it’s often overlooked: platform providers are reluctant to make their products seem more complex, and clients typically don’t press them on the issue. Your proprietary data is your ultimate competitive advantage; technology will continue to change, but your data is the constant you can rely on.
Based on a perception that other companies are making all this technology work, due to pressure from a board of directors to ‘leverage AI’, driven by a personal fear of not being relevant, many senior leaders find themselves unable to pump the brakes, to call an audible, to simply stop staring in awe at the magic of AI – and to focus instead on the solutions that are hidden in its glare.
Clemenza’s advice to ‘leave the gun’ was intended to symbolize walking away from everything surrounding the execution of Paulie. Everything that led up to that fateful event was now in the past, and the betrayal and uncertainty were now eliminated. There was a brighter future ahead, and Rocco had a chance to grab a part of that future – and it started by taking the cannoli.
This is a similar offer of advice: leave the magic, take the solution. And it’s an offer you can’t refuse.
To discover how Liberty Source, through its DataInFormation suite of solutions, helps you achieve superior performance from your advanced technology investment contact Joseph Bartolotta, CRO at [email protected].
AI adoption in software development is slow to take root
Just 1 in 4 organizations use AI in the software development lifecycle, but those who have adopted it are chasing efficiency and faster cycle times, a GitLab survey found.
By: Lindsey Wilkinson • Published Sept. 8, 2023
AI is not yet widely integrated in the software development lifecycle, but those who do use it turn to AI regularly, according to a GitLab survey of 1,000 individual contributors and leaders in IT operations, development and security.
Just one-quarter of organizations are using AI in software development, and nearly half of them use it multiple times a day, according to the report published Tuesday. Adopters turned to AI to improve efficiency, speed cycle times and increase innovation.
Software developers are using the technology to power natural language chatbots and automated testing, as well as for generating summaries of changes made to code, GitLab survey data shows.
While generative AI adoption has put pressure on tech leaders to find high-value use cases in IT operations, the integration of AI into software development predates generative AI's rise. Tech leaders can't ignore AI's ability to shorten the time it takes to write code and to develop and test systems.
More than half of developers said their organizations are interested in AI-powered code generation and code suggestions, according to the GitLab report.
PwC fine-tuned OpenAI’s technology to serve as a conversational AI assistant for employees. The professional services company expects the tool to help workers throughout the organization with daily tasks as it is being rolled out in phases.
“My excitement is around generating code, code reviews, testing or quality assurance,” Scott Likens, global AI and innovation technology leader at PwC, told CIO Dive in August. “We can accelerate what our engineers are doing.”
Generative AI upgrades have improved popular tools, like GitHub’s Copilot. The AI-based coding assistant generated an average of 46% of code for developers using it as of February, up from 27% the previous June. However, the technology's advancement has raised new questions.
Apple reportedly restricted internal use of GitHub’s Copilot in May, citing concerns over confidential corporate data. There are also fears regarding copyright infringement when it is unclear what data sets were used to train a model and how a model came to its generated output.
“Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI,” said Brad Smith, vice chair and president at Microsoft, and Hossein Nowbar, CVP and chief legal officer at Microsoft, in a blog post Thursday. “This is understandable, given recent public inquiries by authors and artists regarding how their own work is being used in conjunction with AI models and services.”
So long as customers use guardrails and content filters built into its products, Microsoft said it will assume responsibility for the potential legal risks involved. The added protection comes on the heels of an inquiry from the U.S. Copyright Office last week as it considers whether legislative or regulatory action on AI-generated content is warranted.
Generative AI’s momentum casts uncertainty over the future of the IT service desk
Experts wonder what role these tier one technologists will play in IT departments moving forward.
By: Lindsey Wilkinson • Published Aug. 21, 2023
Generative AI has prompted widespread discussion about what role a human should play, if any, with the technology as it grows better at rote tasks. This is particularly true in lower-level IT service desk positions.
The majority of analysts and tech leaders say generative AI will inevitably disrupt some jobs across the economy. But when asked directly about the technology replacing workers, most are hesitant to entertain the idea; others say it's possible but not yet advisable.
“It depends, but if you have a good knowledge base and it’s well trained, it can really replace — for the most part — tier one service desk specialists, which is perhaps a little bit scary for some people,” said Mark Tauschek, VP, infrastructure and operations research at Info-Tech Research Group.
Tier one specialists are often the front lines of an IT department, connecting with users and performing basic troubleshooting. The role is also an entry point for aspiring technologists, with the average tier one help desk worker bringing in an annual salary of around $48,000 in the U.S., according to a Glassdoor data analysis last updated in June.
Tier one service desk employees, despite their entry-level role, can have significant influence on the rest of the company’s perception of its IT department. Their delivery of solutions and treatment of end users is the foundation for employee experience.
Yet, experts wonder what role these specialists will play in technology departments moving forward, even if many analysts believe fears of imminent widespread job loss are overblown.
More than 1 in 4 professionals say workforce displacement is one of their concerns regarding generative AI implementation, according to a June Insight Enterprises survey conducted by The Harris Poll of 405 U.S. employees who serve as director or higher within their company.
Jeremy Rafuse, VP and head of digital workplace at software development company GoTo, has heard conversations among IT workers at the company about fears of job disruption as teams look to automate some tasks with generative AI.
“I think it’s hard not to when you’re talking about automating certain workloads instantly,” said Rafuse, who oversees the IT department at GoTo. “We are trying to promote that this is an opportunity to learn something new and not only is this going to potentially upskill your job so you could be working on different things, but it's going to create jobs that don’t even exist now.”
GoTo, a sister company of LastPass, has automated routine tasks within the service desk for years. More recently, the IT team has dedicated time to learn about generative AI and identify low-risk use cases, Rafuse said.
“We don’t want to just hide under the blanket,” Rafuse said. “But teams are aware, and they’re pretty optimistic about this being a chance to learn something new.”
In the service desk ecosystem, the company wants to use generative AI to analyze large amounts of data, identify trends related to satisfaction ratings and pinpoint customer pain points.
“For us to not stay ahead of this would be really foolish,” Rafuse said, a sentiment that most tech executives relate to.
Nearly all — 95% — of tech executives say they feel pressured to adopt generative AI in the next six months to a year, according to a July IDC survey of 900 respondents sponsored by Teradata. More than half said they were under “high or significant” levels of pressure.
Just because you can doesn’t mean you should
Despite fears of generative AI technology replacing workers, the makers of widely used models reject the narrative that LLMs and generative AI capabilities should, or even can, stand in for an employee.
“Humans should always be involved with AI systems,” Sandy Banerjee, GTM lead at Anthropic, said in an email. “Even where an AI system is answering questions or triaging tickets, there should be a [quality assurance] process where humans are keeping track of the system and evaluating its outputs.”
AI is not without its faults, after all. In research Anthropic published, the company found Claude models still get facts wrong and fill in gaps of knowledge with fabrication, emphasizing models should not be used in high-stakes situations or when an incorrect answer could cause harm.
Researchers from Stanford and UC Berkeley found models made by OpenAI weren’t necessarily getting better over time. In some cases they encountered, performance and accuracy were significantly worse, signaling a need for continuous monitoring.
Even so, models from generative AI start-up Anthropic are available for enterprise use off the shelf and through third-party services, such as Slack. As providers of generative AI models continue to release updates in beta and refer to tools as works in progress, they are simultaneously aiming for enterprise use.
OpenAI equated its code interpreter plugin to a “very eager junior programmer working at the speed of your fingertips” in a blog post when it first unveiled ChatGPT plugins in March, but the tool can still generate incorrect information or “produce harmful instructions or biased content,” according to ChatGPT’s web version homepage.
Despite vendor warnings, decisions to replace workers with generative AI hinge on what it costs and how organizations value particular roles. The National Eating Disorder Association made headlines when it decided to shut down its human-run national helpline and, instead, use a chatbot called Tessa. After users posted their experiences with Tessa recommending weight loss and diet restriction on social media platforms, the nonprofit pulled the tool in June.
At GoTo, there have been conversations and meetings to set expectations for what generative AI technology can and cannot do. Rafuse underlined that the technology should be used as a tool, not relied on outright, given its tendency to get information wrong.
“You know sometimes you only have that one chance, and if you tell somebody the wrong thing, they’re not going to come back to you again,” Rafuse said. “First impressions last forever.”
Inaccurate information and bias aren’t the only risks. Before and throughout rollouts, Melanie Siao-Si, VP of international care and services at GoDaddy, said the team made sure to communicate usage guidelines to ensure employees did not expose proprietary information.
The FTC is currently investigating OpenAI to determine whether the company engaged in unfair or deceptive data security practices.
“Especially in the care organization, we’re learning our way into this, hence the experimentation,” Siao-Si told CIO Dive. “Obvious challenges include data security and privacy and potentially threat actors using the same technology to target our customer care organization.”
State of play
In the eight months since ChatGPT debuted, most tech leaders characterize the perception of generative AI technology as somewhere between the peak of inflated expectations and the trough of disillusionment on Gartner's hype cycle.
The hype cycle identifies five key phases: the innovation trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment and the plateau of productivity. Gartner placed generative AI on the peak of inflated expectations for 2023 on Wednesday.
“Many technologies really follow the cycle and I don’t think generative AI will be any different,” Justin Falciola, SVP and chief technology and insights officer at Papa Johns, said. “The only thing is no one knows exactly where you are.”
Tech leaders are tasked with pushing through the hype to find value. At GoDaddy, Siao-Si is leading the customer care team’s AI experimentation as the company looks to improve customer experience by offering customers a variety of ways to make contact and resolve questions.
On product pages, the team manages a prompt library to help customers explore different ways to set up their business, according to Siao-Si. AI-powered bots also help customers fine-tune what kind of help they need before receiving guidance.
Enterprise software provider Atlassian updated its capabilities to support tone adjustments in responses produced by its generative AI tool in Jira Service Management, the company announced in April among a slew of updates related to its Atlassian Intelligence layer.
In a GIF product demonstration, a human agent asks Atlassian Intelligence to adjust the tone of their response to be more empathetic, altering “Your new laptop will arrive by next Tuesday” into a five-sentence response. Options to adjust tone also included reassuring, casual, friendly and professional.
“The reminder here is that the user is always in control,” Sherif Mansour, head of product, AI at Atlassian, told CIO Dive in a demonstration in April. “They can cancel and not accept the suggestions, [or] they can accept, edit and modify it.”
As businesses prepare systems for generative AI implementation, leaders will have to contend with knowledge gaps throughout the organization and differences in user preference. Inadvertent impacts on customer experience can also occur if businesses don’t provide a seamless transition from chatbot to human touchpoint.
“If people don’t like it, they will avoid it,” Chris Matchett, senior director analyst at Gartner, said. “That’s why some of the advice we give them is to not have it as a bouncer or a doorman to the club that you have to get past before you can speak to a human.”
Generative AI-powered self-service channels should be implemented as an option for end users, Matchett said. “Let people choose it when they want, rather than force it down people’s throats.”
Caution tempers enterprise enthusiasm for generative AI
The road to adoption of the new technology runs through the existing tech stack.
By: Roberto Torres • Published July 26, 2023
Almost 3 in 5 executives describe their leadership teams as strong advocates for the adoption of generative AI, according to a Capgemini survey released this month. The IT services and consulting firm polled executives at 1,000 global organizations.
Despite interest in the technology, holdouts remain: 2 in 5 organizations are taking a “wait-and-watch” stance as the technology develops.
Organizations interested in deploying the technology already have specific use cases in mind. Chatbots, data analytics and text processing are the three applications deemed most relevant by executives.
Executives have grappled with a monthslong rush toward new products and features in the emerging category of generative AI. And the deluge, analysts say, is only getting started.
Executives are closely watching the potential productivity enhancements that stem from the technology, but also the absence of guardrails and explainability, according to Scott Bickley, advisory practice lead at Info-Tech Research Group.
"I think CEOs should be thinking about how they can responsibly evaluate and implement generative AI solutions, or push them into solutions where their risk is very low," said Bickley.
Barriers to adoption have emerged almost as rapidly as the technology has developed. Governance, cost, data quality and maturity of the underlying IT stack all play a role in slowing down enterprise adoption plans.
But AI implementation has a higher chance of success when viewed as an expansion of the existing tech stack. Microsoft is among a long list of vendors baking generative AI capabilities into their product suite.
"Most organizations are going to benefit through looking at how [generative AI] can be additive from a productivity perspective," said Bickley. "Probably within enterprise software that they're already using today and then extending that use."
Businesses hunt for ROI in generative AI deployments
Measuring the time and productivity gains from new technology implementations is critical.
By: Lindsey Wilkinson • Published June 29, 2023
The most compelling promise of generative AI is the implication that adoption brings monetary benefit for businesses — and ultimately the whole economy.
Productivity boosts from AI-powered developer tools and services could increase global GDP by more than $1.5 trillion by 2030, according to GitHub research published Tuesday. The code-hosting platform, itself a maker of AI-based tools such as GitHub Copilot, calculated the potential GDP boost by analyzing a sample of around 935,000 users and their individual productivity gains.
Because generative AI is in a nascent state, projected impacts on revenue, and therefore the economy, are difficult to gauge accurately. Tech leaders are leaning on what they know today, as they experiment with the technology and work to set expectations across the company.
At IT networking company Juniper Networks, CIO Sharon Mandell has worked with AI technologies throughout her career, typically viewing AI as a way to get more mileage out of tech spend.
“If you can find tools that help shift costs, it can free up money in one place to start spending in new ones, which is something I think most CIOs relate to,” Mandell said. “I think our careers are very much about: 'How do I do all the things I'm doing today for less money than I did yesterday?'”
Regardless of the project, Mandell said ROI is almost always a factor and metric used to measure success, especially when the CFO is a part of the conversation. For generative AI implementation, the hard part is turning the value of time and productivity into an ROI measurement.
To get the full picture, Mandell said there needs to be a focus on measuring productivity, becoming more agile and accelerating products to market speed. “That ultimately turns into dollars,” Mandell said.
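Mandell's framing, converting time and productivity gains into an ROI measurement, can be made concrete with a back-of-the-envelope calculation. The figures below are purely illustrative assumptions, not Juniper's numbers:

```python
# Back-of-the-envelope ROI for a generative AI productivity tool,
# converting time saved into dollars. All figures are illustrative
# assumptions for the sake of the example.

def genai_roi(users, hours_saved_per_user_month, loaded_hourly_rate,
              tool_cost_per_user_month):
    """Return (monthly_net_benefit, roi_pct) for a productivity tool."""
    benefit = users * hours_saved_per_user_month * loaded_hourly_rate
    cost = users * tool_cost_per_user_month
    net = benefit - cost
    roi_pct = 100 * net / cost
    return net, roi_pct

# 200 developers each saving 4 hours a month at a $75 loaded hourly
# rate, against a $30-per-user-per-month tool subscription.
net, roi = genai_roi(200, 4, 75, 30)
print(f"Net monthly benefit: ${net:,.0f}, ROI: {roi:.0f}%")
# prints: Net monthly benefit: $54,000, ROI: 900%
```

The hard part in practice, as Mandell notes, is not the arithmetic but defending the hours-saved estimate, which is why productivity measurement comes first.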
With macroeconomic indicators still sending mixed signals, the idea of turning tech investment into greater efficiency appeals to executives and board members.
Jennifer Piepszak, co-CEO of consumer and community banking at JPMorgan Chase, called it a “game of inches” at a conference earlier this month.
“Every day, we will just get a little bit better and leverage tools like AI and ML to be able to do that,” Piepszak said during the conference.
Companies could expect to see ROI from explainable AI in two to four years, according to Forrester data from October. Yet, 2 in 3 IT pros said their organization planned to increase spending on emerging technologies this year.
In the software development space, AI-based pair programming has been billed as a key accelerator of productivity. But businesses are also looking at other areas for AI implementation outside of IT.
“A lot of people are looking into sales and marketing to create more customized types of proposals and to create better, more personalized user experiences,” Bill Wong, principal research director of AI and data analytics at Info-Tech Research Group, said. “There’s a lot of focus in the sales, marketing and customer experience areas.”
ROI isn’t the only metric businesses are using to measure the success of generative AI projects. Proof-of-concept metrics also include scalability, ease of use, quality and accuracy of responses, explainability and total cost of ownership, Wong said.