- AWS entered the generative AI enterprise chatbot fray Tuesday, unveiling its multifunctional natural-language assistant Amazon Q at AWS re:Invent 2023.
- The assistant, built on the hyperscaler’s multi-model Bedrock platform, is trained on “17 years’ worth of AWS knowledge, so it can transform the way you think, optimize and operate applications and workloads on AWS,” CEO Adam Selipsky said during the event.
- Q has code generation, business intelligence and customer service functions, and it integrates with Salesforce, Microsoft, Google, Slack and other enterprise applications, Matt Wood, VP of product at AWS, said during the keynote.
The hyperscaler battle for cloud dominance spilled over into the generative AI space earlier this year, as Microsoft and then Google deployed enterprise-grade chatbot tools.
AWS will integrate Q with Amazon’s CodeWhisperer developer tool, QuickSight analytics platform and Connect contact center solution, Selipsky said. Currently, it is available to preview via the AWS console in the company’s U.S. East and West Regions.
The multifunctional assistant beefs up AWS’ growing arsenal of generative AI capabilities, centered around Bedrock, a platform housing LLMs built by Amazon, Anthropic, AI21 Labs, Cohere, Meta and Stability AI.
AWS doubled down on its multi-model strategy and bolstered its AI infrastructure Tuesday, deepening its existing partnership with AI startup Anthropic.
Anthropic will use Amazon’s Trainium and Inferentia chip technologies to train future generations of its models, Selipsky said. In return, Bedrock customers will gain early access to Anthropic model enhancements and fine-tuning capabilities.
Amazon’s $4 billion minority investment in Anthropic, announced in September, followed Microsoft’s alliance with OpenAI, the company that sparked the generative AI boom with the release of ChatGPT last year. Anthropic previously garnered funding from Google, Salesforce and Zoom.
Despite its deepening ties to the AI startup, AWS remains committed to diversifying its model portfolio.
“Customers are finding that different models actually work better for different use cases or on different sets of data,” Selipsky said.
“You need a real choice of model providers as you decide who's got the best technology, and also who has the dependability that you need in a business partner — I think the events of the past 10 days have made that very clear,” he added, referencing OpenAI’s recent leadership upheaval.