ChatGPT, Claude, Bard, and other public-facing generative AI chatbots based on large language models (LLMs) are nice enough, but they’re general-purpose and not well integrated into enterprise workflows. Employees either have to switch to a separate app, or companies have to spend time and effort wiring the functionality into their own applications via application programming interfaces (APIs). Plus, to use ChatGPT and other genAI chatbots effectively, employees have to learn prompt engineering.
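For companies that do take the do-it-yourself route, that API work typically means calling the model vendor’s hosted endpoint from their own software. Here is a minimal sketch in Python of what that might look like, assuming an OpenAI-style chat-completions endpoint and an API key in the OPENAI_API_KEY environment variable; the model name, prompt, and helper function are illustrative, not a recommendation:

```python
# Minimal sketch: calling a hosted LLM from an internal tool to draft an email reply.
# Assumes an OpenAI-style chat-completions endpoint and an API key in the
# OPENAI_API_KEY environment variable; model name and prompts are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def draft_reply(customer_email: str) -> str:
    """Send the customer's email to the model and return a suggested reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # illustrative model choice
            "messages": [
                {"role": "system", "content": "You draft polite, concise replies to customer emails."},
                {"role": "user", "content": customer_email},
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(draft_reply("Hi, my invoice from last month seems to be missing a line item."))
```

Even a toy example like this hints at the overhead embedded genAI is meant to remove: credentials, prompt design, and error handling become the software vendor’s problem rather than each customer’s.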
Embedded generative AI, by comparison, promises to put the new AI functionality right where employees need it most: in their existing word processors, spreadsheets, email clients, and other enterprise productivity software, without any work on the part of their employers. Done right, the embedded features should feel seamless and intuitive, letting users get all the benefits without genAI’s steep learning curve.
Based on a recent survey of technology decision-makers in North America and the UK, Forrester predicts that by 2025, nearly all enterprises will be using generative AI for communications support, including writing and editing. In fact, 70% of the survey respondents said they were already using generative AI for most or all of their writing or editing.