Generative AI isn’t just a technology or a business case — it is a key part of a society in which people and machines work together.
Generative AI can learn from existing artifacts to generate new, realistic artifacts (at scale) that reflect the characteristics of the training data but don’t repeat it. It can produce a variety of novel content, such as images, video, music, speech, text, software code and product designs.
Generative AI uses a number of techniques that continue to evolve. Foremost are AI foundation models, which are trained on a broad set of unlabeled data that can be used for different tasks, with additional fine-tuning. Complex math and enormous computing power are required to create these trained models, but they are, in essence, prediction algorithms.
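The "prediction algorithm" framing can be made concrete with a deliberately tiny sketch. The following toy next-word predictor just counts which word follows which in a corpus; real foundation models do the same job of predicting the next token, but with billions of learned parameters rather than simple counts. All names here are illustrative, not any vendor's API.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent successor. This is NOT how transformers
# work internally -- it only illustrates "next-token prediction".
def train(corpus: str) -> dict:
    successors = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        successors[current][nxt] += 1
    return successors

def predict_next(model: dict, word: str):
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # → "cat" (its most frequent successor)
```

Scaling this idea up — predicting the next token from context, learned from vast unlabeled text — is essentially what the "complex math and enormous computing power" buy.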
Today, generative AI most commonly creates content in response to natural language requests; no knowledge of code, or ability to write it, is required.
Foundation models, including generative pretrained transformers (which drive ChatGPT), are among the AI architecture innovations that can be used to automate, augment humans or machines, and autonomously execute business and IT processes.
The benefits of generative AI include faster product development, enhanced customer experience, and improved employee productivity, but the specifics depend on the use case. End users should be realistic about the value they are looking to achieve, especially when using a service as is, which has major limitations. Generative AI creates artifacts that can be inaccurate or biased, making human validation essential and potentially limiting the time it saves workers. Gartner recommends connecting use cases to KPIs to ensure that any project improves operational efficiency, creates net new revenue or delivers better experiences.
In a recent Gartner webinar poll of more than 2,500 executives, 38% indicated that customer experience and retention are the primary purposes of their generative AI investments. This was followed by revenue growth (26%), cost optimisation (17%), and business continuity (7%).
The risks associated with generative AI are significant and rapidly evolving. A wide array of threat actors have already used the technology to create “deep fakes” or copies of products, and generate artifacts to support increasingly complex scams.
ChatGPT and other tools like it are trained on large amounts of publicly available data. They are not designed to comply with the General Data Protection Regulation (GDPR), copyright law or other regulations, so it’s imperative to pay close attention to your enterprise’s use of these platforms.
Oversight risks to monitor include:
- Lack of transparency. Generative AI models, including those behind ChatGPT, are unpredictable, and not even the companies that build them always understand everything about how they work.
- Accuracy. Generative AI systems sometimes produce inaccurate and fabricated answers. Assess all outputs for accuracy, appropriateness and actual usefulness before relying on or publicly distributing information.
- Bias. You need policies or controls in place to detect biased outputs and deal with them in a manner consistent with company policy and any relevant legal requirements.
- Intellectual property (IP) and copyright. There are currently no verifiable data governance and protection assurances regarding confidential enterprise information. Users should assume that any data or queries they enter into ChatGPT and its competitors will become public information, and we advise enterprises to put in place controls to avoid inadvertently exposing IP.
- Cybersecurity and fraud. Enterprises must prepare for malicious actors’ use of generative AI systems for cyber and fraud attacks, such as those that use deep fakes for social engineering of personnel, and ensure mitigating controls are put in place. Confer with your cyber-insurance provider to verify the degree to which your existing policy covers AI-related breaches.
- Sustainability. Generative AI uses significant amounts of electricity. Choose vendors that reduce power consumption and leverage high-quality renewable energy to mitigate the impact on your sustainability goals.
The field of generative AI will progress rapidly in both scientific discovery and technology commercialisation, and use cases are already emerging in creative content, content improvement, synthetic data, generative engineering and generative design.
High-level practical applications in use today include the following.
- Written content augmentation and creation: Producing a “draft” output of text in a desired style and length
- Question answering and discovery: Enabling users to locate answers to input, based on data and prompt information
- Tone: Manipulating text to soften language or professionalise it
- Summarisation: Offering shortened versions of conversations, articles, emails and webpages
- Simplification: Breaking down titles, creating outlines and extracting key content
- Classification of content for specific use cases: Sorting by sentiment, topic, etc.
- Chatbot performance improvement: Improving entity extraction, whole-conversation sentiment classification and generation of journey flows from general descriptions
- Software coding: Code generation, translation, explanation and verification
Emerging use cases with long-term impacts include:
- Creating medical images that show the future development of a disease
- Synthetic data helping augment scarce data, mitigate bias, preserve data privacy and simulate future scenarios
- Applications proactively suggesting additional actions to users and providing them with information
- Legacy code modernisation
Your workforce is likely already using generative AI, either on an experimental basis or to support their job-related tasks. To avoid “shadow” usage and a false sense of compliance, Gartner recommends crafting a usage policy rather than enacting an outright ban.
Keep the policy simple — it can be as streamlined as three don’ts and two do’s if you’re using ChatGPT or another off-the-shelf model:
- Don’t input any personally identifiable information.
- Don’t input any sensitive information.
- Don’t input any company IP.
- Do turn off history if using external tools (like ChatGPT) that enable that choice.
- Do closely monitor outputs, which are subject to sometimes subtle but meaningful hallucinations, factual errors, and biased or inappropriate statements.
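The three "don'ts" above can be partly enforced in software before a prompt ever leaves the enterprise. The sketch below is a minimal, assumed pre-submission filter — the patterns shown (email addresses, US-style Social Security numbers) are illustrative only, and a real deployment would need a far more complete PII and IP detector.

```python
import re

# Illustrative pre-submission screen (assumed patterns, not a complete
# PII detector): flags prompts containing obvious email addresses or
# US-style Social Security numbers before they reach an external service.
BLOCKED_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(prompt)]

violations = screen_prompt("Summarise this email from jane.doe@example.com")
print(violations)  # → ['email address']
```

A screen like this catches accidental leaks; it does not replace the policy itself or the output monitoring described above.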
If the company is using its own instance of a large language model, the privacy concerns that inform limiting inputs go away. However, the need to keep a close eye on outputs remains.
Many enterprises have generative AI pilots for code generation, text generation, or visual design underway. To establish a pilot, you can take one of three routes:
- Off-the-shelf. Use an existing foundational model directly by inputting prompts. You might, for example, ask the model to create a job description for a software engineer or suggest alternative subject lines for marketing emails.
- Prompt engineering. Connect your own software to a foundation model and control what it sends and receives. This technique, the most common of the three, lets you use public services while protecting IP and drawing on private data to produce more precise, specific and useful responses. Building an HR benefits chatbot that answers employee questions about company-specific policies is an example of prompt engineering.
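The prompt-engineering route can be sketched in a few lines. In the example below, private company data is retrieved and spliced into the prompt that would be sent to a public model; the retrieval step is a naive keyword lookup, the model call is stubbed out, and names such as `POLICY_DOCS` are hypothetical, not a real API.

```python
# Sketch of the prompt-engineering pattern: private data is injected into
# the prompt before it goes to a hosted model. Everything here is a stand-in
# for real retrieval and a real API client.
POLICY_DOCS = {
    "parental leave": "Employees receive 16 weeks of paid parental leave.",
    "remote work": "Employees may work remotely up to three days per week.",
}

def retrieve(question: str) -> str:
    """Naive keyword lookup standing in for a real retrieval system."""
    matches = [text for topic, text in POLICY_DOCS.items()
               if topic in question.lower()]
    return "\n".join(matches) or "No matching policy found."

def build_prompt(question: str) -> str:
    context = retrieve(question)
    return (
        "Answer the employee's question using only the policy excerpts below.\n"
        f"Policy excerpts:\n{context}\n"
        f"Question: {question}\n"
    )

# The resulting string is what your software would send to the model's API.
print(build_prompt("How long is parental leave?"))
```

Because the model only sees the excerpts you choose to include, this pattern both grounds the answer in company-specific facts and limits what private data leaves the enterprise.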
- Custom. Building a new foundation model goes beyond the reach of most companies, but it’s possible to tune a model. This involves adding a layer of proprietary data in a way that significantly alters the way the foundation model behaves. While costly, customising a model offers the highest level of flexibility.
The generative AI marketplace is on fire. Beyond the big platform players, there are hundreds of specialty providers funded by ample venture capital and a wave of new open-source models and capabilities. Enterprise application providers, such as Salesforce and SAP, are building LLM capabilities into their platforms. Organisations like Microsoft, Google, Amazon Web Services (AWS), and IBM have invested hundreds of millions of dollars and massive compute power to build the foundation models on which services like ChatGPT and others depend.
Gartner considers the current major players to be as follows:
- Google has two large language models: PaLM, a multimodal model, and Bard, a pure language model. It is embedding its generative AI technology into its suite of workplace applications, which will immediately put it in the hands of millions of people.
- Microsoft and OpenAI are marching in lockstep. Like Google, Microsoft is embedding generative AI technology into its products, but it has the first-mover advantage and buzz of ChatGPT on its side.
- Amazon has partnered with Hugging Face, which has a number of LLMs available on an open-source basis, to build solutions. Amazon also has Bedrock, which provides access to generative AI on the cloud via AWS, and has announced plans for Titan, a set of two AI models that create text and improve searches and personalisation.
- IBM has multiple foundation models and a strong ability to fine-tune both its own and third-party models by injecting data, then retraining and deploying the model.
This blog is an extract from the Gartner site; comments and credits should go to the original authors from Gartner.