Venture capital firms have invested over $1.7 billion in generative AI solutions over the last three years, with AI-enabled drug discovery and AI software coding receiving the most funding.
“Early foundation models like ChatGPT focus on the ability of generative AI to augment creative work, but by 2025, we expect more than 30% — up from zero today — of new drugs and materials to be systematically discovered using generative AI techniques,” says Brian Burke, Research VP for Technology Innovation at Gartner. “And that is just one of numerous industry use cases.”
Five industry use cases for generative AI
Generative AI can explore many possible designs of an object to find the right or most suitable match. It not only augments and accelerates design in many fields; it also has the potential to “invent” novel designs or objects that humans might otherwise have missed.
Marketing and media are already feeling the impacts of generative AI. Gartner expects:
- By 2025, 30% of outbound marketing messages from large organizations will be synthetically generated, up from less than 2% in 2022.
- By 2030, a major blockbuster film will be released with 90% of the film generated by AI (from text to video), up from 0% in 2022.
AI innovation is accelerating across the board, creating numerous use cases for generative AI in various industries, including the following five.
No. 1: Generative AI in drug design
No. 2: Generative AI in material science
No. 3: Generative AI in chip design
No. 4: Generative AI in synthetic data
No. 5: Generative design of parts
Embedding the right technologies to unleash generative AI
Generative AI enables systems to create high-value artifacts, such as video, narrative, training data and even designs and schematics.
Generative Pre-trained Transformer (GPT), for example, is the large-scale natural language technology that uses deep learning to produce human-like text. The third generation (GPT-3), which predicts the most likely next word in a sentence based on patterns absorbed from its training data, can write stories, songs and poetry, and even computer code — and enables ChatGPT to do your teenager’s homework in seconds.
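The “predict the most likely next word” idea can be illustrated with a deliberately tiny sketch: count which words follow which in a toy corpus, then predict the most frequent follower. This is an assumption-laden simplification — GPT-3 uses a deep transformer over subword tokens and billions of parameters, not literal word counts — but the prediction objective is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word bigrams in a tiny corpus,
# then predict the most frequent follower of a given word.
# (GPT-3 learns the same objective with a deep neural network,
# not explicit counts.)
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    # Return the word most often observed after `word`.
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> "cat"
```

Generating a sentence is then just repeated next-word prediction: feed each predicted word back in as the new context, which is exactly the autoregressive loop large language models run at scale.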
Beyond text, digital-image generators, such as DALL·E 2, Stable Diffusion and Midjourney, can generate images from text.
There are a number of AI techniques employed for generative AI, but most recently, foundation models have taken the spotlight.
Foundation models are pretrained on general data sources in a self-supervised manner and can then be adapted to solve new problems. They are based mainly on transformer architectures — a type of deep neural network that computes a numerical representation of its training data.
Transformer architectures learn context and, thus, meaning, by tracking relationships in sequential data. Transformer models apply an evolving set of mathematical techniques, called attention or self-attention, to detect subtle ways even distant data elements in a series influence and depend on each other.
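The self-attention mechanism described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a real transformer layer: production models apply learned projection matrices to form separate queries, keys and values, and run many attention heads in parallel; here the raw embeddings serve all three roles.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: each row becomes a probability distribution.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (sequence_length, d_model) token embeddings.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # pairwise relevance between positions
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ X                  # each output mixes all positions

X = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8 dimensions
out = self_attention(X)
print(out.shape)  # (4, 8): one context-mixed vector per token
```

The key point the sketch shows is why attention captures relationships between even distant elements: every output row is a weighted combination of *all* positions in the sequence, with the weights computed from pairwise similarity rather than from proximity.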
Don’t forget the risks of generative AI
Work with security and risk management leaders to proactively mitigate the reputational, counterfeit, fraud and political risks that malicious uses of generative AI present to individuals, organizations and governments.
Also consider implementing guidance on the responsible use of generative AI through a curated list of approved vendors and services, prioritizing those that strive to provide transparency on training datasets and appropriate model usage, and/or offer their models in open source.