NVIDIA unveils AI Foundations, its customizable Gen-AI cloud service
The age of enterprise AI has come crashing down upon us in recent months. Public infatuation with ChatGPT since its release last November has opened the floodgates of corporate interest and set off an industry-wide land grab, with every major tech entity vying to stake its claim in this burgeoning market by incorporating generative AI features into its existing products. Heavyweights including Google, Microsoft, Meta and Baidu are already jockeying their large language models (LLMs) for market dominance, while everybody else, from Adobe and AT&T to BMW and BYD, scrambles to find uses for the revolutionary technology.
NVIDIA’s newest cloud services offering, AI Foundations, will allow businesses lacking the time and money to develop their own models from scratch to “build, refine and operate custom large language models and generative AI models that are trained with their own proprietary data and created for their unique domain-specific tasks.”
These services include NeMo, NVIDIA’s customizable large language model framework; BioNeMo, a drug- and molecule-discovery-focused fork of NeMo built for the medical research community; and Picasso, an AI capable of generating images, video and “3D applications… to supercharge productivity for creativity, design and digital simulation,” according to Tuesday’s release. Both flavors of NeMo are still in early access and Picasso remains in private preview despite Tuesday’s news, so it’ll be a minute before any of them are released to the wider public. NeMo and Picasso both run on NVIDIA’s new DGX Cloud platform and will eventually be accessible through an online portal.
These enterprise-facing cloud services function as blank templates of a sort: companies supply their own proprietary data and the models are trained on it specifically. So while something like Google’s Bard AI is trained on (and will pull from) data from all over the internet to provide a generated response, NVIDIA’s AIs will allow companies to tailor a similarly styled LLM to their own specific needs using their own proprietary data: think of ChatGPT, but built solely for one pharma company’s research division. The models range from 8 billion to 530 billion parameters, the top end more than triple the 175 billion parameters of GPT-3.
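NVIDIA hasn’t published the service’s training code, but the core idea, the same architecture producing differently specialized models depending solely on the data it is fed, can be sketched with a toy bigram language model (the corpora below are invented examples, not real training data):

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus: str):
    """Train a toy bigram word model: maps each word to a Counter of successors."""
    words = corpus.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word: str) -> str:
    """Return the most frequent successor of `word`, or '<unk>' if unseen."""
    if word not in model or not model[word]:
        return "<unk>"
    return model[word].most_common(1)[0][0]

# Identical training routine, different corpora -> differently specialized
# models. This is the "bring your own data" customization idea in miniature.
general_corpus = "the cat sat on the mat and the dog sat on the rug"
pharma_corpus = "the antibody binds the receptor and the antibody binds the target"

general_model = train_bigram_model(general_corpus)
pharma_model = train_bigram_model(pharma_corpus)

print(predict_next(general_model, "the"))  # a word from everyday text
print(predict_next(pharma_model, "the"))   # "antibody" -- the domain dominates
```

The point of the sketch is that nothing about the code changes between the two runs; only the data does, which is the same relationship NVIDIA describes between its foundation models and a customer’s proprietary corpus (at vastly larger scale).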
Imagine Stable Diffusion, but trained on Getty Images with Getty’s actual permission. NVIDIA announced such a system Tuesday, built on the Picasso cloud service: a series of responsibly sourced text-to-image and text-to-video models, “trained on Getty Images’ fully licensed assets,” Tuesday’s press release read. “Getty Images will provide royalties to artists on any revenues generated from the models.”
BioNeMo uses the same technical underpinnings as NeMo itself but is geared entirely toward drug and molecule discovery. Per Tuesday’s release, BioNeMo “enables researchers to fine-tune generative AI applications on their own proprietary data, and to run AI model inference directly in a web browser or through new cloud APIs that easily integrate into existing applications.”
“BioNeMo is dramatically accelerating our approach to biologics discovery,” Peter Grandsard, executive director of Biologics Therapeutic Discovery at Amgen said in a statement. “With it, we can pre-train large language models for molecular biology on Amgen’s proprietary data, enabling us to explore and develop therapeutic proteins for the next generation of medicine that will help patients.”
Six models will be available at launch: DeepMind’s AlphaFold2, Meta AI’s ESM2 and ESMFold predictive models, ProtGPT-2, DiffDock and MoFlow. According to the companies, incorporating AI-based predictive models helped reduce the time needed to train “five custom models for molecule screening and optimization” using Amgen’s proprietary data on antibodies from the usual three months down to four weeks.
NVIDIA announced a similar partnership with Shutterstock as well. The stock media site will use Picasso to generate 3D objects from text prompts as a new feature within Creative Flow, with plans to offer it on Turbosquid.com and through NVIDIA’s Omniverse platform.
“Our generative 3D partnership with NVIDIA will power the next generation of 3D contributor tools, greatly reducing the time it takes to create beautifully textured and structured 3D models,” Shutterstock CEO Paul Hennessy said in the release. “This first-of-its-kind partnership furthers our strategy of leveraging Shutterstock’s massive pool of metadata to bring new products, tools, and content to market. By combining our 3D content with NVIDIA’s foundation models, and utilizing our respective marketing and distribution platforms, we can capitalize on an extraordinarily large market opportunity.”
NVIDIA is also partnering with Adobe as part of the latter’s Content Authenticity Initiative, which seeks to enhance transparency and accountability within the generative AI training process. The CAI’s proposals include a “do not train” list, similar to robots.txt but for images and multimodal content, and persistent provenance tags that will detail whether a piece is AI-generated and where it came from. The two companies have also announced plans to incorporate many of Picasso’s features directly into Adobe’s suite of editing software, including Photoshop, Premiere Pro and After Effects.
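The CAI hasn’t published a final format for its “do not train” proposal, but the robots.txt analogy suggests how a crawler assembling a training set might honor such directives. The rule structure and example domain below are entirely invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical exclusion rules in the spirit of robots.txt: a mapping from
# domain to path prefixes whose content is opted out of AI training.
# This format is invented for illustration; the CAI has not specified one.
DO_NOT_TRAIN_RULES = {
    "example.com": ["/photos/", "/art/"],
}

def allowed_for_training(url: str) -> bool:
    """Return False if the URL falls under a do-not-train path prefix."""
    parsed = urlparse(url)
    for prefix in DO_NOT_TRAIN_RULES.get(parsed.netloc, []):
        if parsed.path.startswith(prefix):
            return False
    return True

print(allowed_for_training("https://example.com/photos/cat.jpg"))  # False
print(allowed_for_training("https://example.com/blog/post1"))      # True
```

As with robots.txt, a scheme like this relies entirely on crawlers choosing to respect it, which is why the CAI pairs the opt-out list with persistent provenance tags that travel with the content itself.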