H2O AI launches H2OGPT and LLM Studio to help companies make their own chatbots
California-based H2O AI, a company helping enterprises with AI system development, today announced the launch of two fully open-source products: a generative AI product called H2OGPT and a no-code development framework dubbed LLM Studio.
The offerings, available starting today, provide enterprises with an open, transparent ecosystem of tooling to build their own instruction-following chatbot applications similar to ChatGPT.
The launch comes as more and more companies look to adopt generative AI models for business use cases but remain wary of sending sensitive data to a centralized large language model (LLM) provider that serves a proprietary model behind an API.
Many companies also have specific needs for model quality, cost and desired behavior — which closed offerings fail to deliver.
How do H2OGPT and LLM Studio help?
As H2O explains, the no-code LLM Studio gives enterprises a fine-tuning framework: users choose from fully permissive, commercially usable code, datasets and models (ranging from 7 billion to 20 billion parameters, with 512-token sequence lengths) and start building a GPT for their own needs.
“One can take open assist–type datasets and start using the base model to build a GPT,” Sri Ambati, the cofounder and CEO of H2O AI, told VentureBeat. “They can then fine-tune it for a specific use case using their own dataset, as well as add additional tuning filters such as specifying the maximum prompt length and answer length or comparison with GPT.”
“Essentially,” he said, “with every click of a button, you’re able to build your own GPT and then publish it back into Hugging Face, which is open source, or internally on a repo.”
Meanwhile, H2OGPT is H2O's own open-source LLM, fine-tuned to be plugged into commercial offerings. It works much like OpenAI's ChatGPT, but adds a much-needed layer of introspection and interpretability that lets users ask why a certain answer was given.
Users on H2OGPT can also choose from a variety of open models and datasets, see response scores, flag issues and adjust output length, among other things.
“Every company needs its own GPT. H2OGPT and H2O LLM Studio will empower all our customers and communities to make their own GPT to help improve their products and customer experiences,” Ambati said. “Open source is about freedom, not just free. LLMs are far too important to be owned by a few large tech giants and nations. With this significant contribution, all our customers and community will be able to partner with us to make open-source AI and data the most accurate and powerful LLMs in the world.”
Currently, roughly half a dozen enterprises are forking the core H2OGPT project to build their own GPTs, though Ambati declined to disclose specific customer names at this time.
Open source or not: a matter of debate
H2O’s offerings come more than a month after Databricks, the data lakehouse company, made a similar move by releasing the code for Dolly, its open-source LLM.
“With 30 bucks, one server and three hours, we’re able to teach [Dolly] to start doing human-level interactivity,” said Databricks CEO Ali Ghodsi.
But as efforts to democratize generative AI in an open, transparent way continue, many still advocate the closed approach, starting with OpenAI, which has not even disclosed the contents of its GPT-4 training set, citing the competitive landscape and safety implications.
“We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, AI — AGI — is going to be extremely, unbelievably potent, then it just does not make sense to open source,” Ilya Sutskever, OpenAI’s chief scientist and cofounder, told The Verge in an interview. “It is a bad idea … I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing AI is just not wise.”
Ambati, for his part, acknowledged the possibility that AI could be put to malicious use, but emphasized that far more people want to do good with it. Misuse, he said, could be handled with safeguards such as AI-driven curation or similar checks.
“We have enough humans wanting to do good with AI with open source. And that’s kind of why democratization is a necessary force in this manner,” he noted.