As businesses increasingly integrate artificial intelligence into their workflows and products, there is a growing demand for tools and platforms that make it easier to create, test, and deploy machine learning models. This category of platforms — popularly known as machine learning operations (MLOps) — is already a crowded space, with startups like InfuseAI, Comet, Arrikto, Arize, Galileo, Tecton, and Diveplane, not to mention the offerings from incumbents like Google Cloud, Azure, and AWS.
VESSL AI Carves Out a Niche
One South Korean MLOps platform called VESSL AI is trying to carve out a niche for itself by focusing on optimizing GPU expenses using hybrid infrastructure that combines on-premise and cloud environments. And the startup has now raised $12 million in a Series A funding round to speed up the development of its infrastructure, aimed at companies that want to develop custom large language models (LLMs) and vertical AI agents.
Key Features of VESSL’s Platform
VESSL AI’s platform offers four main features:
- VESSL Run: automates AI model training
- VESSL Serve: supports real-time deployment
- VESSL Pipelines: integrates model training and data preprocessing to streamline workflows
- VESSL Cluster: optimizes GPU resource usage in a cluster environment
The Challenge of Optimizing GPU Expenses
The company’s founders previously had stints at Google, mobile game company PUBG, and several AI startups. While building machine learning models, they found they could make the process more efficient, and notably cheaper, by leveraging a hybrid infrastructure model.
"The immense amount of work involved in developing and utilizing machine learning tools was a major pain point for us," said co-founder and CEO Jaeman Kuss An.
How VESSL AI’s Multi-Cloud Strategy Works
VESSL AI’s multi-cloud strategy enables the use of GPUs from a variety of cloud service providers like AWS, Google Cloud, and Lambda. This system automatically selects the most cost-effective and efficient resources, significantly reducing customer costs.
"By leveraging multiple cloud providers, we can reduce GPU expenses by as much as 80%," said An.
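The mechanism described above, routing a workload to whichever provider currently offers the cheapest suitable GPU, can be sketched in a few lines. This is purely illustrative: VESSL AI's actual scheduler and pricing data are not public, and the provider names and hourly rates below are hypothetical placeholders.

```python
# Illustrative sketch only. Hypothetical hourly GPU prices per provider;
# VESSL AI's real selection logic and rates are not public.
GPU_PRICES_USD_PER_HOUR = {
    "aws": 3.06,           # hypothetical on-demand rate
    "google_cloud": 2.93,  # hypothetical
    "lambda": 1.29,        # hypothetical
}

def cheapest_provider(prices: dict[str, float], available: set[str]) -> str:
    """Return the lowest-cost provider among those currently available."""
    candidates = {p: cost for p, cost in prices.items() if p in available}
    if not candidates:
        raise ValueError("no provider available")
    return min(candidates, key=candidates.get)

print(cheapest_provider(GPU_PRICES_USD_PER_HOUR, {"aws", "lambda"}))  # lambda
```

In practice a scheduler like this would also weigh availability, GPU type, and data-transfer costs, but picking the cheapest eligible resource per job is the core idea behind the savings the company describes.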
Enterprise Customers
The company already has 50 enterprise customers, which include some big names like Hyundai; LIG Nex1, a South Korean aerospace and weapons manufacturer; TMAP Mobility, a mobility-as-a-service joint venture between Uber and Korean telco company SK Telecom; as well as tech startups Yanolja, Upstage, ScatterLab, and Wrtn.ai.
Partnerships with Oracle and Google Cloud
VESSL AI has also strategically partnered with Oracle and Google Cloud in the U.S.
Growth Plans
The startup plans to use the funding to further develop its platform and expand its team, which currently numbers 35 people across South Korea and its San Mateo office in the U.S.
MLOps Critical for Reliability and Performance
Enterprise companies consider MLOps critical for reliability and performance, since it helps them streamline workflows and optimize resources. VESSL AI is one of several startups trying to make those operations more accessible to businesses.
Investors
The investors for the Series A round include:
- A Ventures
- Ubiquoss Investment
- Mirae Asset Securities
- Sirius Investment
- SJ Investment
- Google Cloud
Conclusion
The MLOps platform market is increasingly crowded, but VESSL AI's focus on cutting GPU expenses with hybrid infrastructure, together with its growing customer base and its partnerships with Oracle and Google Cloud, gives it a distinct position in the space.