The Trending Commercial On-Premise Cloud by Oxide Computer
Will It Reshape the Generative AI Landscape?
On 26 October 2023, Oxide Computer Company, whose mission is to bring the developer experience and operational efficiencies of the public cloud to on-premises environments, introduced the industry's first commercial Cloud Computer. The comprehensive rack-scale system integrates hardware and software into a one-stop package for on-premises cloud computing. The company also secured a significant $44 million Series A funding round, led by Eclipse and joined by Intel Capital, Riot Ventures, Counterpart Ventures, and Rally Ventures, to expedite production for Fortune 1000 companies. With this latest round, Oxide has raised a total of $78 million to date.
Oxide Computer has made a significant breakthrough in the technology landscape by introducing its first commercial on-premise cloud solution. With digital transformation and generative AI driving demand, cloud computing is set to surge, and both business models and product functionality will shape how that demand is met.
Worldwide, business spending on cloud computing infrastructure is forecast to exceed $1 trillion for the first time in 2024. The global generative AI market is projected to grow from $43.87 billion in 2023 to $667.96 billion by 2030, a CAGR of 47.5% over the forecast period. In addition, the growing demand for generative AI products, powered by cloud computing, has the potential to generate around $280 billion of new software revenue. This growth can be attributed to the popularity of specialized assistants, novel infrastructure products, and coding acceleration tools known as copilots. Major companies such as Amazon Web Services, Microsoft, Google, and Nvidia stand to gain the most from this trend as businesses increasingly migrate their workloads to the public cloud.
On-premise Cloud – A New Era Unfolds
Cloud computing is well suited to the training phase of AI models, when data and computational demands are typically at their highest. It provides the flexibility to scale resources dynamically, avoiding the need to invest in infrastructure sized for the maximum training load. For AI workloads, cloud computing offers scalability, cost-effectiveness, and improved accessibility, and it enables companies to start smaller generative AI projects as trials. Over the past decade, SaaS companies have grown into multi-billion-dollar businesses on the back of cost-effective cloud-based solutions. However, many of these companies are reaching a scale at which running in the cloud becomes too expensive in the long term. Concerns about cost savings, better performance, and data privacy thus create a market gap for on-prem solutions.
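To make the long-term cost argument concrete, the sketch below works through a simple rent-versus-own break-even calculation in Python. Every dollar figure is a hypothetical assumption chosen for illustration, not vendor pricing, and the model deliberately ignores factors such as financing, depreciation, staffing, and hardware refresh cycles.

```python
# Illustrative break-even comparison: renting public cloud capacity vs. owning a rack.
# All figures are hypothetical assumptions, not vendor pricing.

CLOUD_MONTHLY_COST = 60_000    # assumed monthly bill for an equivalent public cloud footprint
RACK_UPFRONT_COST = 1_200_000  # assumed purchase price of an on-prem rack
RACK_MONTHLY_OPEX = 15_000     # assumed monthly power, space, and support costs


def cumulative_cost(months: int) -> tuple[float, float]:
    """Return (cloud_total, on_prem_total) spend after the given number of months."""
    cloud_total = CLOUD_MONTHLY_COST * months
    on_prem_total = RACK_UPFRONT_COST + RACK_MONTHLY_OPEX * months
    return cloud_total, on_prem_total


def break_even_month(horizon_months: int = 120) -> int | None:
    """Return the first month at which owning becomes cheaper than renting, if any."""
    for month in range(1, horizon_months + 1):
        cloud, on_prem = cumulative_cost(month)
        if on_prem <= cloud:
            return month
    return None


if __name__ == "__main__":
    # Under these assumptions the crossover lands around month 27 (just over two years).
    print(f"Break-even month: {break_even_month()}")
```

Under these placeholder numbers, ownership pays for itself in a little over two years; the real crossover point depends entirely on workload size and utilization, which is why steady, large-scale workloads are the natural candidates for on-prem.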
Oxide Computer's business model is to sell a data center appliance, the Oxide Cloud Computer, that combines computing, storage, and networking in a one-stop package integrated with management software. The system is designed to make computing power as user-friendly as the public cloud: administrators can perform tasks such as provisioning new virtual machines with a few clicks in the interface. The key selling point of Oxide's products is that they offer an ease of use comparable to the public cloud. Setting up traditional cloud infrastructure usually takes weeks or months, whereas Oxide reduces the effort to just a few hours. The built-in management software lets users allocate hardware resources to a project through a drag-and-drop interface.
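To give a sense of what this kind of self-service provisioning looks like programmatically, here is a minimal sketch of creating a virtual machine against an on-prem control plane over HTTPS. The endpoint URL, request fields, and authentication scheme are hypothetical placeholders for illustration; they do not describe Oxide's actual API.

```python
# Minimal sketch of programmatic VM provisioning against an on-prem cloud control plane.
# The endpoint, field names, and token handling are hypothetical placeholders,
# not Oxide's actual API.
import os

import requests

API_BASE = "https://cloud.internal.example/v1"  # hypothetical on-prem control plane URL
TOKEN = os.environ["CLOUD_API_TOKEN"]           # assumed to be issued by the rack's console


def create_instance(name: str, vcpus: int, memory_gib: int, image: str) -> dict:
    """Ask the control plane for a new virtual machine and return its response."""
    payload = {"name": name, "vcpus": vcpus, "memory_gib": memory_gib, "image": image}
    resp = requests.post(
        f"{API_BASE}/instances",
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    instance = create_instance("demo-vm", vcpus=4, memory_gib=16, image="ubuntu-22.04")
    print(instance)
```

The same request could just as easily be scripted into a CI pipeline, which is the point of bringing a cloud-style API on-premises.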
While SaaS companies dominate the cloud-based applications market, Oxide Computer is bringing on-premise solutions to the forefront. Oxide aims to cater to businesses that prefer to manage their own servers. It builds servers around standard x86 processors paired with its own purpose-built low-level software, aiming for better performance and reliability. In addition to developing rack-scale hardware, Oxide is creating a complete software package so the system is functional out of the box. The company uses existing open-source components wherever feasible and develops its own solutions where needed. By closely integrating all firmware, software, and even the hypervisor with the underlying hardware, Oxide seeks to provide a comprehensive, seamless software stack. As introduced by Oxide Computer, its product combines networking, computing, and storage capabilities in a single, plug-and-play box. According to the company, the rack-scale design improves per-watt utilization by up to 70% and energy efficiency by as much as 35% compared with traditional racks and servers.
Figure. Oxide Computer
As companies delve deeper into digital transformation and the demand for cloud computing in generative AI grows, it is vital to strike a balance between performance and scalability, the cost of computing, and data security. Oxide Computer, with its on-premise IaaS approach, is adding a new page to the cloud computing business model landscape (Table 1).
Table 1. Cloud Computing Business Model
Business Model | Description |
---|---|
Software-as-a-Service (SaaS) | Software applications are delivered over the internet on a subscription basis. In this model, the software is centrally hosted by a provider and accessible to users through a web browser or a thin client interface. Users do not need to worry about managing the underlying infrastructure, as the provider takes care of maintenance, security, and updates. Examples of SaaS include web-based email services, customer relationship management (CRM) software, and collaboration tools like Google Workspace or Microsoft Office 365. |
Platform-as-a-Service (PaaS) | PaaS provides a platform and environment for developers to build, deploy, and manage applications without the complexity of infrastructure management. It offers a framework that includes operating systems, development tools, databases, and other resources needed for application development. With PaaS, developers can focus on coding and application logic while the provider handles the underlying infrastructure and scalability. PaaS allows for easier development, testing, and deployment of applications. Examples of PaaS include Microsoft Azure App Service, Google App Engine, and Heroku. |
Infrastructure-as-a-Service (IaaS) | IaaS is a cloud computing model that provides virtualized computing resources over the internet. It offers virtual machines, storage, networking, and other infrastructure components as a service. Users have more control over the infrastructure, as they are responsible for managing the operating systems, applications, and data hosted on the infrastructure. IaaS allows for scalability and flexibility, enabling users to quickly scale resources up or down based on their needs. Examples of IaaS include Amazon Web Services (AWS) EC2, Microsoft Azure Virtual Machines, and Google Compute Engine. |
On-premise IaaS | Building on IaaS, Oxide Computer introduces a model in which the infrastructure is “owned” by users rather than rented. All-in-one solutions that combine networking, computing, and storage with simple configuration are opening a new era of cloud computing. |
The Benefits of On-premise Cloud Computing for Generative AI
With the generative AI market growing, GPT-based solutions are disrupting many business operations. AI agents, for example, require high-performance computing for customized training, domain-specific understanding, and reasoning. Frameworks such as LangChain and LlamaIndex provide essential tools for data ingestion, structuring, retrieval, and integration with various application frameworks. Because these frameworks rely on vector embeddings and storage, data privacy has become a major concern. On-premises cloud solutions offer a compelling advantage in this regard: by running on on-premises infrastructure, organizations gain greater control and confidence over the privacy and security of their data, reducing the risks of data breaches or unauthorized access that could occur in a public cloud environment.
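To illustrate how keeping embeddings on-premises can work in practice, here is a minimal retrieval sketch that computes vector embeddings locally with the open-source sentence-transformers library and ranks documents by cosine similarity in memory, so neither documents nor queries leave the local environment. The model name and sample documents are illustrative, and the snippet shows the generic pattern rather than LangChain's or LlamaIndex's specific APIs.

```python
# Minimal on-prem retrieval sketch: embeddings are computed locally and kept in memory,
# so no document or query text leaves the environment.
# Requires: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# A small open-source embedding model that runs on local hardware.
model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Quarterly revenue figures for the on-prem business unit.",
    "Internal design notes for the rack-scale control plane.",
    "Employee handbook section on data handling policies.",
]

# Embed the corpus once and keep the vectors in a local NumPy array.
doc_vectors = model.encode(documents, normalize_embeddings=True)


def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    """Return the top_k documents ranked by cosine similarity to the query."""
    query_vector = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ query_vector  # cosine similarity, since vectors are normalized
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), documents[i]) for i in ranked]


if __name__ == "__main__":
    for score, doc in search("How should sensitive data be handled?"):
        print(f"{score:.3f}  {doc}")
```

In a production pipeline, a framework such as LangChain or LlamaIndex would wrap these steps, but the privacy property comes from where the embedding model and vector store run, not from the framework itself.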
On-premise cloud also brings several other advantages to the generative AI landscape. It offers lower latency and faster data processing, as the computational resources sit physically closer to the AI agents and data sources. This allows for quicker training iterations, faster model inference, and reduced network latency, leading to better performance and operational efficiency. In addition, companies can scale their computational resources to the specific needs of their generative AI workloads without relying on the limitations or pricing models of external cloud providers. On-premise cloud also integrates seamlessly with existing infrastructure and workflows.
Table 2. Advantages of on-premise Cloud
Advantage | Description |
---|---|
Data privacy and security | By keeping the data on-premises, organizations can maintain control over sensitive or proprietary information, ensuring data privacy and complying with regulatory requirements. On-premises cloud computing allows for tighter security measures, reducing the risk of data breaches or unauthorized access to the generative AI models and the data used for training. |
Reduced latency | By deploying the models on-premises, organizations can reduce network latency and achieve faster response times. This is particularly important for real-time applications where immediate feedback or generation is required. |
Customization and optimization | Organizations can select hardware configurations, networking setups, and storage systems that align with the unique requirements of generative AI. This customization allows for efficient utilization of resources and can result in better performance and cost-effectiveness. |
Compliance and regulation | Industries such as healthcare and finance have strict compliance and regulatory frameworks that govern the storage and processing of data. By utilizing on-premises cloud computing for generative AI, organizations can ensure compliance with these requirements, as they have more control over the infrastructure and data-handling processes. |
Company Analysis
Cloud computing continues to evolve, enabling greater accessibility and cost-effectiveness for cloud-based applications. As Oxide Computer Company pioneers the next level of on-premises cloud-based systems, we also note ongoing advancements across the rest of the cloud computing sector.
Listed Companies
Intel
Oxide Computer received $44 million in funding led by Eclipse, with participation from Intel Capital and other venture firms. The funding will be used to accelerate production for Fortune 1000 enterprises. Oxide aims to solve the challenge of on-premises infrastructure by delivering a unified product that brings the developer experience and operational efficiencies of the public cloud to on-premises environments. The company's Cloud Computer offers purpose-built hardware and software for hyperscale cloud computing in on-premises data centers, providing improved energy efficiency, space utilization, and rapid deployment. Oxide enables developers to build, run, and operate applications with improved security, latency, and control, while empowering enterprises to up-level IT operations. The funding will support Oxide's mission to redefine the economics of cloud ownership.
Oracle
In October 2023, Oracle announced plans to establish cloud computing infrastructure in Rwanda by June 2024, aligning with the country's goal of becoming a regional tech hub. The partnership aims to provide cloud services to startups, governments, and businesses, fostering an innovation-friendly environment. The initial phase involves constructing a data center in Kigali Innovation City, expected to launch in June 2024. The move strengthens Rwanda's digital infrastructure and supports its ambitions in the technology sector. By embracing the cloud's capabilities and leveraging Oracle's support, Rwanda can accelerate its technological progress and emerge as a regional leader in innovation.
Microsoft
In October 2023, Microsoft introduced Radius, a cloud-native application platform aimed at improving collaboration between the developers and platform engineers who deliver and manage cloud-native applications. The platform ensures that these applications align with corporate standards for cost efficiency, operational efficiency, and security by default.
Radius is an open-source project developed by Microsoft's Azure Incubation team. It enables the deployment of applications across private cloud, Microsoft Azure, and Amazon Web Services. Radius is designed to accommodate the specific requirements of developers by supporting established technologies like Kubernetes, existing infrastructure tools such as Terraform and Bicep, and seamless integration with current continuous integration and continuous delivery (CI/CD) systems like GitHub Actions.
Startup Companies
In 2023, cloud companies have been dedicated to helping customers optimize their spending by addressing issues such as redundant and underutilized cloud infrastructure. They are also striving to enhance resiliency and introduce next-generation cloud observability solutions to meet evolving customer needs. Startups like CoreWeave, specializing in GPU-focused solutions, are among those driving innovation in 2023. Additionally, Chronosphere and Yotascale, experts in cloud management, are capitalizing on the expanding use cases surrounding artificial intelligence and machine learning.
CoreWeave
CoreWeave, a GPU-focused cloud computing provider based in New Jersey, United States, has secured $2.9 billion in debt financing to address the increasing demand for the server chips used to train and run AI-powered software. The credit facility was led by existing investors Blackstone and Magnetar Capital, with participation from other firms. CoreWeave plans to use the loan to purchase hardware for existing client contracts, hire top talent, and expand its data center footprint to 14 data centers by the end of the year. The company, which initially focused on cryptocurrency applications before shifting to general-purpose computing and generative AI technologies, offers access to a range of Nvidia GPUs for various use cases. While competing with major cloud providers like Amazon Web Services, Microsoft Azure, and Google Cloud is challenging, CoreWeave aims to differentiate itself through its generative-AI-focused infrastructure and additional offerings such as its accelerator program.
Vultr
Vultr, a leading cloud computing platform, has launched the Vultr GPU Stack and Container Registry to support the development and deployment of AI models. These offerings provide instant provisioning of NVIDIA GPUs across Vultr's global data center locations, enabling faster AI and ML model development. The GPU Stack includes a pre-configured operating system and software environment with NVIDIA CUDA Toolkit and drivers, allowing for immediate deployment of popular AI models. The Container Registry integrates with the GPU stack, allowing organizations to source and deploy NVIDIA ML models to Kubernetes clusters across Vultr's cloud data centers.
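For a sense of what deploying a GPU-backed model container to a Kubernetes cluster typically involves, here is a minimal sketch using the official Kubernetes Python client. The container image, registry, and namespace are placeholders, and the snippet shows the generic Kubernetes GPU-scheduling pattern (an `nvidia.com/gpu` resource limit) rather than Vultr's specific tooling.

```python
# Minimal sketch: deploy a GPU-backed inference container to a Kubernetes cluster.
# The image name, registry, and namespace are placeholders.
# Requires: pip install kubernetes
from kubernetes import client, config


def deploy_gpu_inference(namespace: str = "default") -> None:
    """Create a single-replica Deployment that requests one NVIDIA GPU."""
    config.load_kube_config()  # assumes a local kubeconfig pointing at the target cluster
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="inference-server",
        image="registry.example.com/models/inference-server:latest",  # placeholder image
        ports=[client.V1ContainerPort(container_port=8000)],
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "inference-server"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="inference-server"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "inference-server"}),
            template=pod_template,
        ),
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    deploy_gpu_inference()
```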
Vultr has reached a significant milestone of surpassing $125 million in Annual Recurring Revenue (ARR). With its customer-centric approach, Vultr has enabled over 1.5 million users to deploy more than 50 million cloud compute instances globally. This achievement highlights Vultr's success in providing a straightforward and user-friendly platform for deploying various applications, including SaaS, mobile apps, websites, video streaming, and multiplayer games.
Runpod
In March 2023, RunPod raised $2 million in seed funding. RunPod's mission is to provide core GPU compute and make GPU resources affordable and accessible to developers, startups, and enthusiasts. While big cloud providers like AWS, GCP, and Azure have made GPU access costly, RunPod decided to build its own GPU infrastructure to offer cheaper and more efficient options. It started with Community Cloud, a decentralized network of GPU hosts, and received significant support from the GPU cloud community. As demand for GPUs grew, RunPod introduced Secure Cloud, which provides managed GPUs in reliable data centers with features such as high data-transfer speeds, redundancy, localized network volumes, and enhanced security at a significantly lower cost than the big cloud providers. RunPod also developed a Serverless architecture that abstracts away the complexity of scaling GPU usage for AI inference and training, and it offers AI Endpoints that let users run advanced generative AI models with just a few lines of code. RunPod continues its efforts to make the platform seamless and accessible.
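For a sense of what running a generative model behind a serverless GPU endpoint "with a few lines of code" can look like, here is a minimal submit-and-poll sketch over HTTPS. The URLs, payload shape, and status values are hypothetical placeholders rather than RunPod's documented API.

```python
# Illustrative submit-and-poll flow for a serverless GPU inference endpoint.
# The URLs, payload shape, and status values are hypothetical placeholders,
# not RunPod's documented API.
import os
import time

import requests

BASE_URL = "https://serverless.example.com/v1"  # hypothetical endpoint base URL
HEADERS = {"Authorization": f"Bearer {os.environ['ENDPOINT_API_KEY']}"}


def run_job(prompt: str, poll_seconds: float = 2.0, timeout_s: float = 120.0) -> str:
    """Submit a generation job, poll until it completes, and return the output text."""
    submitted = requests.post(
        f"{BASE_URL}/jobs", json={"prompt": prompt}, headers=HEADERS, timeout=30
    )
    submitted.raise_for_status()
    job_id = submitted.json()["id"]  # assumed response field

    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=HEADERS, timeout=30)
        status.raise_for_status()
        body = status.json()
        if body.get("status") == "COMPLETED":  # assumed terminal status value
            return body["output"]              # assumed output field
        time.sleep(poll_seconds)
    raise TimeoutError(f"Job {job_id} did not complete within {timeout_s} seconds")


if __name__ == "__main__":
    print(run_job("Write a haiku about GPU scheduling."))
```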
Conclusion
The ongoing digital transformation across industries, the increasing reliance on data-driven technologies, and the demand for scalable and flexible computing solutions are driving the growth of cloud computing.
While public cloud platforms have gained significant traction, there is also a promising outlook for on-premise cloud computing. Organizations with stringent data privacy requirements or specialized workloads, such as generative AI, can benefit from the control, customization, and security that on-premise cloud solutions offer. The market for on-premise cloud computing in generative AI is expected to grow as companies seek to balance the benefits of the cloud with their unique needs. Investing in on-premise cloud computing requires careful evaluation of infrastructure investments, market demand, the competitive landscape, and integration capabilities. However, the potential for differentiation, the ability to address industry-specific compliance needs, and the provision of customized solutions can create opportunities for growth and market share.
In addition, continued advancements in technology, such as hardware innovations and optimized software stacks, are likely to enhance the performance and cost-effectiveness of on-premise cloud computing. As computing demand increases, many companies will seek a balance between all-in-one cloud solutions and the cost of computing. Especially in generative AI, where computing demand is surging, Oxide makes it possible to “own the cloud” instead of renting it. This concept could reshape the cloud computing business model and shift the economics of the generative AI landscape.
In conclusion, this shift toward owning rather than renting the cloud, coupled with the increasing adoption of generative AI, creates a favorable long-term investment outlook for on-premise cloud computing.