Last week, we introduced Medusa Cloud, our managed services platform that provides scalable infrastructure tailor-made for Medusa applications. Developers connect their GitHub repository, and within minutes, they get a fully hosted Medusa setup with production and preview environments.
Building Medusa Cloud required a server framework so we wouldn't have to build everything from scratch: HTTP endpoints, event listeners, services, and more. Fortunately, this is exactly what we have been working on for the past few years. Medusa 2.0 ships with a framework designed to make building robust and scalable applications fast and fun. Naturally, when it came to building Medusa Cloud, we turned to our own tools.
This blog post is the first in a series exploring how we’re leveraging our modules and framework to build Medusa Cloud. The series demonstrates the versatility of our tools and aims to inspire others to create their own digital commerce experiences with Medusa.
Medusa Cloud
Medusa Cloud relies on a range of long-running operations to provision resources across multiple systems: databases, infrastructure providers, version control systems, and more. Orchestrating these operations reliably is incredibly hard because there are many moving parts, and those parts occasionally crash. Your application needs to ensure data consistency across systems while remaining resilient to network outages.
We knew of this complexity going into the project because it resembles building enterprise-grade commerce applications, something we have been doing for a long time now. The lessons from building these large-scale applications eventually led to the creation of our framework, and in particular, Workflows, a tool purpose-built to tame the complexity of cross-system operations.
Let’s look at how we use Workflows in Medusa Cloud.
Provisioning infrastructure with Workflows
When you sign up for Medusa Cloud, the first step is creating a project and connecting a GitHub repository. Behind the scenes, this triggers a series of operations across several independent systems.
```ts
export const createProjectWorkflow = createWorkflow(
  "create-project-workflow",
  (input) => {
    // 1. Check user has access to repository
    const repository = validateRepositoryStep(input)

    // 2. Determine Dockerfile from package manager
    const dockerfile = selectDockerfileStep(input)

    // 3. Create project
    const project = createProjectStep({
      ...input,
      dockerfile,
    })

    // 4. Create Neon project and set up build pipeline
    parallelize(
      upsertNeonProject(),
      upsertBuildPipeline()
    )

    // 5. Create production and preview environments
    parallelize(
      createProjectEnvironmentStep({ type: "prod", ... }),
      createProjectEnvironmentStep({ type: "preview", ... })
    )

    // 6. Mark project as ready
    updateProjectsStep([{ id: project.id, status: "ready" }])

    // 7. Emit project ready event
    emitEventStep({ eventName: ProjectWorkflowEvents.READY })

    // 8. Send success notification
    successNotificationStep()

    return new WorkflowResponse(project)
  }
)
```
The workflow performs the following tasks:
- Validate the user’s GitHub access
- Determine the Dockerfile based on the package manager in the repository
- Create a project entry in Medusa’s database
- Provision infrastructure
- Set up a build pipeline
- Create Neon project for hosting the database
- Create project environments in Medusa’s database
- Mark project as ready
- Emit an event indicating the success of the operation
- Send a notification indicating the success of the operation
This process involves multiple systems and asynchronous tasks, some of which can take several minutes to complete. Workflows and its durable execution engine excel at dealing with such complexity. The state of workflows is persisted in a data store, allowing long-running tasks to finish in the background while subsequent steps execute as if the entire operation were synchronous.
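To make the persistence idea concrete, here is a minimal, self-contained sketch of durable step execution. The runner, store, and step names are illustrative stand-ins, not Medusa's actual engine: completed step results are persisted keyed by workflow and step, so a re-run after a crash reuses prior results instead of redoing work.

```ts
type StepFn<T> = () => Promise<T>

class DurableRunner {
  // In production this would be a database table; a Map stands in here.
  private store = new Map<string, unknown>()

  async run<T>(workflowId: string, stepName: string, step: StepFn<T>): Promise<T> {
    const key = `${workflowId}:${stepName}`
    if (this.store.has(key)) {
      // Step already completed in a previous run: reuse the persisted result
      return this.store.get(key) as T
    }
    const result = await step()
    this.store.set(key, result) // persist before moving to the next step
    return result
  }
}
```

Because each result is saved as soon as its step finishes, a workflow interrupted mid-way picks up exactly where it left off on the next invocation.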
Dealing with failures
Now, consider a scenario where the task to upsert a build pipeline (step 4) fails because the downstream provider is experiencing an outage. In parallel with this failing operation, we've successfully created a Neon project. Without a mechanism for cleaning up after such an error, your data ends up in an inconsistent state, and unused resources accumulate over time. Fortunately, Workflows' durable execution engine also takes care of this for us with its built-in retries and rollback mechanism.
In the event of a failure, the workflow is retried based on its configuration. If all retries fail, compensating actions are triggered for all the steps that have run up until the point of failure, ensuring a clean slate.
In this specific scenario, the rollback actions include:
- Clean up provisioned infrastructure
- Delete Neon project
- Emit event indicating failure
- Mark the project with a failure status
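The retry-then-compensate pattern described above can be sketched in a few lines of TypeScript. This is an illustrative stand-in, not Medusa's engine: each step pairs an action with an optional compensating action, and when retries are exhausted, completed steps are rolled back in reverse order.

```ts
type Step<T> = {
  name: string
  invoke: () => Promise<T>
  compensate?: () => Promise<void>
}

async function runWithCompensation(steps: Step<unknown>[], maxRetries = 2) {
  const completed: Step<unknown>[] = []
  for (const step of steps) {
    let attempt = 0
    while (true) {
      try {
        await step.invoke()
        completed.push(step)
        break
      } catch (err) {
        if (++attempt > maxRetries) {
          // All retries failed: roll back completed steps, newest first
          for (const done of [...completed].reverse()) {
            await done.compensate?.()
          }
          throw err
        }
      }
    }
  }
}
```

Running the steps through such a runner guarantees that a failure partway through, say, the build pipeline upsert, leaves no orphaned Neon project behind.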
Workflows' durability and built-in retries make it a powerful tool for reliably provisioning infrastructure in Medusa Cloud. Without Workflows, managing such operations would be significantly more challenging, requiring your team to build equivalents of our retry mechanism and rollback functionality, among other things.
You can read more about Long-running Workflows in our documentation.
From Commerce to Cloud
Workflows was originally built to orchestrate operations in large-scale commerce applications that comprise multiple systems, a pattern particularly prominent in enterprise setups.
In Medusa 2.0, we redesigned our core architecture, and each module is now treated as an independent system. Workflows tie the modules together to enable cross-module operations, similar to how it's used to orchestrate cross-system operations in Medusa Cloud.
For example, adding an item to a cart involves:
- Retrieving the product variant from the Product module
- Calculating the price with the Pricing module
- Generating and adding the item through the Cart module
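The cross-module flow above can be sketched as follows. The module shapes and names here are hypothetical stand-ins for illustration, not Medusa's actual module APIs:

```ts
type Variant = { id: string; title: string }
type LineItem = { variantId: string; title: string; unitPrice: number }

// Stand-ins for the three independent modules involved
const productModule = {
  retrieveVariant: async (id: string): Promise<Variant> => ({ id, title: "Medusa Tee" }),
}
const pricingModule = {
  calculatePrice: async (_variantId: string): Promise<number> => 2500, // price in cents
}
const cartModule = {
  items: [] as LineItem[],
  addItem(item: LineItem) { this.items.push(item) },
}

// The workflow ties the modules together into one cross-module operation
async function addToCart(variantId: string): Promise<LineItem> {
  const variant = await productModule.retrieveVariant(variantId)    // Product module
  const unitPrice = await pricingModule.calculatePrice(variant.id)  // Pricing module
  const item = { variantId: variant.id, title: variant.title, unitPrice }
  cartModule.addItem(item)                                          // Cart module
  return item
}
```

Each module stays independent; only the workflow knows about all three, which is what makes the same orchestration approach reusable for Medusa Cloud's cross-system operations.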
The same principles that drive commerce operations in Medusa’s core now also power the complex operations of Medusa Cloud.
Conclusion
Medusa Cloud exemplifies how our framework and tools, like Workflows, allow small engineering teams to quickly build scalable applications involving complex operations. In future posts, we'll continue to explore how other primitives in Medusa's core offering are powering Medusa Cloud.
If you are curious to know more about Medusa Cloud, you can sign up and book a demo here.