Resolved: Configure a cloud-based VS Code IDE on AWS


CONTEXT: We have a platform where users can create their own projects – multiple projects per user. We need to provide them with a browser-based IDE to edit those projects. We decided to go with code-server. For this we need to configure an auto-scalable cluster on AWS. When the user clicks “Edit Project”, we will bring up a new container each time.
QUESTION: How do we pass parameters from the URL query string into a startup script, to pre-configure the workspace in a Docker container when it starts?
Let’s say the stack is AWS + ECS + Fargate. We could use Kubernetes instead of ECS if it helps.
I don’t have any experience in cluster configuration. I will appreciate any help, or at least a direction to dig further.


The above can be achieved in multiple ways on AWS ECS. The basic requirement for such a system is to launch and terminate containers on the fly while persisting file changes. (I will focus on launching the containers.)

Using the AWS SDKs:

This can be achieved easily with the AWS SDKs, using a base task definition: the SDKs allow starting tasks with overrides on the base task definition.
For example, if the task definition specifies 2 GB of memory, the SDK can override the memory with a parameterised value when launching a task from that task definition.
Refer to the boto3 (AWS SDK for Python) docs.
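As a minimal sketch: the helper below builds the arguments for boto3's `ecs.run_task()` call, overriding the container's memory and injecting the project ID as an environment variable for the startup script to read. The container name `code-server` and the `PROJECT_ID` variable are assumptions — match them to your own task definition.

```python
def build_run_task_kwargs(cluster, task_def, project_id, memory_mib=2048):
    """Build kwargs for ecs.run_task(), overriding memory and injecting
    the project ID so the container's startup script can pre-configure
    the workspace. Container and variable names are assumptions."""
    return {
        "cluster": cluster,
        "taskDefinition": task_def,
        "launchType": "FARGATE",
        "count": 1,
        "overrides": {
            "containerOverrides": [
                {
                    "name": "code-server",   # container name from the task definition
                    "memory": memory_mib,    # override the base definition's memory
                    "environment": [
                        {"name": "PROJECT_ID", "value": project_id},
                    ],
                }
            ]
        },
        # "networkConfiguration" (subnets / security groups) is also required
        # for Fargate tasks; omitted here for brevity.
    }

# Launching the task (requires boto3 and AWS credentials):
# import boto3
# ecs = boto3.client("ecs")
# response = ecs.run_task(**build_run_task_kwargs(
#     "ide-cluster", "code-server-base:1", project_id="1234"))
```

Your API extracts the parameters from the URL query string and passes them straight into this call; inside the container, the startup script reads `PROJECT_ID` from the environment.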

Overall Solution

Now that we know how to run custom tasks with the Python SDK on demand, the overall flow for your application is: your API calls an AWS Lambda function with parameters, the function spins up a task and keeps checking its status, and traffic is routed to it once the status is healthy.
  1. The API calls an AWS Lambda function with parameters.
  2. The Lambda function uses the AWS SDK to create a new task with overrides on the base task definition (assuming the base task definition already exists).
  3. Keep checking the status of the new task in the same function call, and set a flag in your database so your front end can react to it.
  4. Once the status is healthy, add a rule to the Application Load Balancer using the AWS SDK to route traffic to the task's IP without exposing the IP address to the end client. (An AWS Application Load Balancer can get expensive; I'd advise using Nginx or HAProxy on EC2 to manage dynamic routing.)
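Steps 3–4 can be sketched as follows. The helpers parse the response shape that ECS `describe_tasks()` returns for Fargate tasks in `awsvpc` mode; the polling itself is commented out because it needs boto3 and live AWS credentials, and the cluster/task names are placeholders.

```python
def task_is_running(describe_tasks_response):
    """True once every task in a describe_tasks() response reports RUNNING."""
    tasks = describe_tasks_response.get("tasks", [])
    return bool(tasks) and all(t.get("lastStatus") == "RUNNING" for t in tasks)

def extract_private_ip(describe_tasks_response):
    """Pull the task's private IP from its ENI attachment details
    (Fargate tasks in awsvpc mode report it as 'privateIPv4Address')."""
    for task in describe_tasks_response.get("tasks", []):
        for attachment in task.get("attachments", []):
            for detail in attachment.get("details", []):
                if detail.get("name") == "privateIPv4Address":
                    return detail["value"]
    return None

# Polling sketch (requires boto3 and credentials):
# import boto3
# ecs = boto3.client("ecs")
# ecs.get_waiter("tasks_running").wait(cluster="ide-cluster", tasks=[task_arn])
# resp = ecs.describe_tasks(cluster="ide-cluster", tasks=[task_arn])
# ip = extract_private_ip(resp)  # register this IP with your router, then set
#                                # the "ready" flag in the database (step 3)
```

boto3 also ships a built-in `tasks_running` waiter for ECS, which saves you writing the retry loop by hand.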


Ensure your image is lightweight and the startup time is under 15 minutes, as Lambda cannot execute beyond that. If it is longer, create a microservice for launching ad-hoc containers and host it on EC2.

Using Terraform:

If you are looking for infrastructure provisioning, Terraform is the way to go. It has a learning curve, so I recommend it as a secondary option.
Terraform is popular for parametrising via variables, and it can be plugged in easily as a backend for an API. The flow of your application remains the same from step 1, but instead of AWS Lambda, the API calls your ad-hoc container microservice, which in turn runs the Terraform scripts, passing variables to them.
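A minimal sketch of how that microservice might shell out to Terraform, passing the per-project parameters from the API call as `-var` flags (the working directory and variable names are assumptions):

```python
def terraform_apply_cmd(variables):
    """Build a `terraform apply` command line, passing each parameter
    as a -var flag (e.g. the project ID from the original API call)."""
    cmd = ["terraform", "apply", "-auto-approve"]
    for key, value in variables.items():
        cmd += ["-var", f"{key}={value}"]
    return cmd

# In the microservice, run it against the project's Terraform config:
# import subprocess
# subprocess.run(terraform_apply_cmd({"project_id": "1234", "memory": "2048"}),
#                cwd="/srv/terraform/workspace", check=True)
```

The corresponding Terraform config declares each name with a `variable` block, so the same script is reused for every project with different inputs.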

Refer to the Terraform docs for AWS.

If you have a better answer, please add a comment. Thank you!