Configuring Worker Services with flightcontrol.json
In addition to the Service Configuration attributes that are common to all services, the following attributes are specific to worker services.
Worker Service Attributes
The type for all worker services is worker, and should be specified like the following:
"type": "worker"
In addition, there are several other attributes that are specific to worker services:
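For orientation, a minimal worker service entry inside your services array might look like the following. This is only a sketch; the id, name, and resource values are illustrative, not recommendations.
{
  "id": "queue-worker",
  "name": "Queue Worker",
  "type": "worker",
  "cpu": 0.25,
  "memory": 0.5
}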
Target
target.type: 'fargate' | 'ecs-ec2'
- Example:
"target": {"type": "fargate"}
- Defaults to
fargate
- Fargate is a fully managed, pay-as-you-go compute engine that lets you focus on building apps without managing servers. It’s the easiest to use. The tradeoffs are that it does not support GPUs or custom instance types, and can be up to 1.5 times more expensive than ECS+EC2, depending on how well you optimize the EC2 compute.
- ECS+EC2 is AWS’s version of Kubernetes. It gives you advanced control over the cluster instance type and supports GPUs. It enables leveraging Reserved Instances for 50% or more savings. The tradeoffs are that 1) you have to manage the instance size to ensure there is enough CPU and memory for your app, 2) autoscaling is up to 2x slower if you don’t have empty EC2 instances on standby, and 3) you have to manage the config options to optimize compute across instances to minimize wasted resources.
If target is fargate: no other fields are required.
If target is ecs-ec2, the following fields are available (see the combined example after this list):
target.clusterInstanceSize: string
- Example:
"clusterInstanceSize": "t3.medium"
- Supported values: all non-ARM based instance sizes. See all instance options.
target.clusterMinInstances: number
- Example:
"clusterMinInstances": 1
- Minimum
1
- Fastest deploys are possible when this is at least 2x the number of worker instances running
- Faster autoscaling is enabled when this is higher than minimum worker instances so that machines are already running
target.clusterMaxInstances: number
- Example:
"clusterMaxInstances": 5
- Minimum same as
clusterMinInstances
- Must be high enough to accommodate 2x your app’s maxInstances count, otherwise deploys may fail
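As a sketch, an ecs-ec2 target combining the fields above might look like this (the instance size and counts are illustrative, not recommendations):
"target": {
  "type": "ecs-ec2",
  "clusterInstanceSize": "t3.medium",
  "clusterMinInstances": 2,
  "clusterMaxInstances": 5
}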
Worker CPU
cpu: number
- Example:
"cpu": 0.25
- This is the AWS vCPU unit for each service instance. It does not correspond to the number of cores; it’s an abstract unit of CPU power defined by Amazon.
- Supported values:
  - If target is fargate: 0.25, 0.5, 1, 2, 4, 8, 16
    - For more details on Fargate configuration, see AWS’s Fargate pricing page
  - If target is ecs-ec2: range 0.125 to 10, or the vCPU of clusterInstanceSize, whichever is less
Worker Memory
memory: number
- Example:
"memory": 1
- In gigabytes
- Supported values:
  - If target is fargate:
    - With cpu: 0.25, supported memory: 0.5, 1, 2
    - With cpu: 0.5, supported memory: 1...4 (intervals of 1)
    - With cpu: 1, supported memory: 2...8 (intervals of 1)
    - With cpu: 2, supported memory: 4...16 (intervals of 1)
    - With cpu: 4, supported memory: 8...30 (intervals of 1)
    - With cpu: 8, supported memory: 16...60 (intervals of 4)
    - With cpu: 16, supported memory: 32...120 (intervals of 8)
    - For more details on Fargate configuration, see AWS’s Fargate pricing page
  - If target is ecs-ec2: range 0.125 to 0.25 less than the memory of clusterInstanceSize (0.25 GB is reserved for the ECS agent)
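For example, a valid Fargate pairing from the list above (a sketch only):
"cpu": 0.5,
"memory": 2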
Worker GPU
gpu: integer
- Example:
"gpu": 1
- This is how many GPUs to expose to your container
- Supported values:
  - If target is fargate: 0 or undefined
  - If target is ecs-ec2: range 0 to the GPU count of clusterInstanceSize (requires a GPU-compatible instance size)
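Since GPUs require the ecs-ec2 target, a hedged sketch might look like this (g4dn.xlarge is only an illustration of a GPU-capable instance size):
"target": {
  "type": "ecs-ec2",
  "clusterInstanceSize": "g4dn.xlarge"
},
"gpu": 1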
Storage
storage: int
- Example:
"storage": 20
- Supported values: 20-200
- This is the ephemeral storage available to all containers running on your task. For example, this storage will be shared between your main application container and the sidecar containers if you are using any (for example, for Datadog).
- By default, AWS sets this value to 20GB.
- For more details, see AWS’s Fargate task storage page
Number of Instances
minInstances: int
- Example:
"minInstances": 1
- Supported values: 1+
- Optional with default:
1
- The minimum number of instances you want to be running. A min of 2 means there will be two containers running the same copy of code.
maxInstances: int
- Example:
"maxInstances": 2
- Supported values: 1+
- Optional with default:
1
- The maximum number of instances you want to be running. ECS will autoscale the number of instances based on your traffic up to this maximum. This number effectively sets a limit on AWS cost you may incur.
Autoscaling
autoscaling: object
- Example:
"autoscaling": {
"cpuThreshold": 60,
"memoryThreshold": 60,
"cooldownTimerSecs": 300,
"requestsPerTarget": 1000
},
- Optional
- Enables autoscaling for your service. For more, see our autoscaling guide.
For each service’s autoscaling configuration, you can configure the following attributes:
- cpuThreshold - The CPU threshold at which to scale up or down. For example, 60 would mean that if the average CPU usage across all instances is greater than 60%, then we will scale up. If it is less than 60%, then we will scale down.
- memoryThreshold - The memory threshold at which to scale up or down. For example, 60 would mean that if the average memory usage across all instances is greater than 60%, then we will scale up. If it is less than 60%, then we will scale down.
- cooldownTimerSecs - The cooldown timer in seconds. This is the amount of time to wait after scaling up or down before scaling again. For example, 300 would mean that we will wait 5 minutes after scaling up or down before scaling again.
- requestsPerTarget - The number of requests per target. This is the number of requests per minute that each instance can handle. For example, 1000 would mean that each instance can handle 1000 requests per minute.
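Putting the instance bounds and autoscaling together, a hedged sketch (the values are illustrative): ECS scales between minInstances and maxInstances based on these thresholds.
"minInstances": 1,
"maxInstances": 4,
"autoscaling": {
  "cpuThreshold": 60,
  "memoryThreshold": 60,
  "cooldownTimerSecs": 300
}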
Container Insights
containerInsights: boolean
- Example:
"containerInsights": true
- Optional with default:
false
- Enables AWS Container Insights for your service. This will send metrics to CloudWatch for your service. For more, see our Container Insights guide.
Container Image
containerImage: object
When using "buildType": "fromService"
fromService: string
- Example:
"containerImage": {"fromService": "service-id"}
- The ID of the service that will be used as the source for the container image. The service specified here needs to be built by Flightcontrol.
When using "buildType": "fromRepository"
registryId: string
- Example:
"registryId": "ecr-9l03731"
- Registry ID. You can find this on the Registries page in our dashboard
repository: string
- Example:
"repository": "node:18-slim"
- This is the URI of the image repository you wish to access
tag?: string
- Example:
"tag": "latest"
- Optional
- This is the tag of the image from the repository that you would like to use
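A hedged sketch combining the fromRepository fields above, assuming they nest under containerImage the same way fromService does (the registry ID and image are placeholders):
"buildType": "fromRepository",
"containerImage": {
  "registryId": "ecr-9l03731",
  "repository": "node:18-slim",
  "tag": "latest"
}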
Runtime-only Environment variables
includeEnvVariablesInBuild: boolean
- Example:
"includeEnvVariablesInBuild": false
- Optional with default:
true
- Enables runtime-only environment variables - see the Configuring Environment Variables page for more details.
Docker Labels
dockerLabels: Record<string, string>
- Example:
"dockerLabels": {"com.example.vendor": "ACME"}
- Optional
- Will apply the set labels to the container
Version History Count
versionHistoryCount: number
- Example:
"versionHistoryCount": 15
- Optional with default:
10
- How many previous container images to keep in ECR. This determines how far back you can roll back. ECR storage is $0.10 per GB / month, so this configuration is a balance between cost and rollback history.
Integrations
integrations: object
Under the integrations key, you can configure integrations with third-party services. At this time, the only supported integration is with Sentry.
Upload Sentry Source Maps
uploadSentrySourceMap: boolean
- Example:
"integrations": { "uploadSentrySourceMap": true }
- Optional with default:
false
- Enables uploading source maps to Sentry. This is useful for debugging errors in production. For more, see our Sentry guide.
Sidecars
sidecars: array
- Example:
"sidecars": [
{
"name": "open-telemetry-collector",
"image": "otel/opentelemetry-collector-contrib:0.83.0",
"cpuAllotment": 0.1,
"memoryAllotment": 0.25,
"enableNetworking": true,
"ports": [4318]
}
]
- Optional with default:
[]
- Enables sidecars for your service. For more, see our sidecars guide.
For each individual sidecar, you can configure the following attributes:
- name - The name of the sidecar container.
- image - The URL to the image for the sidecar container.
- cpuAllotment - The absolute amount of CPU to allocate to the sidecar container, in vCPU units. For example, 0.25 would be 1/4 of a vCPU.
- memoryAllotment - The absolute amount of memory to allocate to the sidecar container, in GB units. For example, 0.5 would be 1/2 of a GB.
- enableNetworking - Whether to enable networking for the sidecar container. Defaults to true.
- ports - An array of ports to expose from the sidecar container. Defaults to [].
- envVariables - An object of environment variables to set in the sidecar container. Defaults to {}. You can use the same rules for environment variables as you would for your main container.
- dockerLabels - An object of Docker labels to set in the sidecar container. Defaults to {}.
Advanced Logging
See the Logging page for provider templates.
logging: object
- Optional
- Adds advanced logging options for ECS, such as outputting logs to a third-party service
ecsLogsMetadataEnabled: boolean
- Optional with default:
false
- Adds the ECS cluster, task ARN, and task definition to the stdout/stderr container logs
cloudwatchLogsRetentionDays: integer
- Optional with default:
1
- Must be 1 or more
- Configures the number of days to retain logs in CloudWatch
envVariables: EnvVariables
- Optional
- Environment variables to be set for the logging container
- Can use plain text, from Parameter Store, or from Secrets Manager
firelensOptions: FirelensOptions[]
- Optional
- These are the output plugins for Firelens
- CloudWatch is always enabled
FirelensOptions: object
name: string
- Required
- Example:
datadog
- This is the name of the output plugin
match: string
- Required
- Example:
*
- This is the pattern to match from the log stream
options: Record<string, string | number | string[]>
- Required
- Example:
{"api_key": "1234", "headers": ["Authorization Bearer $SOURCE_TOKEN", "Content-Type application/json"]}
- These are the options for the output plugin which will be dependent on the plugin
Example:
{
"logging":
{
"ecsLogsMetadataEnabled": true,
"cloudwatchLogsRetentionDays": 7,
"envVariables":
{
"key": "value"
},
"firelensOptions": [
{
"name": "datadog",
"match": "*",
"options":
{
"api_key": "1234",
"dd_tags: project:fluentbit"
}
}
]
}
}
Extra options for Nixpacks & Legacy Node.js
basePath?: string (only supported when buildType: nixpacks)
- Allows you to specify in which folder the commands should run
- Example:
"basePath": "./apps"
- Optional, defaults to "./"
installCommand: string
- Example:
"installCommand": "./install.sh"
- Optional, intelligent default based on your language and framework detected at the
basePath
- What we use to install dependencies for your build
buildCommand: string
- Example:
"buildCommand": "blitz build"
- Optional, intelligent default based on your language and framework detected at the
basePath
- What we use to build your app
postBuildCommand: string
- Example:
"postBuildCommand": "./postBuildCommand.sh"
- Optional, Empty by Default
- Used as a build hook to run any operation after your build is complete
preDeployCommand: Array<string>
- Example:
"preDeployCommand": ["bundle", "exec", "rails", "db:prepare"],
- Optional
- A command that runs after successful build and before starting the deploy (more information).
- If configured, a dedicated container is started to run the command and shuts down on completion.
- The command must be split into array parts because this is used to override the Docker CMD, and if passed as a single string runc counts it as a single command instead of a command + arguments.
- Note: using this for database migrations will add 2-3 minutes to your deploy time because of the time it takes this temporary container to boot and run.
startCommand: string
- Example:
"startCommand": "blitz start"
- Optional, intelligent default based on your language and framework detected at the
basePath
- What we use to start your app
postDeployCommand: Array<string>
- Example:
"postDeployCommand": ["node", "script.js"]
- Optional
- A command that runs after successful deploy (more information).
- If configured, a dedicated container is started to run the command and shuts down on completion.
- The command must be split into array parts because this is used to override the Docker CMD, and if passed as a single string runc counts it as a single command instead of a command + arguments.
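A hedged sketch combining several of these Nixpacks build options for a worker; the commands themselves are placeholders for whatever your app actually needs:
"buildType": "nixpacks",
"basePath": "./apps/worker",
"installCommand": "npm ci",
"buildCommand": "npm run build",
"preDeployCommand": ["npm", "run", "db:migrate"],
"startCommand": "npm run start:worker"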
Extra options for custom Dockerfile only
dockerfilePath: string
- Example:
"dockerfilePath": "packages/web/Dockerfile"
- Relative path to the Dockerfile from your repo root
- It’s recommended to use ENTRYPOINT instead of CMD for your start command
- You can authenticate with Docker Hub by adding your Docker Hub credentials as DOCKER_USERNAME and DOCKER_PASSWORD environment variables. If these env variables are present, we’ll run docker login with them. This will prevent Docker Hub rate limit issues.
dockerContext: string
- Example:
"dockerContext": "packages/web"
- Optional with default: "." (repo root)
- Relative path to the docker context from the repo root
- It’s recommended to use ENTRYPOINT instead of CMD for your start command
injectEnvVariablesInDockerfile: boolean
- Example:
"injectEnvVariablesInDockerfile": false
- Optional with default:
true
- Whether to inject environment variables automatically into Dockerfile or not
- It’s recommended to use Docker build secrets to control how environment variables are used during the build; check the guide here
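A hedged sketch of the Dockerfile-specific options together (the paths are placeholders for your own repo layout):
"dockerfilePath": "packages/worker/Dockerfile",
"dockerContext": "packages/worker",
"injectEnvVariablesInDockerfile": false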