PortMapping

Port mappings allow containers to access ports on the host container instance to send or receive traffic. Port mappings are specified as part of the container definition.

If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort. The hostPort must either be left blank or set to the same value as the containerPort.

Most fields of this parameter (containerPort, hostPort, protocol) map to PortBindings in the Create a container section of the Docker Remote API and the --publish option to docker run. If the network mode of a task definition is set to host, host ports must either be undefined or match the container port in the port mapping.

You can't expose the same container port for multiple protocols. If you attempt this, an error is returned.

After a task reaches the RUNNING status, manual and automatic host and container port assignments are visible in the networkBindings section of DescribeTasks API responses.
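For illustration, a single port mapping in a container definition might look like the following sketch (field names per the ECS task-definition schema; the port values are made up):

```json
"portMappings": [
    {
        "containerPort": 80,
        "hostPort": 8080,
        "protocol": "tcp"
    }
]
```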

appProtocol

The application protocol that's used for the port mapping. This parameter only applies to Service Connect. We recommend that you set this parameter to be consistent with the protocol that your application uses. If you set this parameter, Amazon ECS adds protocol-specific connection handling to the Service Connect proxy, and protocol-specific telemetry in the Amazon ECS console and CloudWatch.

If you don't set a value for this parameter, then TCP is used. However, Amazon ECS doesn't add protocol-specific telemetry for TCP.

appProtocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.

Tasks that run in a namespace can use short names to connect to services in the namespace. Tasks can connect to services across all of the clusters in the namespace. Tasks connect through a managed proxy container that collects logs and metrics for increased visibility. Only the tasks that Amazon ECS services create are supported with Service Connect. For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.

Type: String

Valid Values: http | http2 | grpc

Required: No
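A Service Connect port mapping that sets the application protocol might look like this sketch (the name and port values are illustrative assumptions):

```json
"portMappings": [
    {
        "name": "api",
        "containerPort": 8080,
        "protocol": "tcp",
        "appProtocol": "http"
    }
]
```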

containerPort

The port number on the container that's bound to the user-specified or automatically assigned host port.

If you use containers in a task with the awsvpc or host network mode, specify the exposed ports using containerPort.

If you use containers in a task with the bridge network mode and you specify a container port and not a host port, your container automatically receives a host port in the ephemeral port range. For more information, see hostPort. Port mappings that are automatically assigned in this way do not count toward the 100 reserved ports limit of a container instance.

Type: Integer

containerPortRange

The port number range on the container that's bound to the dynamically mapped host port range.

The following rules apply when you specify a containerPortRange:

  • You must use either the bridge network mode or the awsvpc network mode.
  • This parameter is available for both the EC2 and AWS Fargate launch types.
  • This parameter is available for both the Linux and Windows operating systems.
  • The container instance must have at least version 1.67.0 of the container agent and at least version 1.67.0-1 of the ecs-init package.
  • You can specify a maximum of 100 port ranges per container.
  • You do not specify a hostPortRange. The value of the hostPortRange is set as follows:
    • For containers in a task with the awsvpc network mode, the hostPortRange is set to the same value as the containerPortRange. This is a static mapping strategy.
    • For containers in a task with the bridge network mode, the Amazon ECS agent finds open host ports from the default ephemeral range and passes them to Docker to bind to the container ports.
  • The containerPortRange valid values are between 1 and 65535.
  • A port can only be included in one port mapping per container.
  • You cannot specify overlapping port ranges.
  • The first port in the range must be less than the last port in the range.

Docker recommends that you turn off the docker-proxy in the Docker daemon config file when you have a large number of ports.

For more information, see Issue #11185 on the GitHub website.

For information about how to turn off the docker-proxy in the Docker daemon config file, see Docker daemon in the Amazon ECS Developer Guide.

You can call DescribeTasks to view the hostPortRange, which lists the host ports that are bound to the container ports.
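A port mapping that uses a range instead of a single port might look like this sketch (the range values are illustrative assumptions):

```json
"portMappings": [
    {
        "containerPortRange": "4000-4004",
        "protocol": "tcp"
    }
]
```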

hostPort

The port number on the container instance to reserve for your container.

If you specify a containerPortRange, leave this field empty and the value of the hostPort is set as follows:

For containers in a task with the awsvpc network mode, the hostPort is set to the same value as the containerPort. This is a static mapping strategy.

For containers in a task with the bridge network mode, the Amazon ECS agent finds open ports on the host and automatically binds them to the container ports. This is a dynamic mapping strategy.

If you use containers in a task with the awsvpc or host network mode, the hostPort can either be left blank or set to the same value as the containerPort.

If you use containers in a task with the bridge network mode, you can specify a non-reserved host port for your container port mapping, or you can omit the hostPort (or set it to 0) while specifying a containerPort. Your container then automatically receives a port in the ephemeral port range for your container instance operating system and Docker version.

The default ephemeral port range for Docker version 1.6.0 and later is listed on the instance under /proc/sys/net/ipv4/ip_local_port_range . If this kernel parameter is unavailable, the default ephemeral port range from 49153 through 65535 (Linux) or 49152 through 65535 (Windows) is used. Do not attempt to specify a host port in the ephemeral port range as these are reserved for automatic assignment. In general, ports below 32768 are outside of the ephemeral port range.
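You can check the ephemeral range in effect on a given Linux host directly:

```shell
# Print the kernel's ephemeral port range (two numbers: low and high)
cat /proc/sys/net/ipv4/ip_local_port_range
```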

The default reserved ports are 22 for SSH, the Docker ports 2375 and 2376, and the Amazon ECS container agent ports 51678-51680. Any host port that was previously specified in a running task is also reserved while the task is running. That is, after a task stops, the host port is released. The current reserved ports are displayed in the remainingResources of DescribeContainerInstances output. A container instance can have up to 100 reserved ports at a time. This number includes the default reserved ports. Automatically assigned ports aren't included in the 100 reserved ports quota.

name

The name that's used for the port mapping. This parameter only applies to Service Connect. This parameter is the name that you use in the serviceConnectConfiguration of a service. The name can include up to 64 characters. The characters can include lowercase letters, numbers, underscores (_), and hyphens (-). The name can't start with a hyphen.

For more information, see Service Connect in the Amazon Elastic Container Service Developer Guide.

protocol

The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp. protocol is immutable in a Service Connect service. Updating this field requires a service deletion and redeployment.

Valid Values: tcp | udp

For more information about using this API in one of the language-specific AWS SDKs, see the following:

  • AWS SDK for C++
  • AWS SDK for Go
  • AWS SDK for Java V2
  • AWS SDK for Ruby V3


DEV Community


Ryan Dsouza

Posted on Sep 15, 2019 • Updated on Sep 20, 2019

Deploy a Node app to AWS ECS with Dynamic Port mapping

Note: There are a couple of pre-requisites required for this to work.

  • AWS CLI to push your Docker app to the AWS repository. Install it and set up your credentials using the aws configure command.
  • Docker Community Edition for building your app image.
  • I have used Node, so node and npm are required, but you can use any backend of your choice like Python or Go and build your Docker image accordingly.

I personally love Docker. It's a beautiful way to deploy your app to production. And the best part being you can test your production app in the same environment on your local machine as well!

This picture sums it all up :)

The birth of Docker

Today I will show you how to deploy your Node app bundled in a Docker image via AWS ECS (Elastic Container Service).

Note: I recommend that you try this on a paid AWS account that you are currently using in production or in your work environment. But if you are on a free-tier, please just read this tutorial as you go because creating these services will cost you money!!!

Now that I have warned you, let's login into the AWS console and select ECS.

Select ECS from the AWS service list

This will take you to the following page. Do watch the introductory video, it's awesome!

The AWS ECS home page

We are now interested in the list on the left. First of all, we need to create a repository. A repository in AWS is similar to the one in Docker Hub where we have all sorts of images like MongoDB, Node, Python etc. with their specific versions. But here, we will build a custom Docker image of our Node app.

Click on Repositories and it will take you to the ECR (Elastic Container Registry) page where you can store all your custom Docker images.

Click on Create repository at the top right and you will then get this page.

Create a repository in ECR

In the input, add a name of your choice and then click on Create repository. Now you have a repository of your own and you can push your Docker image containing your app to this repository. I have created a repository and named it node-simple.

My node app repository on ECR

Notice the URI field. That's an important field and we will require it when we push our Docker image to ECR from our local machine.

Click on the repository and it will take you to the images list. Here you can view your app image once we push it to ECR.

Now let's move on to creating our simple Node app.

Create a new folder, open that folder in your terminal and then run npm init -y to create a package.json file. Then create a file named index.js and add the following contents to it.

We have spun up a simple Express server with a / GET route that returns some JSON.

Now run npm i express to install the express package.

Lastly, add a start script in the scripts field of your package.json file.
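The scripts field would then look something like this (a sketch; the rest of package.json is omitted):

```json
"scripts": {
    "start": "node index.js"
}
```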

Now, run npm start in your terminal to see the app running on http://localhost:3000/ by default if you have not specified a PORT in your environment. You will see the json message API is functional returned in the browser.

Let's move on to creating our Dockerfile . This is essential for building our image and pushing it to ECR. Create a file named Dockerfile in our folder and add the following content.
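Based on the description below, the Dockerfile might look roughly like the following (the base-image tag mhart/alpine-node:10 is an assumption; the original snippet isn't reproduced here):

```dockerfile
# Small Alpine-based Node image for a lean final image
FROM mhart/alpine-node:10

# All subsequent paths are relative to /app
WORKDIR /app

# Copy the manifests first so the dependency layer is cached between builds
COPY package.json package-lock.json ./
RUN npm ci

# Copy the application code
COPY index.js ./

# Run the start script from package.json
CMD ["npm", "start"]
```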

We are using alpine-node for a smaller image size. After setting our working directory to /app in the Docker image, we are copying our package.json as well as package-lock.json files for deterministic builds. Then we run the npm ci command to ensure the same package versions are installed as in our lockfile. We then copy the index.js file over to our image and lastly, we add our start command as the main command to be run in our image.

Go back to the AWS console and click on the repository you have created. You will find a button on the right named View push commands .

Your node app repository

Click that and you will get a list of commands to be run on your machine to push the image to AWS ECR in the following manner.

Commands for pushing your app to ECR

Copy the commands and run them one by one in your node app folder. I'm in the us-west-2 region but you can use any region that supports ECS (which are mostly all of them btw).

These commands, when run in order:

  • Logs you into the AWS service with the credentials you have provided.
  • Builds your app into a Docker image.
  • Tags your app with respect to the repository you have created.
  • Pushes your image to the repository.

After successfully completing the above steps, you will be able to see your Docker image in your repository like this.

The successfully pushed Docker image in your repository

That completes creating your image. Now let's move on to creating a cluster for our app.

Select Clusters under Amazon ECS and you will be redirected to the clusters list where we don't have any clusters right now. Let's click on the Create Cluster button and then select the EC2 Linux + Networking template and click on Next step .

In this section, give a name to your cluster and in the Instance Configuration section, select the following values.

Instance configuration for your cluster

Note: You need to select a Key Pair if you want to SSH into your instances. It's useful for debugging purposes.

Leave the other options as they are. It will create a VPC for you and assign an IAM role to your EC2 instances as well, so that ECS can connect to your instances and run your Docker images.

You will see something like this. I have named my cluster node-simple .

Cluster creation in progress

After it's completed, click on View cluster and it will take you to your created cluster's page, where its status will be shown as Active.

You can go to EC2 from your AWS services and you will be able to see that two t2.micro instances have been created. You can SSH into them as well with the public IP of those instances.

EC2 instances created by our ECS cluster

Go back to ECS, and on the left, you will see something called Task Definitions . Click that and you will be taken to a page where you can create a task definition for your cluster.

Task definitions page under Amazon ECS

In simple terms, a task definition is a connection between your ECS cluster and the Docker image residing in ECR. Currently we do not have any task definition so let's create one.

Click on Create new Task Definition and you will be given two options, Fargate and EC2 . Select EC2 and proceed to the Next step.

Enter a name for your task definition, leave everything as default until you come to this section.

The Elastic Inference section in Task definition creation

This section helps you specify all the necessary values that your Docker image requires. Click on Add Container and you will see something like this.

Adding a container to your Task Definition

Give a name to your container and in the Image field, copy the URI of the Docker image that you had pushed to ECR and paste it here.

In the port mappings field, add 80 as the Container port and 0 as the Host port. Now you must be thinking: why are we passing 0 as the Host port?

It's because we need dynamic ports on our EC2 instance to be mapped to port 80 of our Docker container, so that multiple containers can run on the same EC2 instance. 0 means any random port from 32768 to 65535 will be assigned on the EC2 instance. These are also known as Ephemeral Ports.

Also, we have specified PORT 80 for our Docker container, so we have to tell our Node server to run on 80 somehow. How could we achieve that... You're right, using Environment Variables!

Scroll below and you will find the Environment section. Add your environment variable in the following manner.

Specify the PORT 80 in the environment section

Node will read this PORT using the process.env.PORT variable we have specified in our code.

Leave everything as is and click on Add. You will see your container added along with the ECR image URI that you have passed. Leave the rest of the fields as they are and click on Create. You will be redirected to the task definition page and you will see the task definition along with its version and all the options we had provided in the previous section.

Now let's add a load balancer that will balance the traffic between our two EC2 instances.

Go to the EC2 service and select Load Balancers from the left section under LOAD BALANCING . It will take you to the Load balancers listing. Right now, we don't have any. So let's create one.

Click on Create Load Balancer and you will get an option to select the load balancer type. Select Application Load Balancer (ALB), as it is highly advanced and supports dynamic port mapping on our EC2 instances.

After clicking on Create you will be presented with the load balancer configuration. Give your ALB a name, and leave everything as it is except the VPC. Select the VPC the ECS cluster created for you instead of the default, else the ALB will not work properly. Check all the Availability Zones, as our instances will be spun up in all of those for High Availability.

Configure the basic settings of your Load Balancer

Click Next . You will get a warning that we are using an insecure listener i.e. PORT 80. In production, use an SSL certificate and configure your ALB to listen on 443 (HTTPS) as well. For now, let's ignore this warning and click Next .

Here, you have to configure a Security Group (SG) for your ALB. Let's create a new SG and open the HTTP port 80 to the world as the users will be using the ALB route for accessing our Node API. Add the HTTP rule for our ALB.

Open Port 80 of the Load Balancer for our users

Click Next . This is an important part. Here, we need to create a target group to specify the health check route and the PORT the ALB will be routing traffic on to our EC2 instances.

Create a Target Group for our Load Balancer

Leave everything as is and click Next. You will be taken to the Register Targets page to register our instances in the Target Group we created on the previous page.

Do not register any targets here, as that will be done automatically in the final step when we are creating our service.

Click Next , review the parameters that you have added and then click on Create . This will create the load balancer and give it a DNS which we can call our Node API from.

The created load balancer with its DNS endpoint

Next, we need the EC2 instances to communicate with the ALB so that it can perform health checks and route the traffic to our EC2 instances. For this, we need to add a rule in our EC2 security group.

Click on Security Groups in the left menu under NETWORK & SECURITY . You will find two security groups. One for the EC2 instances and one for the Load Balancer. Click on the EC2 security group which was created by our cluster.

The EC2 and Load balancer security groups

A menu will open below. Select the Inbound tab and click on Edit . This will open a dialog box for editing our security rules. We will delete the rule in place and add our own. Select Custom TCP rule from the dropdown and in the port range add 32768-65535 as our port range. In the source, type sg and you will get a dropdown of the security groups present. Select the load balancer SG and add a description of your choice.

The rule will look something like this.

The inbound rule for our EC2 instances

Note: Also add the SSH port 22 rule if you want to SSH into the EC2 instance.

Click on Save. This completes the Load Balancer setup and takes us to the final part: creating a service.

Go back to ECS, select your cluster, and you will see that the very first tab open is the Services tab. Click on Create.

Select EC2 as the launch type and give your service a name. You will notice that the task definition is selected automatically. Set the Number of Tasks to 2. This will launch two instances of our Node app image across our EC2 instances. Leave the rest of the values as is and click on Next step.

This step is where we configure our Load Balancer. Select Application Load Balancer, as that is the type we have created. You will notice that our LB is automatically selected in the Load Balancer Name. Below that, you will find the container to load balance on.

Container to be added for load balancing

You will see that our container name and the port mapping is already selected. Click on Add to load balancer . A new section will be opened.

In the Production listener port , select 80:HTTP from the dropdown. And in the Target group name , select the target group that we had created while creating the load balancer.

On selecting this, it will load all the values that we had added in the target group while creating our ALB.

In the final section, uncheck the Enable service discovery integration as it's not needed. Click on Next step .

You will be taken to the auto scaling configuration. Do not auto scale now, let that be as an experiment for you after you complete this :)

Click on Next step and you will be taken to the Review of your service that will spin your node app image on the EC2 instances.

Finally, click on Create Service. This will create your service and run the task definitions that we have created. After it's completed, click on View Service. You will see two task definitions in the PENDING state.

The created service spins off two tasks

After some time when you refresh, the status will change to RUNNING . Click on the Events tab. You will get a log of the service adding the tasks to our EC2 instances.

The service logs after spinning the tasks

Once you get something like this, where the service has reached a ready state, you're good to go!

Check the Target Groups in the LOAD BALANCING section of the EC2 service. You will see that the service we have created has automatically registered two targets in our ALB target group and they are healthy.

The EC2 instances registered in the target group of our ALB

Check out the ports, they have been randomly assigned, so that's our Dynamic port mapping in action!

Last but not least, copy the DNS name of your ALB and paste it in the browser, and you will see that your node app is running and you get the API is functional message. Yay!!!

This is how we can deploy our application as a Docker Image via AWS ECS.

Thank you for reading.


Dynamic Port Mapping in ECS with Application Load Balancer


AWS recently launched a new Application Load Balancer (ALB) that supports Dynamic Port Mapping with ECS. It allows you to run multiple containers of a service on a single server on dynamic ports, which the ALB automatically detects and reconfigures itself for.

Amazon EC2 Container Service (ECS) is a managed container service that allows you to run your application in Docker containers and manage a cluster of EC2 instances. The ALB works as a load balancer and distributes traffic across multiple running containers. It continuously monitors the health of containers; if any container fails its health check, ECS terminates that container and starts a new one to maintain the desired number of containers.

While working on a project, we were using an Elastic Load Balancer with ECS for container health checks and to distribute traffic across containers. In the task definition, we defined the host port on which a container accepts requests, and the same port was used as the instance port in the ELB. For example, if you have a service with two containers, you need at least two ECS container instances, because multiple containers can't run on the same port on the same server; each container is hosted on a separate server.

Below are the steps we took to create an ALB and configure an ECS Service using that ALB:

1. Create an ALB and select Application Load Balancer as the load balancer type.

Elastic Load Balancing

2. Give a name to your ALB, select a scheme (private or public), select a port on which the ALB will accept requests, select the VPC and subnets, and click on Next to configure the security group of your ALB.

Configure Load Balancer

3. Select any existing security group or create a new security group and define port and source to allow traffic on your ALB.

4. Create a target group that will be attached to the ALB and route traffic from the ALB to your container instances. In the target group, you define the port and protocol that the ALB uses to route traffic to your targets and to perform health checks on your targets (instances).


5. Now your ALB is ready and accepts requests on port 80. Next, create a task and a service, and add the ALB for load balancing between multiple containers.

6. Create a new task, or a new version of the existing task, and set the host port to 0. ECS will then dynamically assign any available port when it runs a Docker container. You can use any public Docker image or your own Docker image.

Edit Container

7. Create a new service from the task. In the 'Configure ELB' section, select Application Load Balancer as the ELB type, select your ALB and target group, and create the service. If your ECS cluster has a single ECS instance and the task count is two, it will start two new containers on your instance on two different dynamic ports.

Configure ELB section

8. Check the targets in your target group; you will see that the same instance is registered twice with two different ports.

Configure ELB section

9. You can check the same on your server using the 'docker ps' command. Two different containers are running on two random ports from the same Docker image that you mentioned in your task definition.

ALB configuration

So if you are running multiple containers of a single service, you don't need multiple servers for them. The ALB lets you maximize the usage of your servers and offers a high-performance load balancing option. It gives you the flexibility of running multiple containers of a service on a single server by using random available ports.

The same helps with deployment too. You don't need to run extra servers for deployment: the new version is deployed on the same server on a different port (if resources, cores and RAM, allow), and once the new containers are in service under the ALB target group, the old containers are terminated.

Mapping network events

Network events capture the details of one device communicating with another. The initiator is referred to as the source, and the recipient as the destination. Depending on the data source, a network event can contain details of addresses, protocols, headers, and device roles.

This guide describes the different field sets available for network-related events in ECS and provides direction on the ECS best practices for mapping to them.

Source and destination baseline

When an event contains details about the sending and receiving hosts, the baseline for capturing these values will be the source and destination fields.

Some events may also indicate each host’s role in the exchange: client or server. When this information is available, the client and server fields should be used in addition to the source and destination fields. The fields and values mapped under source / destination should be copied under client / server .

Network event mapping example

Below is a DNS network event. The source device (192.168.86.222) makes a DNS query, acting as the client, and the DNS server is the destination (192.168.86.1).

Note this event contains additional details that would populate additional fields (such as the DNS Fields ) if this was a complete mapping example. These additional fields are omitted here to focus on the network details.

Source and destination fields

First, the source.* and destination.* field sets are populated:
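The populated fields would look something like this (a sketch; only the IPs from the example and the well-known DNS port are shown, since the original event isn't reproduced here):

```json
{
    "source": {
        "ip": "192.168.86.222"
    },
    "destination": {
        "ip": "192.168.86.1",
        "port": 53
    }
}
```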

Client and server fields

Looking back at the original event, it shows the source device is the DNS client and the destination device is the DNS server. The values mapped under source and destination are copied and mapped under client and server, respectively:
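After copying, the event carries both pairs of field sets (again a sketch showing only the IPs from the example):

```json
{
    "source": { "ip": "192.168.86.222" },
    "destination": { "ip": "192.168.86.1" },
    "client": { "ip": "192.168.86.222" },
    "server": { "ip": "192.168.86.1" }
}
```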

Mapping both pairs of field sets gives query visibility of the same network transaction in two ways.

  • source.ip:192.168.86.222 returns all events sourced from 192.168.86.222, regardless of its role in a transaction
  • client.ip:192.168.86.222 returns all events with host 192.168.86.222 acting as a client

The same applies for the destination and server fields:

  • destination.ip:192.168.86.1 returns all events destined to 192.168.86.1
  • server.ip:192.168.86.1 returns all events with 192.168.86.1 acting as the server

It’s important to note that while the values for the source and destination fields may reverse between events in a single network transaction, the values for client and server typically will not. The following two tables demonstrate how two DNS transactions involving two clients and one server would map to source.ip / destination.ip vs. client.ip / server.ip :

Table 1. Source/Destination

Table 2. Client/Server
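The invariant behind the two tables can be sketched with a single DNS query/response pair. Note how source and destination flip between the two events while client and server stay fixed (illustrative values from the example above):

```python
CLIENT_IP, SERVER_IP = "192.168.86.222", "192.168.86.1"

# Query: the client initiates, so it is the source.
query = {
    "source": {"ip": CLIENT_IP}, "destination": {"ip": SERVER_IP},
    "client": {"ip": CLIENT_IP}, "server": {"ip": SERVER_IP},
}

# Response: source/destination reverse; client/server do not.
response = {
    "source": {"ip": SERVER_IP}, "destination": {"ip": CLIENT_IP},
    "client": {"ip": CLIENT_IP}, "server": {"ip": SERVER_IP},
}

assert query["client"] == response["client"]      # stable across the transaction
assert query["source"] == response["destination"]  # reversed across the transaction
```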

Related fields

The related.ip field captures all the IPs present in the event in a single array:

The related fields are meant to facilitate pivoting. Since these IP addresses can appear in many different fields ( source.ip , destination.ip , client.ip , server.ip , etc.), you can search for an IP with a single query, no matter which field it appears in, e.g. related.ip:192.168.86.222 .
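One way to populate related.ip is to sweep the usual IP-bearing fields and deduplicate. This is a sketch of the idea, not official Elastic tooling:

```python
def collect_related_ips(event):
    """Gather every IP present under the common ECS IP fields into one sorted list."""
    ips = set()
    for field in ("source", "destination", "client", "server"):
        ip = event.get(field, {}).get("ip")
        if ip:
            ips.add(ip)
    return sorted(ips)

event = {
    "source": {"ip": "192.168.86.222"},
    "destination": {"ip": "192.168.86.1"},
    "client": {"ip": "192.168.86.222"},
    "server": {"ip": "192.168.86.1"},
}
# Duplicates collapse: four fields, two distinct addresses.
event["related"] = {"ip": collect_related_ips(event)}
```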

Network events are not limited to related.ip . If hostnames or other host identifiers are present in the event, related.hosts should be populated too.

Categorization using event fields

When considering the event categorization fields , the category and type fields are populated with the allowed values that best classify the source network event.

Most event.category / event.type ECS pairings are complete on their own. However, the pairing of event.category:network and event.type:protocol is an exception. When these two field/value pairs are both used to categorize an event, the network.protocol field should also be populated:
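For the DNS example, the categorization fields would take this shape (a sketch; the transport value "udp" is an assumption about the query, not stated in the event above):

```python
# event.category "network" paired with event.type "protocol" requires
# network.protocol to be populated as well.
event = {
    "event": {"category": ["network"], "type": ["protocol"]},
    "network": {"protocol": "dns", "transport": "udp"},  # transport assumed
}
```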

Result

Putting together everything covered so far, we have the final ECS-mapped event:


AWS re:Post

Specify port mapping for container in ECS

The container has a service which is accessed locally through localhost:8080. Now I want to deploy this container on AWS using ECS, but I want the external client to access the service using HTTPS. There is a 'host port' and a 'container port'. What should the port values be, and where should I set them? There is the service security group setting, the load balancer listener setting, and the task itself containing the container.

I am using FARGATE and load balancing


Here is how to deploy your container on AWS ECS using Fargate and allow external clients to access your service securely over HTTPS:

  • Task definition : In your task definition, set the containerPort to 8080 , as this is the port your service listens on inside the container. You don't need to specify a hostPort when using Fargate; with the awsvpc network mode it defaults to the same value as containerPort . Here's an example container definition:
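A hedged sketch of such a container definition, written as a Python dict with the same shape as the task-definition JSON (the name and image are placeholders, not values from the question):

```python
import json

container_definition = {
    "name": "my-app",  # hypothetical container name
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",  # placeholder
    "essential": True,
    "portMappings": [
        # On Fargate (awsvpc), hostPort may be omitted; it defaults to containerPort.
        {"containerPort": 8080, "protocol": "tcp"}
    ],
}
print(json.dumps(container_definition, indent=2))
```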

Load balancer : Create an Application Load Balancer (ALB) to handle incoming HTTPS traffic and forward it to your tasks. Configure a listener on the ALB for HTTPS traffic on port 443 , and attach an SSL certificate to the listener, either one you import or one issued by AWS Certificate Manager (ACM).

Target group : Create a target group with the target type set to ip and the protocol set to HTTP , and specify port 8080 . The ALB will forward incoming HTTPS traffic to this target group, which then routes it to your tasks.

Service security group : Create a security group for your ECS service that allows inbound traffic on port 8080 from the ALB's security group. This ensures that only traffic from the ALB can reach your tasks.

Load balancer security group : Update the security group associated with your ALB to allow inbound traffic on port 443 (HTTPS) from the internet or specific IP addresses, depending on your requirements.
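The two security-group rules described above can be sketched in the shape the EC2 AuthorizeSecurityGroupIngress API uses (the group id is a placeholder):

```python
# ALB security group: accept HTTPS from the internet.
alb_ingress = {
    "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
    "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
}

# Service security group: accept port 8080 only from the ALB's security group.
service_ingress = {
    "IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
    "UserIdGroupPairs": [{"GroupId": "sg-0123456789abcdef0"}],  # placeholder ALB SG id
}
```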

ECS service : When creating or updating your ECS service, configure it to use the Fargate launch type and associate it with the ALB and target group created earlier. In the networkConfiguration section, specify the subnets and security group you created for your service.

With this setup, external clients access your service over HTTPS via the ALB, which forwards the traffic to your tasks running on Fargate. The ALB communicates with the tasks over HTTP on port 8080.

Let me know if you need further help.


Maybe my difficulty is that I am using the AWS ECS console, and a lot of the things you are suggesting are not possible there. After creating a cluster, I create a service. I pick the 'Launch Type' option, stick with the 'Service' option, and select my task. I give the service a name and turn off rollback on deployment failure. Now starts the fun.

In Networking I pick a VPC and delete the private subnets, keeping one public subnet per zone, since the ALB will croak if you have more than one subnet per zone. In the security group I have ONLY the option to pick the name and inbound rules. For the inbound rules I pick Custom TCP with port 8080 and a source of anywhere.

Now to the load balancer. I pick ALB and add a listener. I set it to HTTPS and port 8080. If I set the port to 443 it will fail.

The biggest problem is the target group for this ALB. All I can do is set its name and health check path, and choose HTTP or HTTPS for the group and the health check. Nothing else, which is a major issue since the health check will fail because the server in the container returns 401, not 200. There is no way to configure the 'advanced' settings on the target group until you create the service. Then you have to race to the CloudFormation console, select resources, find your target group, get to the health check, select edit, select advanced, and add 401 to the 'success' condition. You have to do this fast enough that the create-service step does not run a health check and fail your creation.

But the main point is that several of the options you present are not possible while creating the service. There is NO option to configure a security group for the ALB - only a target group. And the target group options are far fewer than needed.

That being said, I do not know why it works with an ALB listener set to HTTPS 8080 and a service security group set to Custom TCP 8080. Any other port setting would fail, except an ALB listener of HTTP 8080, but that would be insecure.

Let's Learn DevOps

ECS cluster with dynamic port mappings using Terraform

Published Mon, Dec 24, 2018 by Mark Burke

Creating an ECS cluster with Terraform

AWS introduced dynamic port mapping for ECS around 18 months ago. This is a very useful feature that allows you to run multiple containers that expose the same container port on the same host. It is only available with the AWS Application Load Balancer: the ECS agent registers each task's dynamically assigned host port, somewhere between 32768 and 65535, with the ALB's target group.
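The mechanics can be sketched before diving into the Terraform. In the task definition the host port is left at 0, and ECS picks a host port from the ephemeral range at launch (the assigned value below is purely illustrative):

```python
# Ephemeral range ECS draws dynamic host ports from.
EPHEMERAL_PORTS = range(32768, 65536)

# Task-definition port mapping: hostPort 0 requests dynamic assignment.
port_mapping = {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}

# At runtime ECS replaces the 0 with a real host port, e.g.:
assigned_host_port = 32901  # example value only
assert port_mapping["hostPort"] == 0
assert assigned_host_port in EPHEMERAL_PORTS
```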

So let’s automate creating an ECS cluster with Terraform.

You can also find the code on my GitHub.

First let’s set up our provider

provider.tf

We then have the file that creates a new VPC

Next we have our configuration for the Application Load Balancer

Next we set the AMI we use and update the launch configuration of the cluster

Next we add the configuration for our ECS cluster

ecs-cluster.tf

And our ecs-cluster.tpl file

Next we add our ECS service and task and Cloud Watch configurations.

ecs-nginx.tf

Next on the list is to add the IAM roles for the EC2 instances so they can communicate with the ECS service.

Next we have our security groups. Here we can see that the load balancer is open to the world on tcp/80 and tcp/443, and the ECS EC2 instances have ports 32768 to 65535 open from the load balancer. This is because when we set the host port to 0 in the task definition, AWS randomly assigns a port from this range on the instance.
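The instance-side rule can be sketched as a plain Python dict (not the actual Terraform; the security-group name is a placeholder) to make the intent explicit:

```python
# ECS instance security group: open only the ephemeral range,
# and only to traffic originating from the load balancer's SG.
ecs_instance_ingress = {
    "from_port": 32768,
    "to_port": 65535,
    "protocol": "tcp",
    "source_security_group": "alb-sg",  # placeholder name
}
```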

security.tf

Lastly we have our variables file. Here you can change your region and the instance type you decide to use.

So now it’s time to run terraform apply

We should get an output with the address of the load balancer

So now if we head over to our target group in AWS, we can see that we have 4 healthy running tasks, all listening on the same container port (tcp/80), balanced evenly across two separate instances.


Don’t forget to run terraform destroy if you are just learning this.

terraform: Configuring load-balancer to use dynamic port of ECS task/service in AWS

This is sort-of a general question for how dynamic port assignments are supposed to work, though my specific context is trying to figure-out if there is a natural way for a target-group to know the dynamically-assigned port of the service without having to do some manual piping to tell it.

The documentation for ECS dynamic port assignment ( https://aws.amazon.com/premiumsupport/knowledge-center/dynamic-port-mapping-ecs ) states that you just have to set the host port to 0 in the task definition, that no port needs to be specifically provided to the target group, and implies that it should just magically work. I've tried this before and couldn't get things to talk; I can't specifically remember where the breakdown was.

Now I'm trying to use Terraform to do it, and my issue is that, yes, I can set the task-definition to have a port of (0) but the port argument in the target-group resource is required to be present and non-zero. So, how is the other side of the dynamic port assignment supposed to work? I'm assuming that AWS solves the whole problem. Or, is it just that the dynamic port assignment just comes up with the port-assignment half but that automation is required to provide that port to the other side, and AWS doesn't have a mechanism to do this for you? It seems like an obvious question that, for some reason, no one has posted any documentation/discussion for. I could use some clarification.

I'm specifically using an ALB (application load balancer) but it may not matter.

  • amazon-web-services


Keep in mind that I arrived here with a question of my own (which you will soon see), so I may not be the best to adequately answer yours, but... The short version is "it just magically works".

When you Terraform the load balanced Service (which references a TaskDef), you have to attach an ALB. Attaching this ALB requires a container name and port:

In the TaskDef, your container has a port (eg 8080), but the hostport is set to 0 so that you get a random port assignment on the Instance. The ECS Agent will automatically handle the updating of the Targets in the Target Group for you.

However, when the Service with the 'load_balancer' clause is being instantiated, you technically don't know any of the "ephemeral ports" (eg the mapped high-ports) with which to create the ALB's Target Group framework. Those ports haven't been assigned yet, and won't exist until the first Task is created. You can't use 0 because it wants a container port, not an Instance port.

The solution is to use the literal container port (eg 8080) here. This technique works. The Service is instantiated and creates a Target Group with Instances referencing the unmapped port 8080. Later, the ECS Agent comes along and, as Tasks are created, backfills with other Targets using the working ephemeral ports.
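Under stated assumptions (the container name and target-group ARN are placeholders), the attachment described above has this shape, sketched as a Python dict mirroring the fields Terraform's aws_ecs_service load_balancer block takes:

```python
# The service's load_balancer attachment references the literal
# container port, never the ephemeral host port.
load_balancer = {
    "target_group_arn": (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012"
        ":targetgroup/app/0123456789abcdef"  # placeholder ARN
    ),
    "container_name": "app",   # placeholder
    "container_port": 8080,    # the container port, not an Instance port
}
```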

The only weird thing is that it creates one Target per Instance pointing to the unmapped 8080, which stays unhealthy for obvious reasons. There is no cleanup action. The other Targets are fine, so the unhealthy ones are ignored. They also do not factor into the Desired count for AutoScaling. But I'd love to know if there were a way to clean these up in automation.

  • I can manually unregister each failed unmapped Target, but this is a hassle.
  • Under the hood, what I've found this Terraform does is associate the ALB with both the Service and the AutoScale group. So I can also go to AutoScale and detach the ALB -- leaving the ALB on Service intact -- which kills all unhealthy unmapped Targets in one fell swoop.

Most of the time, though, I just leave them as-is...


  • Thanks for sharing. Accepting the presence of magic and one wasted instance for every deployment is going to be an irritation on principle. I'd especially be concerned about that one instance being a red herring that we lose time investigating, over the long term, every time we forget about it :) . –  Dustin Oprea Oct 8, 2022 at 21:10
  • I think you misunderstood. There is no "wasted instance". If you want 4 instances in your ECS cluster, you'll get and use 4 instances. No waste. The irritation is that each instance initializes a Target Group target whose healthcheck references the unmapped (non-ephemeral) Instance port which, although has no hope of ever getting healthy, is also rather benign. Which is why you can ignore them (unless you are OCD like me). The code above is good, not magic. The magic is in the semi-undocumented behavior under-the-hood. And if the explanation is good, please mark as answered. –  DarkSideGeek Oct 11, 2022 at 3:31
  • Got it. I voted it up, but I'll reserve the accepted answer for some theoretical answer, possibly not existing until a long time from now, that solves the issue in a cleaner way. Still, your method/observation is a useful contingency if the behavior is required in any form. –  Dustin Oprea Oct 12, 2022 at 7:50
