How to deploy a Docker container and do port mapping/forwarding using a Kubernetes YAML manifest

Please help me convert the Docker command below to a Kubernetes YAML file, along with port mapping/forwarding to the Docker container.

I tried the configuration below:

[Screenshot of the attempted deployment YAML]

But I'm not getting any result.

I need experts here to tell me if the above deployment file is incorrect, and if so, what changes I can make to get results. I have tried several other combinations and I'm not getting any results.

Note: the container gets deployed, but the port mapping/forwarding is not working. That is where I'm stuck and seeking help.

  • google-kubernetes-engine

  • 1 We are not here to do your job for you. –  Michael Hampton Jul 26, 2021 at 8:54
  • Hi Michael Hampton, please note that I don't have any intention to waste your time here, and I'm not asking you to do my job. I might have posted the question incorrectly, so I apologize for that. I searched a lot for a solution and didn't find one, so I'm asking for help on the forum. Please help me if you can. – anupjohari9211 Jul 26, 2021 at 10:39

If we specify a NodePort service, Kubernetes will allocate a port on every node. The chosen NodePort will be visible in the service spec after creation. Alternatively, one can specify a particular port to be used as NodePort in the spec while creating the service. If a specific NodePort is not specified, a port from a range configured on the Kubernetes cluster (default: 30000-32767) will be picked at random.

In Kubernetes you can define your ports using the ports field under the container configuration in your deployment. There you can define any number of ports you wish. The following example shows how to define two ports.
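A minimal sketch of a container spec exposing two ports (the name, image, and port numbers here are illustrative, not from the original answer):

```yaml
spec:
  containers:
    - name: my-app          # hypothetical container name
      image: my-app:1.0     # hypothetical image
      ports:
        - name: http
          containerPort: 8080
        - name: metrics
          containerPort: 9090
```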

To port-forward to localhost, run the following command:
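A sketch, assuming the deployment is named my-app and the container listens on 8080:

```bash
kubectl port-forward deployment/my-app 8080:8080
```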

For more information, refer to the links for Docker container port forwarding and node ports.


Explained: Kubernetes Service Ports

  • Nigel Poulton
  • August 17, 2020


While I was creating the brand new version of my  Getting Started with Kubernetes  course on Pluralsight (right click the link and open in new window!!!), I realised port mappings on  Kubernetes Service objects  can be confusing. So here goes with an explanation…

Kubernetes has three major Service types: ClusterIP, NodePort, and LoadBalancer…

ClusterIP

This is the default and most basic type. Its job is to provide a stable IP and port that passes traffic to Pods/containers on the same cluster. The stable IP and port are only accessible from other Pods running in the cluster.

We define it like this in a YAML file:

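The original listing is not preserved here; a minimal sketch consistent with the description below (Service port 8080 in front of app port 80; the name and app: magic selector are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: magic
spec:
  type: ClusterIP
  selector:
    app: magic
  ports:
    - port: 8080
      targetPort: 80
```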

The YAML defines two ports:

  • port is the stable port the Service exposes inside the cluster — other Pods in the cluster send traffic to this port (8080 in our example).
  • targetPort is the port that the application listens on in the Pods/containers.

The diagram below shows how other Pods send traffic to port 8080 and how the Service redirects that to port 80 in the Pods/containers.

[Diagram: other Pods send traffic to the Service on port 8080, which redirects it to port 80 on the Pods/containers]

If you don’t specify a  targetPort  value, it defaults to the value specified in  port .

NodePort

A NodePort Service exposes an app to the outside world via a port mapping on every node in the cluster. It looks like this in a YAML file:

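Again a sketch rather than the original listing, using the values quoted below (nodePort 31111, Service port 8080, app port 80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: magic
spec:
  type: NodePort
  selector:
    app: magic
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 31111
```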

This time the YAML defines three ports:

  • port and targetPort work the same as they do with ClusterIP Services.
  • nodePort is a TCP/UDP port between 30,000 and 32,767 that is mapped on every cluster node and exposes the Service outside of the cluster.

Basically, any client outside of the cluster can hit any cluster node on the nodePort value (31111 in our example) and reach the ClusterIP Service inside the cluster and eventually reach Pods/containers.

The diagram below shows how external clients send traffic to cluster nodes on port 31111, get routed to the ClusterIP Service on port 8080, and eventually to a Pod/container listening on port 80.

[Diagram: external clients hit any node on port 31111, get routed to the ClusterIP Service on port 8080, and reach a Pod/container listening on port 80]

It’s important to understand that NodePort Services build on top of ClusterIP Services. However, when you define a NodePort Service, Kubernetes takes care of creating any ClusterIPs and mapping ports etc.

LoadBalancer

Last but not least, Kubernetes offers a  LoadBalancer Service . This builds on top of  NodePort  and  ClusterIP  constructs and exposes a Service to the internet via one of your cloud’s native load balancers. 

It’s important to understand that a Kubernetes LoadBalancer Service will build an internet-facing load balancer on your cloud platform as well as all the constructs required to route traffic all the way back to Pods/containers running in your Kubernetes cluster.

They’re defined like this in a YAML file:

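Once more a sketch with the ports quoted below (load balancer port 8080, app port 80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: magic
spec:
  type: LoadBalancer
  selector:
    app: magic
  ports:
    - port: 8080
      targetPort: 80
```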

This time the YAML only defines two ports:

port  is the port the cloud load balancer will listen on (8080 in our example) and  targetPort  is the port the application is listening on in the Pods/containers. Kubernetes works with your cloud’s APIs to create a load balancer and everything needed to get traffic hitting the load balancer on port 8080 all the way back to the Pods/containers in your cluster listening on targetPort 80.

Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The traffic flow is usually like this.

[Diagram: client → cloud load balancer on port 8080 → NodePort on a cluster node → ClusterIP Service → Pod/container on port 80]

As you can see, LoadBalancer Services build on top of NodePorts which in turn build on top of ClusterIPs.

If you’re hungry for more, including examples and animated explanations, see my Getting Started with Kubernetes course on Pluralsight (right click and open in new window!!!). And feel free to reach out on the various socials where I’m more than happy to talk technology all day!

@nigelpoulton


Kubernetes 101 for developers: Names, ports, YAML files, and more


Once you've written your first container-based application and have it running in Docker or Podman, you're ready to move to the next level. That means multiple applications—microservices—running within a managed environment. Kubernetes, an open source container orchestration platform, is just such an environment, and by far the most popular one at that. Let's consider it from a developer's perspective.

Run multiple containers on Kubernetes

Running a container with Docker or Podman is great, but it's only a start. To implement an entire system, you need several services (or microservices, if you will). Kubernetes can orchestrate the entire system in a namespace, giving you both isolation from other namespaces and a shared environment within a namespace. You can just plop your applications into your Kubernetes cluster and let it run. Well ... it might not be that simple, but that's not a bad generalization.

Kubernetes port management

Imagine you're running multiple services in Kubernetes, and those services use network ports for communications—a Web API or a database, for example. Kubernetes will automatically allow you to use the same port number for multiple services. This is fantastic for developers. You don't need to remember that "this API uses port 8080, this other one uses 8082" and so on. Instead, you simply assign a port and let Kubernetes worry about it. Eight applications that all use port 443? No problem.

Kubernetes names

Once you have those services running, Kubernetes helps make life easier for you by allowing you to reference any other service by the name you assign to it. You don't need to know IP addresses or some long, convoluted name. You named the service getcustomer ? Well, then ... you reference it as getcustomer , no matter where it's running within Kubernetes. Even after that service scales up to several running instances, you don't need to concern yourself with that. Just call the service by its name; Kubernetes will do the load balancing.
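As a sketch of what that looks like in practice (the service name getcustomer comes from the paragraph above; the port and path are hypothetical):

```bash
# From any pod in the same namespace, cluster DNS resolves the service name
curl http://getcustomer:8080/customers/42
```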

Kubernetes Secrets

When dealing with sensitive information like passwords and connect strings in your code, Kubernetes once again makes life easy for the developer. When you establish a Secret in Kubernetes, you can assign an environment variable name to it. Then, in your code, you simply use the value of that environment variable as the value of the Secret.
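A minimal sketch of the pattern, assuming a hypothetical Secret named db-credentials exposed through an env var DB_PASSWORD:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  password: s3cr3t        # example value only
```

And in the container spec of your Deployment:

```yaml
    env:
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password
```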

So easy. So nice.

Kubernetes rolling updates

Kubernetes, out of the box, supports what's known as a rolling update . This means that you can start a new version of your application while the older version is running. Once the new version is ready, Kubernetes will automagically transfer traffic over to your new version.

As a developer, you might think this is more of a big deal for the operations side of things, but here's the thing: Rolling updates are great when you're developing code and desk testing. While you're working at your local PC and want to try the new version of your application, you simply move it to Kubernetes and let the rolling update handle it. No need to stop this, start that, change routing, etc., etc. As a developer, this makes life much easier. It seems like a trivial thing, but you'll get spoiled very quickly.

Kubernetes and dependencies

We're all too familiar with the old "It works on my PC" problem: You carefully craft an artisanal service that works perfectly on your workstation, and after you have gently deployed it to a server, it crashes, because one of the dependencies on the server is the wrong version. But you can't change that or another application might break. Cue frustration and complexity.

Or, you could build a container that holds all the dependencies you need and deploy it to Kubernetes. Done. No conflict, no frustration, reduced complexity. For a developer, this is wonderful.

Kubernetes YAML files

Yes, yes, we developers are all about source code. But what about using source code to deploy our applications? Is that a thing?

With Kubernetes, it is. Alongside your Java (or C# or Node.js or Python or...) code, you'll be creating one or more YAML files to define the objects and environment your application needs in Kubernetes.

But why should a developer care? Isn't this the realm of operations?

Well, it's good for a developer because it makes your development and desk testing completely repeatable and consistent. And when you're finished, you have code to turn over to the operations folks, who can tweak it, improve it, and get it ready for production. That same code is then available to you for any future work. It's a cycle, and it's helpful for everyone, and it has a name: DevOps .

Reading is fine, but you can actually use Kubernetes for free by taking advantage of our free offering, the Developer Sandbox for Red Hat OpenShift. (Red Hat OpenShift is a Kubernetes distribution focused on developer experience and application security that's platform agnostic.) We even have an activity that you can do to get started with Kubernetes.


Port, TargetPort, and NodePort in Kubernetes

There are several different port declaration fields in Kubernetes. This is a quick overview of each type, and what each means in your Kubernetes YAML.

Pod port list

This array, defined in pod.spec.containers[].ports , provides a list of ports that get exposed by the container. You don’t really need to specify this list—even if it’s empty, as long as your containers are listening on the port, they’ll still be available for network access. This just provides some extra information to Kubernetes.
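A sketch of that list on a hypothetical pod (the name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80   # informational; traffic reaches the pod even if omitted
```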

Service ports list

The service’s service.spec.ports list configures which requests to a service port get forwarded to which ports on its pods. A successful request can be made from outside the cluster to the node’s IP address and service’s nodePort , forwarded to the service’s port , and received on the targetPort by the pod.

nodePort

This setting makes the service visible outside the Kubernetes cluster by the node’s IP address and the port number declared in this property. The service also has to be of type NodePort (if this field isn’t specified, Kubernetes will allocate a node port automatically).

port

Expose the service on the specified port internally within the cluster. That is, the service becomes visible on this port, and will send requests made to this port to the pods selected by the service.

targetPort

This is the port on the pod that the request gets sent to. Your application needs to be listening for network requests on this port for the service to work.
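Putting the three together, a sketch of a NodePort Service (the names and numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - nodePort: 30036     # reachable on every node's IP
      port: 8080          # cluster-internal service port
      targetPort: 80      # port the pod listens on
```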


Port Forwarding

Skaffold has built-in support for forwarding ports from exposed Kubernetes resources on your cluster to your local machine when running in dev, debug, deploy, or run modes.

Automatic Port Forwarding

Skaffold supports automatic port forwarding the following classes of resources:

  • user: explicit port-forwards defined in the skaffold.yaml (called user-defined port forwards)
  • services: ports exposed on services deployed by Skaffold.
  • debug: debugging ports as enabled by skaffold debug for Skaffold-built images.
  • pods: all containerPorts on deployed pods for Skaffold-built images.

Skaffold enables certain classes of forwards by default depending on the Skaffold command used. These defaults can be overridden with the --port-forward flag, and port-forwarding can be disabled with --port-forward=off .

User-defined port forwarding

Users can define additional resources to port forward in the skaffold config, to enable port forwarding for

  • additional resource types supported by kubectl port-forward e.g. Deployment or ReplicaSet .
  • additional pods running containers which run images not built by Skaffold.

For example:
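A sketch of such a user-defined forward in skaffold.yaml, assuming a Deployment named myDep in namespace mynamespace:

```yaml
portForward:
  - resourceType: deployment
    resourceName: myDep
    namespace: mynamespace
    port: 8080
    localPort: 9000
```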

For this example, Skaffold will attempt to forward port 8080 to localhost:9000 . If port 9000 is unavailable, Skaffold will forward to a random open port.

Note about forwarding System Ports

Skaffold will request matching local ports only when the remote port is > 1023 . So a service on port 8080 would still map to port 8080 (if available), but a service on port 80 will be mapped to some port ≥ 1024 .

User-defined port-forwards in the skaffold.yaml are unaffected and can bind to system ports.


Skaffold will run kubectl port-forward on each user-defined resource, in addition to the automatic port forwarding described above. Acceptable resource types include Service, Pod, and any controller resource type that has a pod spec: ReplicaSet, ReplicationController, Deployment, StatefulSet, DaemonSet, Job, CronJob. kubectl port-forward will select one pod created by that resource to forward to.

For example, forwarding a deployment that creates 3 replicas could look like this:

[Diagram: kubectl port-forward tunnels to one of the deployment's three pod replicas]

If you want the port forward to be available from other hosts, and not only from the local host, you can bind the port forward to the address 0.0.0.0:
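A sketch, assuming a Service named myservice; the address field is what opts the forward out of the localhost-only default:

```yaml
portForward:
  - resourceType: service
    resourceName: myservice
    port: 8080
    localPort: 9000
    address: 0.0.0.0
```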

Using Kubernetes Port, TargetPort, and NodePort


(This article is part of our Kubernetes Guide . Use the right-hand menu to navigate.)

Port configurations for Kubernetes Services

In Kubernetes there are several different port configurations for Kubernetes services :

  • Port exposes the Kubernetes service on the specified port within the cluster. Other pods within the cluster can communicate with this service on the specified port.
  • TargetPort is the port the service forwards requests to, and the port your pod will be listening on. The application in the container must listen on this port as well.
  • NodePort exposes the service externally to the cluster using the target nodes' IP addresses and the NodePort. If the nodePort field is not specified, Kubernetes allocates one automatically from the configured range (30000-32767 by default).

Let’s look at how to use these ports in your Kubernetes manifest.


Using port, targetPort, and nodePort
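The original manifest is not preserved here; a reconstruction consistent with the description that follows (service port 8080, nodePort 30036, pods labeled app: hello-world listening on 80):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 8080
      targetPort: 80
      nodePort: 30036
```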

From the above example, the hello-world service will be exposed internally to cluster applications on port 8080 and externally on each node's IP address at port 30036. It will also forward requests to pods with the label "app: hello-world" on port 80.

The configuration of the above settings can be verified with the command:

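For example (assuming the service name from the sketch above):

```bash
kubectl describe service hello-world
```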

To test and demonstrate the above configuration, we can create a pod running an ubuntu container to execute some curl commands to verify connectivity.
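A sketch of such a throwaway pod:

```bash
kubectl run curl-test --image=ubuntu --rm -it -- bash
# inside the pod:
apt-get update && apt-get install -y curl
```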

From this pod run the following commands:

Curl the service on the ‘port’ defined in the Kubernetes manifest for the service.
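Assuming the service name and port from the manifest sketched above:

```bash
curl hello-world:8080
```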

This proves that curling the Kubernetes service on port 8080 forwards the request to our nginx pod listening on port 80.

To test the NodePort on your machine (not in the ubuntu pod) you will need to find the IP address of the node that your pod is running on.

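For example:

```bash
kubectl get pods -o wide    # shows the node each pod runs on
kubectl get nodes -o wide   # shows each node's IP address
```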

Now, you can curl the Node IP Address and the NodePort and should reach the nginx container running behind the Kubernetes service.
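Substituting the node IP you found above:

```bash
curl <node-ip>:30036
```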


About the author

Dan Merron is a seasoned DevOps Consulting Professional with experience in private, public and financial sectors working with popular container, automation and scripting tools. You can visit his website or find him on Github or LinkedIn .


Introduction to YAML: Creating a Kubernetes deployment


Ready for more about YAML? This webinar helps you get started and shows how YAML is used in defining basic Kubernetes deployments.

In previous articles, we’ve been talking about how to use Kubernetes to spin up resources. So far, we’ve been working exclusively with the CLI, but there’s an easier and more useful way to do it: creating configuration files using kubernetes YAML. In this article, we’ll look at how Kubernetes YAML works and use it to define first a Kubernetes Pod, and then a Kubernetes Deployment.

Kubernetes YAML Basics

It’s difficult to escape YAML if you’re doing anything related to many software fields — particularly Kubernetes, SDN, and OpenStack. YAML, which stands for Yet Another Markup Language, or YAML Ain’t Markup Language (depending on who you ask), is a human-readable text-based format for specifying configuration-type information. For example, in this article, we’ll pick apart the Kubernetes YAML definitions for creating first a Pod, and then a Deployment.

Defining a Kubernetes Manifest

When defining a Kubernetes manifest , YAML gives you a number of advantages, including:

Convenience : You’ll no longer have to add all of your parameters to the command line

Maintenance : YAML files can be added to source control, such as a Github repository so you can track changes

Flexibility : You’ll be able to create much more complex structures using YAML than you can on the command line

YAML is a superset of JSON, which means that any valid JSON file is also a valid YAML file. So on the one hand, if you know JSON and you’re only ever going to write your own YAML (as opposed to reading other people’s) you’re all set. On the other hand, that’s not very likely, unfortunately. Even if you’re only trying to find examples on the web, they’re most likely in (non-JSON) YAML, so we might as well get used to it. Still, there may be situations where the JSON format is more convenient, so it’s good to know that it’s available to you.

Fortunately, YAML is relatively easy to learn. There are only two types of structures you need to know about in YAML:

  • maps
  • lists

That’s it. You might have maps of lists and lists of maps, and so on, but if you’ve got those two structures down, you’re all set. That’s not to say there aren’t more complex things you can do , but in general, this is all you need to get started.

Let’s start by looking at YAML maps. Maps let you associate name-value pairs, which of course is convenient when you’re trying to set up configuration information. For example, you might have a config file that starts like this:
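For example (reconstructed from the description that follows):

```yaml
---
apiVersion: v1
kind: Pod
```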

The first line is a separator, and is optional unless you’re trying to define multiple structures in a single file. From there, as you can see, we have two values, v1 and Pod, mapped to two keys, apiVersion, and kind.

This kind of thing is pretty simple, of course, and you can think of it in terms of its JSON equivalent:
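A sketch of the JSON equivalent:

```json
{
  "apiVersion": "v1",
  "kind": "Pod"
}
```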

Notice that in our YAML version, the quotation marks are optional; the processor can tell that you’re looking at a string based on the formatting.

You can also specify more complicated structures by creating a key that maps to another map, rather than a string, as in:
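For instance (the rss-site name and app: web label are reused from the Deployment example later in this article):

```yaml
metadata:
  name: rss-site
  labels:
    app: web
```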

In this case, we have a key, metadata, that has as its value a map with 2 more keys, name and labels. The labels key itself has a map as its value. You can nest these as far as you want to.

The YAML processor knows how all of these pieces relate to each other because we’ve indented the lines. In this example I’ve used 2 spaces for readability, but the number of spaces doesn’t matter — as long as it’s at least 1, and as long as you’re CONSISTENT. For example, name and labels are at the same indentation level, so the processor knows they’re both part of the same map; it knows that app is a value for labels because it’s indented further.

Quick note: NEVER use tabs in a YAML file.

So if we were to translate this to JSON, it would look like this:
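Sketched as JSON:

```json
{
  "metadata": {
    "name": "rss-site",
    "labels": {
      "app": "web"
    }
  }
}
```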

Now let’s look at lists.

YAML lists are literally a sequence of objects. For example:
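A sketch (the values themselves are arbitrary):

```yaml
args:
  - sleep
  - "1000"
  - message
  - "Bring back Firefly!"
```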

As you can see here, you can have virtually any number of items in a list, which is defined as items that start with a dash (-) indented from the parent. So in JSON, this would be:
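Continuing the sketch:

```json
{
  "args": ["sleep", "1000", "message", "Bring back Firefly!"]
}
```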

And of course, members of the list can also be maps:
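A sketch matching the description below (the container names, images, and ports are illustrative and reappear in the Pod example later in the article):

```yaml
containers:
  - name: front-end
    image: nginx
    ports:
      - containerPort: 80
  - name: rss-reader
    image: nickchase/rss-php-nginx:v1
    ports:
      - containerPort: 88
```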

So as you can see here, we have a list of container “objects”, each of which consists of a name, an image, and a list of ports (It might also include network information). Each list item under ports is itself a map that lists the containerPort and its value.

For completeness, let’s quickly look at the JSON equivalent:
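The same structure sketched in JSON:

```json
{
  "containers": [
    {
      "name": "front-end",
      "image": "nginx",
      "ports": [{ "containerPort": 80 }]
    },
    {
      "name": "rss-reader",
      "image": "nickchase/rss-php-nginx:v1",
      "ports": [{ "containerPort": 88 }]
    }
  ]
}
```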

As you can see, we’re starting to get pretty complex, and we haven’t even gotten into anything particularly complicated! No wonder YAML is replacing JSON so fast.

So let’s review. We have:

  • maps, which are groups of name-value pairs
  • lists, which are sequences of individual items
  • maps of maps
  • maps of lists
  • lists of lists
  • lists of maps

Basically, whatever structure you want to put together, you can do it with those two structures.


OK, so now that we’ve got the basics out of the way, let’s look at putting this to use. We’re going to first create a Pod, then a Deployment, using YAML.

If you haven’t set up your cluster and kubectl, go ahead and check out this article series on setting up Kubernetes on your server before you go on. It’s OK, we’ll wait….

Back already? Great! Let’s start with a Pod.

Creating the Kubernetes Pod YAML deployment file

In our previous example, we described a simple Pod using YAML which we can save locally. Open that file:
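Here is that Pod definition again as a sketch, assembled from the fragments above (the names and images are the article's illustrative ones):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
```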

Taking it apart one piece at a time, we start with the API version; here it’s just v1. (When we get to deployments, we’ll have to specify a different version because Deployments don’t exist in v1.)

Next, we’re specifying that we want to create a Pod to hold your application or cloud service; we might specify instead a Deployment, Job, Service, and so on, depending on what we’re trying to achieve.

Next, we specify the metadata. Here we’re specifying the name of the Pod, as well as the label we’ll use to identify the pod to Kubernetes.

Finally, we’ll configure the actual objects that make up the pod. The spec property includes any containers, memory requirements, storage volumes , network or other details that Kubernetes needs to know about, as well as properties such as whether to restart the container if it fails. You can find a complete list of Kubernetes Pod properties in the Kubernetes API specification , but let’s take a closer look at a typical container definition:

In this case, we have a simple, fairly minimal definition: a name (front-end), the image on which it’s based (nginx), and one port on which the container will listen internally (80). Of these, only the name is really required, but in general, if you want it to do anything useful, you’ll need more information.

You can also specify more complex properties, such as a command to run when the container starts, arguments it should use, a working directory, or whether to pull a new copy of the image every time it’s instantiated. You can also specify even deeper information, such as the location of the container’s exit log. Here are the properties you can set for a container, which you can find in the Kubernetes YAML Reference:

  • volumeMounts
  • livenessProbe
  • readinessProbe
  • terminationMessagePath
  • imagePullPolicy
  • securityContext

Now let’s go ahead and actually create the pod.

Creating the pod using the YAML file

The first step, of course, is to go ahead and create a text file locally. Call it pod.yaml and add the following text, just as we specified earlier:

Save the file. Now we need to deploy the manifests.

Kubernetes YAML Deployment Example

Tell Kubernetes to rollout the YAML file’s manifests using the CLI:
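A sketch, assuming the file is named pod.yaml as above:

```bash
kubectl create -f pod.yaml
```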

As you can see, K8s references the name we gave the Pod. You can see that if you ask for a list of the pods in the default namespace:
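For example:

```bash
kubectl get pods
```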

If you check early enough, while K8s is still deploying, you can see that the workload is still being created. After a few seconds, you should see the pods running:

From here, you can test out the Pod (just as we did in the previous article ), but ultimately we want to create a Kubernetes Deployment example , so let’s go ahead and delete it so there aren’t any name conflicts:
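Assuming the Pod name rss-site from the sketch above:

```bash
kubectl delete pod rss-site
```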

Troubleshooting pod creation

Sometimes, of course, things don’t go as you expect. Maybe you’ve got a networking issue, or you’ve mistyped something in your YAML file. You might see an error like this:
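A hedged illustration of what kubectl get pods might show in that situation (the columns are kubectl's; the values are hypothetical):

```
NAME       READY   STATUS         RESTARTS   AGE
rss-site   1/2     ErrImagePull   0          9s
```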

In this case, we can see that one of our containers started up just fine, but there was a problem with the other. To track down the problem, we can ask Kubernetes for more information on the Pod:
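For example:

```bash
kubectl describe pod rss-site
```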

As you can see, there’s a lot of information here, but we’re most interested in the Events — specifically, once the warnings and errors start showing up. From here I was able to quickly see that I’d forgotten to add the :v1 tag to my image, so by default it was looking for the :latest tag, which didn’t exist.

To fix the problem, I first deleted the Pod, then fixed the YAML file and started again. Instead, I could have fixed the repo so that Kubernetes could find what it was looking for, and it would have continued on as though nothing had happened.

Now that we’ve successfully gotten a Pod running, let’s look at doing the same for a Deployment.

Finally, we’re down to creating the actual Kubernetes Deployment. Before we do that, though, it’s worth understanding what it is we’re actually doing.

Kubernetes, remember, manages container-based applications and services. In the case of a K8s Deployment, you’re creating a set of resources to be managed. For example, where we previously created a single instance of the Pod, we might create a Kubernetes Deployment YAML example to tell Kubernetes to manage a set of replicas of that Pod — literally, a ReplicaSet — to make sure that a certain number of them are always available.

Kubernetes Deployment Use Cases

It’s important to understand why you’d want to use a Kubernetes Deployment in the first place. Some of these use cases include:

Ensuring availability of a workload : Deployments specify how many copies of a particular workload should always be running, so if a workload dies, Kubernetes will automatically restart it, ensuring that the workload is always available.

Scaling workloads : Kubernetes makes it easy to change how many replicas a Deployment should maintain, making it straightforward to increase or decrease the number of copies running at any given time. It even offers autoscaling!

Managing the state of an application : Deployments can be paused, edited, and rolled back, so you can make changes with a minimum of fuss.

Easily exposing a workload outside the cluster : It might not sound like much, but being able to create a service that connects your application with the outside world with a single command is more than a little convenient.

Now let’s look at actually building Deployments.

Writing a DeploymentSpec

So we might start our Kubernetes Deployment manifest definition YAML like this:
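A sketch of that opening fragment. The original article targeted an older API version; current clusters use apps/v1, which also requires the selector shown in the full example further below:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
```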

Here we’re specifying the apiVersion and that we want a Deployment. Next we specify the name. We can also specify any other metadata we want, but let’s keep things simple for now.

Finally, we get into the spec. In the Pod spec, we gave information about what actually went into the Pod; we’ll do the same thing here with the Deployment. We’ll start, in this case, by saying that whatever Pods we deploy, we always want to have 2 replicas. You can set this number however you like, of course, and you can also set properties such as the selector that defines the Pods affected by this Deployment, or the minimum number of seconds a pod must be up without any errors before it’s considered “ready”. You can find a full list of the Deployment specification properties in the Kubernetes API reference.

OK, so now that we know we want 2 replicas, we need to answer the question: “Replicas of what?” They’re defined by templates:
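A sketch of the full Deployment with its template (the labels and containers reuse the Pod example above):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rss-site
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: front-end
          image: nginx
          ports:
            - containerPort: 80
        - name: rss-reader
          image: nickchase/rss-php-nginx:v1
          ports:
            - containerPort: 88
```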

Look familiar? It should; this Kubernetes template is virtually identical to the Pod definition in the previous section, and that’s by design. Templates are simply definitions of objects to be replicated — objects that might, in other circumstances, be created on their own.

The difference here is that we’re specifying how we know what objects are part of this deployment; notice that the Deployment and the template both specify labels of app: web , and that the selector specifies that as the matchLabels .

Now let’s go ahead and rollout the deployment. Add the YAML to a file called deployment.yaml and point Kubernetes at it:
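For example:

```bash
kubectl create -f deployment.yaml
```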

To see how it’s doing, we can check on the deployments list:
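For example:

```bash
kubectl get deployments
```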

As you can see, Kubernetes has started both replicas, but only one is available. You can check the event log by describing the Deployment, as before:
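For example:

```bash
kubectl describe deployment rss-site
```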

As you can see here, there’s no problem, it just hasn’t finished scaling up yet. Another few seconds, and we can see that both Pods are running:

Updating a deployment

The simplest ways of updating the properties of a deployment involve editing the YAML used to create it. To do that, you’ll want to use apply rather than create when creating the Deployment in the first place, as in:
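For example:

```bash
kubectl apply -f deployment.yaml
```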

You can then make changes to the YAML file itself and re-run kubectl apply to, well, apply them.

The other option is to use the kubectl edit command to edit a specific object, as in:

kubectl edit deployment.v1.apps/rss-site

You’ll then see an editor that enables you to edit the actual YAML that defines the Deployment. When you save your changes, they’ll be applied to the live object. For example, you can change the number of replicas, and when you save the definition, Kubernetes will ensure that the proper number of replicas is running.

Other ways to scale a deployment

You can also scale a deployment directly using kubectl , as in:

kubectl scale deployment.v1.apps/rss-site --replicas=5

You can even tell Kubernetes to scale the Deployment automatically. For example, you can ensure that your pods are never using more than 60% of their available CPU capacity:

kubectl autoscale deployment.v1.apps/rss-site --min=3 --max=20 --cpu-percent=60

Kubernetes provides you with a number of other alternatives for automatically managing Deployments, which we will cover in future updates, so watch this space!

OK, so let’s review. We’ve basically covered these topics:

YAML is a human-readable text-based format that lets you easily specify configuration-type information by using a combination of maps of name-value pairs and lists of items (and nested versions of each).

YAML is the most convenient way to work with Kubernetes objects, and in this article we looked at creating Pods and Deployments.

You can get more information on running (or should-be-running) objects by asking Kubernetes to describe them.

That’s our basic YAML tutorial, with a focus on Deployments. We’re going to be tackling a great deal of Kubernetes-related content and documentation in the coming months, so if there’s something specific you want to hear about, let us know in the comments, or tweet us at @MirantisIT .

Check out Part 2 of this series to learn more about YAML for Kubernetes services, ingress, and more—and watch a recording of author Nick Chase in a webinar on Kubernetes Deployments using YAML.

Kubernetes Deployment YAML: Learn by Example

What Is Kubernetes Deployment YAML?

YAML (which stands for YAML Ain’t Markup Language) is a language used to provide configuration for software, and is the main type of input for Kubernetes configurations. It is human-readable and can be authored in any text editor. 

A Kubernetes user or administrator specifies data in a YAML file, typically to define a Kubernetes object. The YAML configuration is called a “manifest”, and when it is “applied” to a Kubernetes cluster, Kubernetes creates an object based on the configuration.

A Kubernetes Deployment YAML specifies the configuration for a Deployment object—this is a Kubernetes object that can create and update a set of identical pods. Each pod runs specific containers, which are defined in the spec.template field of the YAML configuration. 

The Deployment object not only creates the pods but also ensures the correct number of pods is always running in the cluster, handles scalability, and takes care of updates to the pods on an ongoing basis. All these activities can be configured through fields in the Deployment YAML. 

Below we’ll show several examples that will walk you through the most common options in a Kubernetes Deployment YAML manifest.

Related content: Read our guide to Kubernetes deployment strategies

Kubernetes Deployment YAML Examples

With Multiple Replicas

The following YAML configuration creates a Deployment object that runs 5 replicas of an NGINX container.
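A sketch matching the points below (the name and image tag are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
```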

Important points in this configuration:

  • spec.replicas —specifies how many pods to run
  • strategy.type —specifies which deployment strategy should be used. In this case and in the following examples we select RollingUpdate, which means new versions are rolled out gradually to pods to avoid downtime.
  • spec.template.spec.containers —specifies which container image to run in each of the pods and ports to expose.

With Resource Limits

The following YAML configuration creates a Deployment object similar to the above, but with resource limits.
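A sketch showing just the container portion with the limits described below:

```yaml
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          resources:
            limits:
              memory: "200Mi"
            requests:
              cpu: "100m"
              memory: "200Mi"
```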

The spec.containers.resources field specifies:

  • limits —each container should not be allowed to consume more than 200Mi of memory.
  • requests —each container requires 100m of CPU resources and 200Mi of memory on the node

With Health Checks

The following YAML configuration creates a Deployment object that performs a health check on containers by checking for an HTTP response on the root directory.
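A sketch of the container portion with the probe described below (the delay and period values are illustrative):

```yaml
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
```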

The template.spec.containers.livenessProbe  field defines what the kubelet should check to ensure that the pod is alive: 

  • httpGet specifies that the kubelet should try a HTTP request on the root of the web server on port 80.
  • periodSeconds specifies how often the kubelet should perform a liveness probe.
  • initialDelaySeconds specifies how long the kubelet should wait after the pod starts, before performing the first probe.

You can also define readiness probes and startup probes—learn more in the Kubernetes documentation .

With Persistent Volumes

The following YAML configuration creates a Deployment object that creates containers that request a PersistentVolume (PV) using a PersistentVolumeClaim (PVC), and mount it on a path within the container.
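A sketch of the template portion, assuming a pre-existing PVC named my-pvc and an illustrative mount path:

```yaml
  template:
    spec:
      volumes:
        - name: my-volume
          persistentVolumeClaim:
            claimName: my-pvc
      containers:
        - name: nginx
          image: nginx:1.21
          volumeMounts:
            - name: my-volume
              mountPath: /usr/share/nginx/html
```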

  • template.spec.volumes —defines a name for the volume, which is referenced below in containers.volumeMounts
  • template.spec.volumes.persistentVolumeClaim —references a PVC. For this to work, you must have some PVs in your cluster and create a PVC object that matches those PVs. You can then reference the existing PVC object here and the pod will attempt to bind to a matching PV.

Learn more about PVs and PVCs in the documentation .

With Affinity Settings

The following YAML configuration creates a Deployment object with affinity criteria that can encourage a pod to schedule on certain types of nodes.
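A sketch of the affinity portion matching the points below (the disktype label key is illustrative):

```yaml
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype
                    operator: In
                    values:
                      - ssd
```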

The spec.affinity field defines criteria that can affect whether the pod schedules on a certain node or not:

  • spec.affinity.nodeAffinity —specifies desired criteria of a node which will cause the pod to be scheduled on it
  • spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution —specifies that affinity is relevant when scheduling a new pod, but is ignored when the pod is already running.
  • nodeSelectorTerms —specifies, in this case, that the node needs to have a disk of type SSD for the pod to be scheduled. 

There are many other options, including preferred node affinity, and pod affinity, which means the pod is scheduled based on the criteria of other pods running on the same node. Learn more in the documentation .

Alternatives to the Deployment Object

Two common alternatives to the Kubernetes Deployment object are:

  • DaemonSet —deploys a pod on all cluster nodes or a certain subset of nodes
  • StatefulSet —used for stateful applications. Similar to a Deployment, but each pod is unique and has a persistent identifier.

Let’s see examples of YAML configurations for these two objects. The code is taken from the Kubernetes documentation .

Kubernetes DaemonSet Example YAML

A DaemonSet runs copies of a pod on all cluster nodes, or a selection of nodes within a cluster. Whenever a node is added to the cluster, the DaemonSet controller checks if it is eligible, and if so, runs the pod on it. When a node is removed from the cluster, the pods are moved to garbage collection. Deleting a DaemonSet also results in removal of the pods it created.

The following YAML file shows how to run a DaemonSet that runs fluentd-elasticsearch for logging purposes. 
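A condensed sketch along the lines of the upstream example (the image version and toleration key may differ in the current documentation):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
      containers:
        - name: fluentd-elasticsearch
          image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```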

The important fields of this configuration are:

  • selector —specifies the pods managed by this DaemonSet. In this case it matches pods carrying the label name: fluentd-elasticsearch, which the pod template applies.
  • spec.tolerations —tolerations are applied to pods, and allow the pods to schedule on nodes with matching taints. In this case we allow the pod to run on a node even if it is a control plane node.
  • containers, volumes —specifies what pod and storage volumes the DaemonSet should run on each node.

Kubernetes Statefulset Example YAML

A StatefulSet manages a group of pods while maintaining a sticky identity for each pod, with a persistent identifier that remains even if the pod is shut down and restarted. Pods also have PersistentVolumes that can store data that outlives the lifecycle of each individual pod.

The following example shows a YAML configuration for a headless Service that controls the network domain, and a StatefulSet that runs 3 instances of an NGINX web server.
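A condensed sketch along the lines of the upstream example (the image is illustrative; my-storage-class is referenced in the notes below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None        # headless Service
  selector:
    app: nginx
  ports:
    - port: 80
      name: web
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: my-storage-class
        resources:
          requests:
            storage: 1Gi
```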

  • metadata.name —must be a valid DNS subdomain name.
  • .spec.selector.matchLabels and .spec.template.metadata.labels —both of these must match and are referenced by the headless Service to route requests to the application.
  • spec.replicas —specifies that the StatefulSet should run three replicas of the container, each with a unique persistent identifier.
  • spec.template.spec.containers —specifies what NGINX image to run and how it should mount the PersistentVolumes.
  • volumeClaimTemplates —provides persistent storage using the my-storage-class  storage class. In a real environment, your cluster will have one or more storage classes defined by the cluster administrator, which provide different types of persistent storage.

Kubernetes Deployment with Codefresh

The Codefresh Software Delivery Platform, powered by Argo, lets you answer many important questions within your organization, whether you’re a developer or a product manager. For example:

  • What features are deployed right now in any of your environments?
  • What features are waiting in Staging?
  • What features were deployed last Thursday?
  • Where is feature #53.6 in our environment chain?

What’s great is that you can answer all of these questions by viewing one single dashboard. Our applications dashboard shows:

  • Services affected by each deployment
  • The current state of Kubernetes components
  • Deployment history and log of who deployed what and when and the pull request or Jira ticket associated with each deployment


kubectl port-forward examples in Kubernetes

We can use kubectl to set up a proxy that will forward all traffic from a local port we specify to a port associated with the Pod we determine. This can be performed using the kubectl port-forward command. kubectl port-forward makes a specific Kubernetes API request. That means the system running it needs access to the API server, and any traffic will get tunneled over a single HTTP connection. We use this command to access container content by forwarding one (or more) local ports to a pod. This command is very useful, mostly when you want to troubleshoot a misbehaving pod.

kubectl port-forward syntax

The general syntax to forward ports using kubectl would be as shown below but we will explore all possible options:
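The general form (from kubectl's own usage text):

```bash
kubectl port-forward TYPE/NAME [options] [LOCAL_PORT:]REMOTE_PORT [...[LOCAL_PORT_N:]REMOTE_PORT_N]
```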

Sample commands to perform port forwarding on a pod, a deployment, a replicaset, and a service are sketched below (the resource names are hypothetical):
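```bash
kubectl port-forward pod/mypod 8080:80
kubectl port-forward deployment/mydeployment 8080:80
kubectl port-forward replicaset/myreplicaset 8080:80
kubectl port-forward service/myservice 8080:80
```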

Note that you will not get your command prompt back after executing these commands, because kubectl keeps running in the foreground to keep this particular tunnel we've requested alive. If we cancel or quit the kubectl command, typically by pressing Ctrl + C, then port forwarding will immediately end.

Perform kubectl port-forward in background

To perform kubectl port-forward in background you can append the command with & as shown below
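For example (reusing the hypothetical pod name from above):

```bash
kubectl port-forward pod/mypod 8080:80 &
```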

This will print the PID of the process and then send the command to background and you will be able to use the terminal. Once your work is done, you can go ahead and kill the PID of the background process to close the tunnel.

Perform port-forwarding on Pods

Here I will create a simple pod with nginx server running on port 80:

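The original listing is not preserved; a sketch consistent with the rest of the tutorial (a bare nginx pod listening on port 80):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```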

Use kubectl create command to create the pod using the provided YAML file.

List the available pods:

To get more details, such as the IP address of the pod and the worker node on which it is running, we will combine the above command with -o wide:
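Collected as commands (the file name is assumed):

```bash
kubectl create -f nginx-pod.yaml   # create the pod
kubectl get pods                   # list the available pods
kubectl get pods -o wide           # adds pod IP and worker node
```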

Access nginx server without port forwarding

Now we know that our nginx container is running on worker-2 with an IP address of 10.44.0.1. But is this IP reachable from the controller node? We are using the weave-net CNI, and with this network plugin the internal network of the worker nodes is by default not accessible from the controller node, so we can't connect to the container directly.

[Screenshot: a curl to 10.44.0.1 from the controller node fails to connect]

But the same IP is reachable from the worker-2 node so we can access our nginx server container directly from worker-2 without performing any port forwarding:

[Screenshot: a curl to 10.44.0.1 from worker-2 returns the nginx welcome page]

But our requirement is to access this nginx web server from the controller node, so we can use the kubectl port-forward command.

Access nginx server with port forwarding

Method-1: Listen on port 8080 locally, forwarding to port 80 in the pod

In this method we will forward traffic from port 8080 on localhost (on the controller) to port 80 of the nginx container on worker-2. This forwards any and all traffic that gets created on your local machine at TCP port 8080 to TCP port 80 on the Pod nginx:
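```bash
# pod name "nginx" from the manifest sketched above
kubectl port-forward pod/nginx 8080:80
```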

Now we attempt to access our nginx server using a different terminal:
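```bash
curl localhost:8080
```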


Once done you can either kill the PID of the port-forward command or press ctrl+c on the terminal where kubectl port-forward is running.

Method-2: Listen on port 8080 on all addresses, forwarding to 80 in the pod

In this method we will perform port forwarding to all addresses on the controller node from port 8080 to port 80 in the pod. So you can use any IP address from the controller node with port 8080 to access the nginx container.
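A sketch; --address 0.0.0.0 binds the forward on all interfaces of the controller node:

```bash
kubectl port-forward --address 0.0.0.0 pod/nginx 8080:80
```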

Since we are trying to access a new port, so we will need to enable this port in the firewall if we want to access this on external network:
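Assuming a firewalld-based distribution:

```bash
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
```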

Now we can access our nginx container on external network (here 192.168.0.150 is the IP of my controller node):
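```bash
curl 192.168.0.150:8080
```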


Method-3: Listen on a random port locally, forwarding to 80 in the pod

If you don't specify a local port then kubectl will randomly select a port for forwarding. For example, here I have not specified any local port so kubectl has randomly selected port 40159 for the forwarding.
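A sketch; omitting the local port makes kubectl pick one and print it:

```bash
kubectl port-forward pod/nginx :80
# Forwarding from 127.0.0.1:40159 -> 80
```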

So we can use this port to access the nginx container:
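```bash
curl localhost:40159
```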

Perform port-forwarding on Kubernetes Deployment Pods

Kubernetes services can be used to expose pods to an external network. There are different Service Types available, but to perform port forwarding we will use NodePort.

Here I have created a deployment using the following YAML file:

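The original listing is not preserved; a sketch of such a deployment (the replica count is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```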

and we will also create a service to expose the deployment pods:

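And a sketch of the NodePort service, using the node port quoted below:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deploy-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 31957
```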

Let us create the deployment and the service:
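```bash
kubectl create -f nginx-deploy.yaml       # file names assumed
kubectl create -f nginx-deploy-svc.yaml
```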

List the available pods and services:
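```bash
kubectl get pods -o wide
kubectl get svc
```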

As you can see under the available services, our deployment is accessible over port 31957, so we can use the IP address of the respective worker node (worker-2, where our pod is running) along with port 31957 to access the nginx container:

[Screenshot: a curl to the worker-2 IP address on port 31957 returns the nginx welcome page]

But if you want to perform port forwarding for this deployment, use service/<service-name> rather than a pod name, since the generated pod names are not static. You can then access the nginx deployment pods through the forwarded address on the controller node:
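```bash
kubectl port-forward service/nginx-deploy-svc 8080:80
curl localhost:8080
```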

In this tutorial we learned how to access a container from an external network by performing port forwarding using kubectl. The port-forward utility is often used to create a tunnel to a pod inside the cluster. It creates a TCP stream to a specific port on your pod, making it accessible from your local host (or from more hosts if you want). This method is mostly used by developers for easy troubleshooting of a pod's containers. It also has drawbacks: kubectl port-forward opens your cluster up to attacks from any script running locally, so you must use it cautiously.


Deepak Prasad

He is the founder of GoLinuxCloud and brings over a decade of expertise in Linux, Python, Go, Laravel, DevOps, Kubernetes, Git, Shell scripting, OpenShift, AWS, Networking, and Security. With extensive experience, he excels in various domains, from development to DevOps, Networking, and Security, ensuring robust and efficient solutions for diverse projects. You can reach out to him on his LinkedIn profile or join his Facebook page.


Kubernetes Helm Charts: The Basics and a Quick Tutorial

Guy Menachem

What Are Kubernetes Helm Charts? 

Kubernetes Helm charts are the ‘packages’ of the Kubernetes world, similar to apt, yum, or homebrew for operating systems, or Maven, Gradle, and npm for programming languages. They are bundles of pre-configured Kubernetes resources.

Helm uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. Charts are created as files laid out in a particular directory tree. They can be packaged into versioned archives to be deployed.

Helm charts allow for the simple creation, versioning, sharing, and publishing of applications that are designed to run on Kubernetes clusters. The use of Helm and Helm charts simplifies the process of defining, installing, and upgrading Kubernetes applications.

What Is a Helm Chart Repository? 

A Helm chart repository is a location where packaged charts can be stored and shared. Think of it as a library of applications ready to be deployed on Kubernetes. The repository indexes these packages so that they can be searched using Helm tooling. Technically, Helm repositories are HTTP servers that store packaged charts for distribution.

By default, Helm uses a public chart repository named stable , but you can configure it to use your own chart repository. This is especially useful for teams developing their own applications, as it allows them to store their charts in a central place and share them with others.

The Helm Deployment Process

Deploying applications using Helm involves pulling chart definitions from a repository, customizing them via configuration files, and then deploying the resulting application stack to a Kubernetes cluster: 

  • Creating or selecting a Helm chart: The chart includes all the necessary components, like deployments, services, and ingress rules, that describe how the application should be deployed and managed. 
  • Defining a values.yaml file: You configure a YAML deployment using the values.yaml file. This file overrides default configuration values within the chart to tailor the application deployment to specific requirements, such as setting environment variables, configuring resource limits, or defining service ports.
  • Installing the chart: After configuring the deployment, the helm install command is used to deploy the application to the Kubernetes cluster. This command takes the Helm chart, combines it with the configuration specified in the values.yaml file, and applies it to the cluster, resulting in the creation or update of Kubernetes resources. 
  • Post-deployment: Helm provides commands like helm list to view active releases, helm upgrade to apply updates, and helm rollback to revert to a previous version of the deployment. 
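As a quick sketch of that lifecycle, assuming a chart directory ./my-chart and a release named my-release:

    helm install my-release ./my-chart -f values.yaml
    helm list
    helm upgrade my-release ./my-chart -f values.yaml
    helm rollback my-release 1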

Structure of a Helm Chart 

Chart Directory

The chart directory is the top-level directory, with the same name as the chart, which contains all the files and directories that make up the chart. This directory is the most important one as it’s where all the chart resources are stored.

Within this directory, you’ll find a collection of files and directories that define the chart. These include the Chart.yaml file, a templates directory, and a values.yaml file.
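A typical layout looks like this (file names under templates/ are illustrative):

    my-chart/
      Chart.yaml        # metadata about the chart
      values.yaml       # default configuration values
      charts/           # dependency charts, if any
      templates/        # templated Kubernetes manifests
        deployment.yaml
        service.yaml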

Templates Directory

The templates directory is where Kubernetes resources are defined as templates. These templates are standard Kubernetes manifest files, but with the addition of templating directives from the Go templating language.

This directory contains files that will be transformed into Kubernetes manifest files when the chart is deployed. The templates directory is flexible and can contain Kubernetes manifest files for any type of resource that is supported.
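For instance, a minimal templated Service might look like this (a sketch; the value paths assume a values.yaml like the one shown later):

    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Release.Name }}-service
    spec:
      type: {{ .Values.service.type }}
      selector:
        app: {{ .Release.Name }}
      ports:
        - port: {{ .Values.service.port }}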

values.yaml

The values.yaml file is a simple, easy-to-read file that is used to customize the behavior of the chart. This file is written in YAML format and contains default configuration values for the chart.

These values can be overridden by the user during installation or upgrade, allowing for extensive customization of the chart’s behavior. In essence, the values.yaml file enables users to tailor deployments to their specific needs.

Here is an example of a values.yaml file:
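For instance (the field names follow common chart conventions and match the description below):

    replicaCount: 1

    image:
      repository: nginx
      tag: latest
      pullPolicy: IfNotPresent

    service:
      type: ClusterIP
      port: 80

    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi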

This file sets default values for a hypothetical application. It specifies the number of replicas (1), the image to be used (in this case, nginx), the service type and port, and resource limits and requests for the application to be deployed.

Chart.yaml is a mandatory file that holds meta-information about the chart. It includes details like the chart's name, its version, and its description. Here is an example of a Chart.yaml file:
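    apiVersion: v2
    name: my-application
    description: A Helm chart for deploying my application on Kubernetes
    type: application
    version: 0.1.0
    appVersion: "1.0.0"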

This file defines the chart’s name (my-application), its description, the type of chart (application or library), the version of the chart, and the version of the application that the chart is installing.

How to Create and Use Helm Charts in Kubernetes  

Install Helm

Before we start with the installation process, it’s worth noting that Helm is compatible with macOS, Windows, and Linux. For the purpose of this tutorial, we’ll be using a Linux system. To install Helm, open your terminal and run the following commands:
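One common route is Helm's official installer script:

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh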

Once you’ve run these commands, you can verify your Helm installation by running helm version . If Helm is successfully installed, you should see the version of your Helm client printed in the terminal.


Initialize a Helm Chart Repository

To initialize a Helm chart repository, we first need to add the official Helm charts repository, stable . We can do this by running the following command in our terminal:
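    helm repo add stable https://charts.helm.sh/stable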

Once we’ve added the stable repository, we can update our list of charts by running helm repo update . This command will fetch the latest versions of all the charts from the stable repository.


After we’ve updated our repository, we can search for charts using the helm search repo command. For instance, if we want to search for all charts related to MySQL, we can run helm search repo mysql .


Install an Example Chart

Now, let’s install an example chart to see Helm in action. For this tutorial, we’ll be using the WordPress chart from the Bitnami public repository. WordPress is a popular open-source content management system, and you can use Helm to automatically deploy it in your Kubernetes cluster.

To install the WordPress chart, run the following command in your terminal:
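A sketch of the commands (the release name my-wordpress matches the status command below; the Bitnami repository must be added first):

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm install my-wordpress bitnami/wordpress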

This command will create a new deployment of WordPress in your Kubernetes cluster. Helm will download the WordPress chart from the repository, generate a set of Kubernetes resources based on the values defined in the chart, and apply these resources to your cluster.

Once the command has completed, you can check the status of your deployment by running helm status my-wordpress . This command will print detailed information about your deployment, including the status of each individual Kubernetes resource that was created by the chart.

Related content: Read our guide to Kubernetes helm tutorial (coming soon)

Visualizing and Managing Helm Charts With Komodor

Komodor’s platform streamlines the day-to-day operations and troubleshooting process of your Kubernetes apps. Specifically when it comes to Helm Charts, Komodor’s platform provides you with a visual dashboard to view the installed Helm charts, see their revision history and corresponding k8s resources. It also allows you to perform simple actions such as rolling back to a revision or upgrading to a newer version.

At its core, the platform gives you a real-time, high-level view of your cluster’s health, configurations, and resource utilization. This abstraction is particularly useful for routine tasks like rolling out updates, scaling applications, and managing resources. You can easily identify bottlenecks, underutilized nodes, or configuration drift, and then make informed decisions without needing to sift through YAML files or execute a dozen kubectl commands.

Beyond just observation, Komodor integrates with your existing CI/CD pipelines and configuration management tools to make routine tasks more seamless. The platform offers a streamlined way to enact changes, such as scaling deployments or updating configurations, directly through its interface. It can even auto-detect and integrate with CD tools like Argo or Flux to support a GitOps approach! Komodor’s “app-centric” approach to Kubernetes management is a game-changer for daily operational tasks, making it easier for both seasoned DevOps engineers and those new to Kubernetes to keep their clusters running smoothly, and their applications maintaining high-availability.

To learn more about how Komodor can make it easier to empower you and your teams to manage & troubleshoot K8s, sign up for our free trial .


Our integration with Kubernetes queries your Kubernetes clusters directly according to your definition. By using our Kubernetes integration, you can ingest live data directly from your K8s clusters into Port in a transparent, efficient and precise manner, thus making sure only the information you need appears in the software catalog, and remains up to date.

Our integration with Kubernetes provides real-time event processing, this allows for an accurate real-time representation of your K8s cluster inside Port.


Port's Kubernetes exporter is open source; you can view the source code here

💡 Kubernetes exporter common use cases ​

Our Kubernetes exporter makes it easy to fill the software catalog with live data directly from your clusters, for example:

  • Map all the resources in your clusters, including namespaces , pods , replica sets , cluster nodes , deployments and other cluster objects;
  • Get real-time metadata from your cluster such as replica counts , deployment health , node health and more;
  • Use relations to create a complete, easily digestible map of your K8s cluster inside Port;
  • Map your Kubernetes resources from common CRDs such as ArgoCD, Istio and more;

How it works ​

Port's Kubernetes exporter allows you to bring all the data supported by the K8s API to show running services, environments and more. The open source Kubernetes exporter allows you to perform extract, transform, load (ETL) on data from K8s into the desired software catalog data model.

The exporter is deployed using a Helm chart installed on the cluster. Once it is set up, it continues to sync changes, meaning that all changes, deletions or additions are accurately and automatically reflected in Port.

The Helm chart uses a YAML configuration stored in the integration within your portal. This configuration describes the ETL process responsible for loading data into the developer portal. The approach strikes a middle ground between an overly opinionated K8s visualization that might not work for everyone and a too-broad approach that could introduce unneeded complexity into the developer portal.

Here is an example snippet from the integration configuration which demonstrates the ETL process for getting ReplicaSet data from the cluster and into the software catalog:
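A sketch that follows the configuration structure described below (the blueprint name and the chosen properties are illustrative):

    resources:
      - kind: apps/v1/replicasets
        selector:
          query: 'true'
        port:
          entity:
            mappings:
              - identifier: .metadata.name
                title: .metadata.name
                blueprint: '"workload"'
                properties:
                  creationTimestamp: .metadata.creationTimestamp
                  replicas: .spec.replicas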


The exporter makes use of the JQ JSON processor to select, modify, concatenate, transform and perform other operations on existing fields and values from the Kubernetes objects.

Exporter JQ configuration ​

The exporter configuration is how you specify the exact resources you want to query from your K8s cluster, and also how you specify which entities and which properties you want to fill with data from the cluster.

Here is an example configuration block:
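A skeleton of the block (the values are illustrative):

    resources:                    # root key: one entry per resource kind
      - kind: v1/pods             # group/version/resource of the K8s object
        selector:
          query: 'true'           # JQ boolean expression used to filter objects
        port:
          entity:
            mappings:             # array of Port entity mappings
              - identifier: .metadata.name
                blueprint: '"pod"'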

Exporter configuration structure ​

The root key of the configuration YAML is the resources key:
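    resources:
      # - kind: ...
      #   selector: ...
      #   port: ...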

The kind key is a specifier for an object from the K8s API or CRD following the group/version/resource (G/V/R) format:
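    - kind: apps/v1/replicasets   # group/version/resource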

A reference of available Kubernetes Resources to list, watch, and export can be found here

The selector and the query keys let you filter exactly which objects from the specified kind will be ingested to the software catalog

Some example use cases:

To sync all objects from the specified kind : do not specify a selector and query key;

To sync all objects from the specified kind that are not related to the internal Kubernetes system, use:
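    # JQ filter (reconstruction): skip objects in kube-* namespaces
    query: .metadata.namespace | startswith("kube") | not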

To sync all objects from the specified kind that start with production , use:
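    # JQ filter (reconstruction): keep only names starting with "production"
    query: .metadata.name | startswith("production")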

The port, entity, and mappings keys open the section used to map Kubernetes object fields to Port entities; the mappings key is an array in which each object matches the structure of an entity.

Prerequisites ​

  • Port's Kubernetes exporter is installed using Helm , so Helm must be installed to use the exporter's chart. Please refer to Helm's documentation for installation instructions;
  • You will need your Port credentials to install the Kubernetes exporter.

To get your Port API credentials go to your Port application , click on the ... button in the top right corner, and select Credentials . Here you can view and copy your CLIENT_ID and CLIENT_SECRET :


The exporter helm chart can be found here

Installation ​

Choose one of the following installation methods:

Add Port's Helm repo by using the following command:
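    helm repo add port-labs https://port-labs.github.io/helm-charts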

If you already added Port's Helm repo earlier, run helm repo update to retrieve the latest versions of the charts. You can then run helm search repo port-labs to see the charts.

Install the exporter service on your Kubernetes cluster by running the following command:
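A sketch of the install command; treat the namespace and the value keys as assumptions and check the chart's own documentation for the authoritative flags:

    helm install my-port-k8s-exporter port-labs/port-k8s-exporter \
      --create-namespace --namespace port-k8s-exporter \
      --set secret.secrets.portClientId=YOUR_PORT_CLIENT_ID \
      --set secret.secrets.portClientSecret=YOUR_PORT_CLIENT_SECRET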

Install the my-port-k8s-exporter ArgoCD Application by creating the following my-port-k8s-exporter.yaml manifest:
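A sketch of such a manifest using the placeholders mentioned below; the multi-source layout follows ArgoCD's Application format, but the Helm parameter keys are assumptions:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: my-port-k8s-exporter
      namespace: argocd
    spec:
      project: default
      destination:
        server: https://kubernetes.default.svc
        namespace: port-k8s-exporter
      sources:
        - repoURL: https://port-labs.github.io/helm-charts/
          chart: port-k8s-exporter
          targetRevision: LATEST_HELM_RELEASE
          helm:
            parameters:
              - name: secret.secrets.portClientId
                value: YOUR_PORT_CLIENT_ID
              - name: secret.secrets.portClientSecret
                value: YOUR_PORT_CLIENT_SECRET
        - repoURL: YOUR_GIT_REPO_URL
          targetRevision: main
          ref: values
      syncPolicy:
        automated: {}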

Remember to replace the placeholders LATEST_HELM_RELEASE, YOUR_PORT_CLIENT_ID, YOUR_PORT_CLIENT_SECRET, and YOUR_GIT_REPO_URL.

You can find the latest version of the port-k8s-exporter chart on our Releases page.

The ArgoCD documentation on Applications with multiple sources can be found here.

Apply your application manifest with kubectl :
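    kubectl apply -f my-port-k8s-exporter.yaml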

By default, the exporter will try to initiate pre-defined blueprints and resource mapping.

Done! The exporter will shortly begin creating and updating objects from your Kubernetes cluster as Port entities.

Updating exporter configuration ​

To update the exporter resource mapping, open the data sources page in Port and click on your Kubernetes integration. Then edit the exporter configuration and click on the Save & Resync button.

Refer to the examples page for practical configurations and their corresponding blueprint definitions.

Refer to the advanced page for advanced use cases and outputs.



User-friendly WebUI for LLMs (Formerly Ollama WebUI)

open-webui/open-webui

Open WebUI (Formerly Ollama WebUI) 👋


User-friendly WebUI for LLMs, supported LLM runners include Ollama and OpenAI-compatible APIs. For more information, be sure to check out our Open WebUI Documentation .

🖥️ Intuitive Interface : Our chat interface takes inspiration from ChatGPT, ensuring a user-friendly experience.

📱 Responsive Design : Enjoy a seamless experience on both desktop and mobile devices.

⚡ Swift Responsiveness : Enjoy fast and responsive performance.

🚀 Effortless Setup : Install seamlessly using Docker or Kubernetes (kubectl, kustomize or helm) for a hassle-free experience.

💻 Code Syntax Highlighting : Enjoy enhanced code readability with our syntax highlighting feature.

✒️🔢 Full Markdown and LaTeX Support : Elevate your LLM experience with comprehensive Markdown and LaTeX capabilities for enriched interaction.

📚 Local RAG Integration : Dive into the future of chat interactions with the groundbreaking Retrieval Augmented Generation (RAG) support. This feature seamlessly integrates document interactions into your chat experience. You can load documents directly into the chat or add files to your document library, effortlessly accessing them using # command in the prompt. In its alpha phase, occasional issues may arise as we actively refine and enhance this feature to ensure optimal performance and reliability.

🌐 Web Browsing Capability : Seamlessly integrate websites into your chat experience using the # command followed by the URL. This feature allows you to incorporate web content directly into your conversations, enhancing the richness and depth of your interactions.

📜 Prompt Preset Support : Instantly access preset prompts using the / command in the chat input. Load predefined conversation starters effortlessly and expedite your interactions. Effortlessly import prompts through Open WebUI Community integration.

👍👎 RLHF Annotation : Empower your messages by rating them with thumbs up and thumbs down, facilitating the creation of datasets for Reinforcement Learning from Human Feedback (RLHF). Utilize your messages to train or fine-tune models, all while ensuring the confidentiality of locally saved data.

🏷️ Conversation Tagging : Effortlessly categorize and locate specific chats for quick reference and streamlined data collection.

📥🗑️ Download/Delete Models : Easily download or remove models directly from the web UI.

⬆️ GGUF File Model Creation : Effortlessly create Ollama models by uploading GGUF files directly from the web UI. Streamlined process with options to upload from your machine or download GGUF files from Hugging Face.

🤖 Multiple Model Support : Seamlessly switch between different chat models for diverse interactions.

🔄 Multi-Modal Support : Seamlessly engage with models that support multimodal interactions, including images (e.g., LLava).

🧩 Modelfile Builder : Easily create Ollama modelfiles via the web UI. Create and add characters/agents, customize chat elements, and import modelfiles effortlessly through Open WebUI Community integration.

⚙️ Many Models Conversations : Effortlessly engage with various models simultaneously, harnessing their unique strengths for optimal responses. Enhance your experience by leveraging a diverse set of models in parallel.

💬 Collaborative Chat : Harness the collective intelligence of multiple models by seamlessly orchestrating group conversations. Use the @ command to specify the model, enabling dynamic and diverse dialogues within your chat interface. Immerse yourself in the collective intelligence woven into your chat environment.

🤝 OpenAI API Integration : Effortlessly integrate OpenAI-compatible API for versatile conversations alongside Ollama models. Customize the API Base URL to link with LMStudio, Mistral, OpenRouter, and more .

🔄 Regeneration History Access : Easily revisit and explore your entire regeneration history.

📜 Chat History : Effortlessly access and manage your conversation history.

📤📥 Import/Export Chat History : Seamlessly move your chat data in and out of the platform.

🗣️ Voice Input Support : Engage with your model through voice interactions; enjoy the convenience of talking to your model directly. Additionally, explore the option for sending voice input automatically after 3 seconds of silence for a streamlined experience.

⚙️ Fine-Tuned Control with Advanced Parameters : Gain a deeper level of control by adjusting parameters such as temperature and defining your system prompts to tailor the conversation to your specific preferences and needs.

🔗 External Ollama Server Connection : Seamlessly link to an external Ollama server hosted on a different address by configuring the environment variable.

🔐 Role-Based Access Control (RBAC) : Ensure secure access with restricted permissions; only authorized individuals can access your Ollama, and exclusive model creation/pulling rights are reserved for administrators.

🔒 Backend Reverse Proxy Support : Bolster security through direct communication between Open WebUI backend and Ollama. This key feature eliminates the need to expose Ollama over LAN. Requests made to the '/ollama/api' route from the web UI are seamlessly redirected to Ollama from the backend, enhancing overall system security.

🌟 Continuous Updates : We are committed to improving Open WebUI with regular updates and new features.

🔗 Also Check Out Open WebUI Community!

Don't forget to explore our sibling project, Open WebUI Community , where you can discover, download, and explore customized Modelfiles. Open WebUI Community offers a wide range of exciting possibilities for enhancing your chat interactions with Open WebUI! 🚀

How to Install 🚀

Please note that for certain Docker environments, additional configurations might be needed. If you encounter any connection issues, our detailed guide on Open WebUI Documentation is ready to assist you.

Quick Start with Docker 🐳

When using Docker to install Open WebUI, make sure to include the -v open-webui:/app/backend/data in your Docker command. This step is crucial as it ensures your database is properly mounted and prevents any loss of data.

If Ollama is on your computer , use this command:
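At the time of writing, the documented invocation looks like this (check the Open WebUI docs for the current version):

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main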

If Ollama is on a Different Server , use this command:

To connect to Ollama on another server, change the OLLAMA_API_BASE_URL to the server's URL:
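For example, with an illustrative server URL in place of example.com:

    docker run -d -p 3000:8080 -e OLLAMA_API_BASE_URL=https://example.com/api \
      -v open-webui:/app/backend/data --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main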

After installation, you can access Open WebUI at http://localhost:3000 . Enjoy! 😄

Open WebUI: Server Connection Error

If you're experiencing connection issues, it’s often due to the WebUI docker container not being able to reach the Ollama server at 127.0.0.1:11434 (host.docker.internal:11434) inside the container . Use the --network=host flag in your docker command to resolve this. Note that the port changes from 3000 to 8080, resulting in the link: http://localhost:8080 .

Example Docker Command :
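A reconstruction consistent with the note above (host networking, Ollama on 127.0.0.1:11434):

    docker run -d --network=host -v open-webui:/app/backend/data \
      -e OLLAMA_API_BASE_URL=http://127.0.0.1:11434/api \
      --name open-webui --restart always ghcr.io/open-webui/open-webui:main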

Other Installation Methods

We offer various installation alternatives, including non-Docker methods, Docker Compose, Kustomize, and Helm. Visit our Open WebUI Documentation or join our Discord community for comprehensive guidance.

Troubleshooting

Encountering connection issues? Our Open WebUI Documentation has got you covered. For further assistance and to join our vibrant community, visit the Open WebUI Discord .

Keeping Your Docker Installation Up-to-Date

In case you want to update your local Docker installation to the latest version, you can do it with Watchtower :
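    docker run --rm --volume /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --run-once open-webui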

In the last part of the command, replace open-webui with your container name if it is different.

Moving from Ollama WebUI to Open WebUI

Check our Migration Guide available in our Open WebUI Documentation .

What's Next? 🌟

Discover upcoming features on our roadmap in the Open WebUI Documentation .

Supporters ✨

A big shoutout to our amazing supporters who are helping to make this project possible! 🙏

Platinum Sponsors 🤍

  • We're looking for Sponsors!

Acknowledgments

Special thanks to Prof. Lawrence Kim and Prof. Nick Vincent for their invaluable support and guidance in shaping this project into a research endeavor. Grateful for your mentorship throughout the journey! 🙌

This project is licensed under the MIT License - see the LICENSE file for details. 📄

If you have any questions, suggestions, or need assistance, please open an issue or join our Open WebUI Discord community to connect with us! 🤝

Created by Timothy J. Baek - Let's make Open Web UI even more amazing together! 💪



Define a workload type that exposes server workloads outside the cluster

Tanzu Application Platform (commonly known as TAP) allows you to create new workload types. You start by adding an Ingress resource to the server-template ClusterConfigTemplate when this new type of workload is created.

Delete the Ingress resource previously created.

Install the yq CLI on your local machine.

Save the existing server-template in a local file by running:
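    kubectl get ClusterConfigTemplate server-template -o yaml > secure-server-template.yaml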

Extract the .spec.ytt field from this file and create another file by running:
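    yq eval '.spec.ytt' secure-server-template.yaml > spec-ytt.yaml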

In the next step, you add the Ingress resource snippet to spec-ytt.yaml . This step provides a sample Ingress resource snippet. Make the following edits before adding the Ingress resource snippet to spec-ytt.yaml :

  • Replace INGRESS-DOMAIN with the Ingress domain you set during the installation.
  • Set the annotation cert-manager.io/cluster-issuer to the shared.ingress_issuer value configured during installation or leave it as tap-ingress-selfsigned to use the default one.
  • This configuration is based on your workload service running on port 8080 .

The Ingress resource snippet looks like this:
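As a plain-Kubernetes sketch consistent with the notes above (the workload name and TLS secret name are illustrative; TAP's actual template wraps this in ytt directives):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-workload
      annotations:
        cert-manager.io/cluster-issuer: tap-ingress-selfsigned
    spec:
      tls:
        - hosts:
            - my-workload.INGRESS-DOMAIN
          secretName: my-workload-cert
      rules:
        - host: my-workload.INGRESS-DOMAIN
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-workload
                    port:
                      number: 8080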

Add the Ingress resource snippet to the spec-ytt.yaml file and save. Look for the Service resource, and insert the snippet before the last #@ end . For example:

Add the snippet to the .spec.ytt property in secure-server-template.yaml :

Change the name of the ClusterConfigTemplate to secure-server-template by running:

Create the new ClusterConfigTemplate by running:
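Assuming the edited template is still in secure-server-template.yaml:

    kubectl apply -f secure-server-template.yaml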

Verify the new ClusterConfigTemplate is in the cluster by running:

Expected output:

Add the new workload type to the tap-values.yaml . The new workload type is named secure-server and the cluster_config_template_name is secure-server-template .

Update your Tanzu Application Platform installation as follows:

Give privileges to the deliverable role to manage Ingress resources:

Update the workload type to secure-server :

Note: If you created the Ingress resource manually in the previous section, delete it before this step.

After the process finishes, verify that the resources Deployment, Service, and Ingress appear by running:

Access your secure-server workload with HTTPS by running:

Containerize a Java application

Prerequisites

  • You have installed the latest version of Docker Desktop . Docker adds new features regularly and some parts of this guide may work only with the latest version of Docker Desktop.
  • You have a Git client . The examples in this section use a command-line based Git client, but you can use any client.

This section walks you through containerizing and running a Java application.

Get the sample applications

Clone the sample application that you'll be using to your local development machine. Run the following command in a terminal to clone the repository.
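Assuming the standard Spring PetClinic repository used by this guide:

    git clone https://github.com/spring-projects/spring-petclinic.git
    cd spring-petclinic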

The sample application is a Spring Boot application built using Maven. For more details, see readme.md in the repository.

Initialize Docker assets

Now that you have an application, you can use docker init to create the necessary Docker assets to containerize your application. Inside the spring-petclinic directory, run the docker init command in a terminal. docker init provides some default configuration, but you'll need to answer a few questions about your application. Use the answers in the following example in order to follow along with this guide.

The sample application already contains Docker assets. You'll be prompted to overwrite the existing Docker assets. To continue with this guide, select y to overwrite them.

In the previous example, notice the WARNING: docker-compose.yaml already exists, so docker init overwrites that file rather than creating a new compose.yaml file. This prevents having multiple Compose files in the directory. Both names are supported, but Compose prefers the canonical compose.yaml.

You should now have the following three new files in your spring-petclinic directory.

  • Dockerfile
  • .dockerignore
  • docker-compose.yaml

You can open the files in a code or text editor, then read the comments to learn more about the instructions, or visit the links in the previous list.

Run the application

Inside the spring-petclinic directory, run the following command in a terminal.
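    docker compose up --build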

The first time you build and run the app, Docker downloads dependencies and builds the app. It may take several minutes depending on your network connection.

Open a browser and view the application at http://localhost:8080 . You should see a simple app for a pet clinic.

In the terminal, press ctrl + c to stop the application.

Run the application in the background

You can run the application detached from the terminal by adding the -d option. Inside the spring-petclinic directory, run the following command in a terminal.
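    docker compose up --build -d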

In the terminal, run the following command to stop the application.
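    docker compose down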

For more information about Compose commands, see the Compose CLI reference .

In this section, you learned how you can containerize and run a Java application using Docker.

Related information:

  • docker init reference

In the next section, you'll learn how you can develop your application using Docker containers.

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

A Pod (as in a pod of whales or pea pod) is a group of one or more containers , with shared storage and network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.

As well as application containers, a Pod can contain init containers that run during Pod startup. You can also inject ephemeral containers for debugging a running Pod.

What is a Pod?

The shared context of a Pod is a set of Linux namespaces, cgroups, and potentially other facets of isolation - the same things that isolate a container . Within a Pod's context, the individual applications may have further sub-isolations applied.

A Pod is similar to a set of containers with shared namespaces and shared filesystem volumes.

Pods in a Kubernetes cluster are used in two main ways:

  • Pods that run a single container . The "one-container-per-Pod" model is the most common Kubernetes use case; in this case, you can think of a Pod as a wrapper around a single container; Kubernetes manages Pods rather than managing the containers directly.

Pods that run multiple containers that need to work together . A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit.

Grouping multiple co-located and co-managed containers in a single Pod is a relatively advanced use case. You should use this pattern only in specific instances in which your containers are tightly coupled.

You don't need to run multiple containers to provide replication (for resilience or capacity); if you need multiple replicas, see Workload management .

The following is an example of a Pod which consists of a container running the image nginx:1.14.2 .
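This is the standard example from the Kubernetes documentation:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80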

To create the Pod shown above, run the following command:
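    kubectl apply -f https://k8s.io/examples/pods/simple-pod.yaml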

Pods are generally not created directly; instead, they are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.

Workload resources for managing pods

Usually you don't need to create Pods directly, even singleton Pods. Instead, create them using workload resources such as Deployment or Job . If your Pods need to track state, consider the StatefulSet resource.

Each Pod is meant to run a single instance of a given application. If you want to scale your application horizontally (to provide more overall resources by running more instances), you should use multiple Pods, one for each instance. In Kubernetes, this is typically referred to as replication . Replicated Pods are usually created and managed as a group by a workload resource and its controller .

See Pods and controllers for more information on how Kubernetes uses workload resources, and their controllers, to implement application scaling and auto-healing.

Pods natively provide two kinds of shared resources for their constituent containers: networking and storage .

Working with Pods

You'll rarely create individual Pods directly in Kubernetes—even singleton Pods. This is because Pods are designed as relatively ephemeral, disposable entities. When a Pod gets created (directly by you, or indirectly by a controller ), the new Pod is scheduled to run on a Node in your cluster. The Pod remains on that node until the Pod finishes execution, the Pod object is deleted, the Pod is evicted for lack of resources, or the node fails.

The name of a Pod must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostname. For best compatibility, the name should follow the more restrictive rules for a DNS label .

You should set the .spec.os.name field to either windows or linux to indicate the OS on which you want the pod to run. These two are the only operating systems supported for now by Kubernetes. In future, this list may be expanded.

In Kubernetes v1.29, the value you set for this field has no effect on scheduling of the pods. Setting the .spec.os.name helps to identify the pod OS authoritatively and is used for validation. The kubelet refuses to run a Pod where you have specified a Pod OS, if this isn't the same as the operating system for the node where that kubelet is running. The Pod security standards also use this field to avoid enforcing policies that aren't relevant to that operating system.

Pods and controllers

You can use workload resources to create and manage multiple Pods for you. A controller for the resource handles replication and rollout and automatic healing in case of Pod failure. For example, if a Node fails, a controller notices that Pods on that Node have stopped working and creates a replacement Pod. The scheduler places the replacement Pod onto a healthy Node.

Here are some examples of workload resources that manage one or more Pods:

  • Deployment
  • StatefulSet
  • DaemonSet

Pod templates

Controllers for workload resources create Pods from a pod template and manage those Pods on your behalf.

PodTemplates are specifications for creating Pods, and are included in workload resources such as Deployments , Jobs , and DaemonSets .

Each controller for a workload resource uses the PodTemplate inside the workload object to make actual Pods. The PodTemplate is part of the desired state of whatever workload resource you used to run your app.

The sample below is a manifest for a simple Job with a template that starts one container. The container in that Pod prints a message then pauses.
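This is the standard example from the Kubernetes documentation:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: hello
    spec:
      template:
        # This is the pod template
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            command: ['sh', '-c', 'echo "Hello, Kubernetes!" && sleep 3600']
          restartPolicy: OnFailure
        # The pod template ends here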

Modifying the pod template or switching to a new pod template has no direct effect on the Pods that already exist. If you change the pod template for a workload resource, that resource needs to create replacement Pods that use the updated template.

For example, the StatefulSet controller ensures that the running Pods match the current pod template for each StatefulSet object. If you edit the StatefulSet to change its pod template, the StatefulSet starts to create new Pods based on the updated template. Eventually, all of the old Pods are replaced with new Pods, and the update is complete.

Each workload resource implements its own rules for handling changes to the Pod template. If you want to read more about StatefulSet specifically, read Update strategy in the StatefulSet Basics tutorial.

On Nodes, the kubelet does not directly observe or manage any of the details around pod templates and updates; those details are abstracted away. That abstraction and separation of concerns simplifies system semantics, and makes it feasible to extend the cluster's behavior without changing existing code.

Pod update and replacement

As mentioned in the previous section, when the Pod template for a workload resource is changed, the controller creates new Pods based on the updated template instead of updating or patching the existing Pods.

Kubernetes doesn't prevent you from managing Pods directly. It is possible to update some fields of a running Pod, in place. However, Pod update operations like patch and replace have some limitations:

Most of the metadata about a Pod is immutable. For example, you cannot change the namespace , name , uid , or creationTimestamp fields; the generation field is unique. It only accepts updates that increment the field's current value.

If the metadata.deletionTimestamp is set, no new entry can be added to the metadata.finalizers list.

Pod updates may not change fields other than spec.containers[*].image , spec.initContainers[*].image , spec.activeDeadlineSeconds or spec.tolerations . For spec.tolerations , you can only add new entries.

When updating the spec.activeDeadlineSeconds field, two types of updates are allowed:

  • setting the unassigned field to a positive number;
  • updating the field from a positive number to a smaller, non-negative number.

Resource sharing and communication

Pods enable data sharing and communication among their constituent containers.

Storage in Pods

A Pod can specify a set of shared storage volumes . All containers in the Pod can access the shared volumes, allowing those containers to share data. Volumes also allow persistent data in a Pod to survive in case one of the containers within needs to be restarted. See Storage for more information on how Kubernetes implements shared storage and makes it available to Pods.

Pod networking

Each Pod is assigned a unique IP address for each address family. Every container in a Pod shares the network namespace, including the IP address and network ports. Inside a Pod (and only then), the containers that belong to the Pod can communicate with one another using localhost . When containers in a Pod communicate with entities outside the Pod , they must coordinate how they use the shared network resources (such as ports). Within a Pod, containers share an IP address and port space, and can find each other via localhost . The containers in a Pod can also communicate with each other using standard inter-process communications like SystemV semaphores or POSIX shared memory. Containers in different Pods have distinct IP addresses and can not communicate by OS-level IPC without special configuration. Containers that want to interact with a container running in a different Pod can use IP networking to communicate.

Containers within the Pod see the system hostname as being the same as the configured name for the Pod. There's more about this in the networking section.

Privileged mode for containers

Any container in a pod can run in privileged mode to use operating system administrative capabilities that would otherwise be inaccessible. This is available for both Windows and Linux.

Linux privileged containers

In Linux, any container in a Pod can enable privileged mode using the privileged (Linux) flag on the security context of the container spec. This is useful for containers that want to use operating system administrative capabilities such as manipulating the network stack or accessing hardware devices.

Windows privileged containers

In Windows, you can create a Windows HostProcess pod by setting the windowsOptions.hostProcess flag on the security context of the pod spec. All containers in these pods must run as Windows HostProcess containers. HostProcess pods run directly on the host and can also be used to perform administrative tasks as is done with Linux privileged containers.

Static Pods

Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Whereas most Pods are managed by the control plane (for example, a Deployment ), for static Pods, the kubelet directly supervises each static Pod (and restarts it if it fails).

Static Pods are always bound to one Kubelet on a specific node. The main use for static Pods is to run a self-hosted control plane: in other words, using the kubelet to supervise the individual control plane components .

The kubelet automatically tries to create a mirror Pod on the Kubernetes API server for each static Pod. This means that the Pods running on a node are visible on the API server, but cannot be controlled from there. See the guide Create static Pods for more information.

Pods with multiple containers

Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service. The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster. The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.

  • Pods that run multiple containers that need to work together . A Pod can encapsulate an application composed of multiple co-located containers that are tightly coupled and need to share resources. These co-located containers form a single cohesive unit of service—for example, one container serving data stored in a shared volume to the public, while a separate sidecar container refreshes or updates those files. The Pod wraps these containers, storage resources, and an ephemeral network identity together as a single unit.

For example, you might have a container that acts as a web server for files in a shared volume, and a separate sidecar container that updates those files from a remote source, as in the following diagram:

Some Pods have init containers as well as app containers . By default, init containers run and complete before the app containers are started.

You can also have sidecar containers that provide auxiliary services to the main application Pod (for example: a service mesh).

Enabled by default, the SidecarContainers feature gate allows you to specify restartPolicy: Always for init containers. Setting the Always restart policy ensures that the containers where you set it are treated as sidecars that are kept running during the entire lifetime of the Pod. Containers that you explicitly define as sidecar containers start up before the main application Pod and remain running until the Pod is shut down.

Container probes

A probe is a diagnostic performed periodically by the kubelet on a container. To perform a diagnostic, the kubelet can invoke different actions:

  • ExecAction (performed with the help of the container runtime)
  • TCPSocketAction (checked directly by the kubelet)
  • HTTPGetAction (checked directly by the kubelet)

You can read more about probes in the Pod Lifecycle documentation.

What's next

  • Learn about the lifecycle of a Pod .
  • Learn about RuntimeClass and how you can use it to configure different Pods with different container runtime configurations.
  • Read about PodDisruptionBudget and how you can use it to manage application availability during disruptions.
  • Pod is a top-level resource in the Kubernetes REST API. The Pod object definition describes the object in detail.
  • The Distributed System Toolkit: Patterns for Composite Containers explains common layouts for Pods with more than one container.
  • Read about Pod topology spread constraints

To understand the context for why Kubernetes wraps a common Pod API in other resources (such as StatefulSets or Deployments ), you can read about the prior art, including:

  • Tupperware .


Kubernetes Events: Collection and Monitoring in Practice


Hi everyone, I'm An Ruo. A couple of days ago, someone in our group asked how to collect Kubernetes Events and set up monitoring and alerting for them, so I'm taking this opportunity to share the approach we currently use. Treat it as a reference; if you have any questions, reply on the official account to join the group and discuss.

This post only covers displaying events; it does not touch on alerting. I'll share that another time.


The word cloud isn't shown here because it requires a plugin; you can install and configure it yourself.


Apply the YAML file above; I'll skip the details here. If you're unsure how, look it up.

Deploying Elasticsearch

  • Download the Elasticsearch tarball
  • Edit the configuration file config/elasticsearch.yml
  • Create a systemd unit to start Elasticsearch
I'll skip how to reload systemd and start the service here; look it up if you're unsure.
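As a minimal sketch of such a unit file, assuming Elasticsearch is unpacked to /opt/elasticsearch and runs as the elasticsearch user:

    [Unit]
    Description=Elasticsearch
    After=network.target

    [Service]
    Type=simple
    User=elasticsearch
    ExecStart=/opt/elasticsearch/bin/elasticsearch
    LimitNOFILE=65536
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target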

Resetting the Elasticsearch password

When we reset the password for the elastic user and confirm at the prompt, a new password is generated; here it is: l5tL-0v74o15RlMzVkY
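For reference, Elasticsearch 8 ships with a reset tool; run it from the install directory and confirm when prompted:

    bin/elasticsearch-reset-password -u elastic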
  • Download the Grafana tarball
I simply started it with nohup here; of course, you can also deploy it on Kubernetes, in Docker, and so on.
  • Connect Grafana to Elasticsearch


The CA certificate here is the one from Elasticsearch, located at config/certs/http_ca.crt, and the Password is the Elasticsearch password we reset earlier.


I won't explain this in much detail here; if anything is unclear, ask in the group.
  • Import the dashboard


This article was originally shared on the 云原生运维圈 WeChat official account.

