Bash: Iterating over lines in a variable

How does one properly iterate over lines in bash either in a variable, or from the output of a command? Simply setting the IFS variable to a new line works for the output of a command but not when processing a variable that contains new lines.

For example
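The original snippet is not preserved here; a sketch consistent with the discussion below (the variable holds literal "\n" sequences, and file.txt is assumed to contain one word per line) would be:

#!/bin/bash
# illustrative reconstruction, not the exact original script
line="one\ntwo\nthree"

echo "Echoing the variable:"
echo -e $line

echo "For loop over the variable:"
IFS=$'\n'
for item in $line
do
    echo $item
done

echo "For loop over the output of cat:"
for item in $(cat file.txt)
do
    echo $item
done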

This gives the output:
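With the sketch above and a file.txt containing the lines one, two and three, the output would be roughly:

Echoing the variable:
one
two
three
For loop over the variable:
one\ntwo\nthree
For loop over the output of cat:
one
two
three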

As you can see, echoing the variable or iterating over the cat command prints each of the lines one by one correctly. However, the first for loop prints all the items on a single line. Any ideas?


  • Just a comment for all answers: I had to do $(echo "$line" | sed -e 's/^[[:space:]]*//') in order to trim the newline character. –  servermanfail Sep 4, 2019 at 10:48

5 Answers

With bash, if you want to embed newlines in a string, enclose the string with $'' :
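For example (the values are arbitrary):

list=$'one\ntwo\nthree'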

And if you have such a string already in a variable, you can read it line-by-line with:
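A minimal sketch of that loop (the bracket printing is just to make each line visible):

while IFS= read -r line; do
    echo "[$line]"
done <<< "$list"
# prints [one], [two], [three], each on its own line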

@wheeler makes a good point about <<< adding a trailing newline.

Suppose the variable ends with a newline
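list2=$'one\ntwo\nthree\n'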

Then the while loop outputs
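With the bracket-printing loop above fed from <<< "$list2", the result would be:

[one]
[two]
[three]
[]

The here-string adds its own newline after the variable's trailing newline, so an empty final line appears.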

To get around that, use a redirection from a process substitution instead of a here-string
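For instance:

while IFS= read -r line; do
    echo "[$line]"
done < <(printf '%s' "$list2")
# prints [one], [two], [three] -- no extra empty line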

But now this "fails" for strings without a trailing newline
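while IFS= read -r line; do
    echo "[$line]"
done < <(printf '%s' "$list")
# prints only [one] and [two]; the final "three" has no newline after it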

The read documentation says

The exit status is zero, unless end-of-file is encountered

and because the input does not end with a newline, EOF is encountered before read can get a whole line. read exits non-zero and the while loop completes.

The characters are consumed into the variable though.

So, the absolutely proper way to loop over the lines of a string is:
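Presumably something along these lines (the "read succeeded or the variable is non-empty" idiom):

while IFS= read -r line || [[ -n $line ]]; do
    echo "[$line]"
done < <(printf '%s' "$list")
# prints [one], [two], [three], whether or not the input ends with a newline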

This outputs the expected result for both $list and $list2.


  • 42 Just a note that the done <<< "$list" is crucial –  Jason Axelson Sep 26, 2012 at 7:53
  • 32 The reason done <<< "$list" is crucial is because that will pass "$list" as the input to read –  wisbucky Mar 11, 2015 at 21:00
  • 31 I need to stress this: DOUBLE-QUOTING $list is very crucial. –  André Chalella Jun 10, 2015 at 7:44
  • 11 echo "$list" | while ... could appear more clear than ... done <<< "$line" –  kyb May 14, 2018 at 10:07
  • 23 There are downsides to that approach as the while loop will run in a subshell: any variables assigned in the loop will not persist. –  glenn jackman May 14, 2018 at 14:13

You can use while + read :
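For instance (a sketch; the loop body is illustrative):

echo "$list" | while IFS= read -r line; do
    echo "line: $line"
done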

Btw. the -e option to echo is non-standard. Use printf instead, if you want portability.


  • 24 Note that if you use this syntax, variables assigned inside the loop won't stick after the loop. Oddly enough, the <<< version suggested by glenn jackman does work with variable assignment. –  Sparhawk Oct 3, 2013 at 4:51
  • 12 @Sparhawk Yes, that's because the pipe starts a subshell executing the while part. The <<< version does not (in new bash versions, at least). –  maxelost Oct 21, 2013 at 13:53

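Another approach, apparently the one discussed in the comments below, is to set IFS to a newline and use a plain for loop, roughly:

IFS=$'\n'          # split on newlines only
for item in $list
do
    echo "$item"
done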

  • 2 This outputs all items, not lines. –  Tom Aug 13, 2015 at 15:05
  • 12 @TomaszPosłuszny No, this outputs 3 (count is 3) lines on my machine. Note the setting of the IFS. You could also use IFS=$'\n' for the same effect. –  jmiserez May 10, 2016 at 12:33
  • This and @jmiserez proposal both work beautifully, thank you! –  Sebastian B. Jul 16, 2020 at 14:34
  • This is the best option if your text already has a trailing newline. –  wheeler Apr 12, 2023 at 7:22

Here's a funny way of doing your for loop:
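Presumably something like this, where the replacement text in the parameter expansion is a literal newline:

for item in ${line//\\n/
}
do
    echo "$item"
done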

A little more sensible/readable would be:
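Perhaps along these lines, with the newline stored in a variable first:

cr='
'
for item in ${line//\\n/$cr}
do
    echo "$item"
done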

But that's all too complex, you only need a space in there:
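That is, roughly:

for item in ${line//\\n/ }
do
    echo "$item"
done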

Your $line variable doesn't contain newlines. It contains instances of \ followed by n . You can see that clearly with:
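For instance:

echo -n "$line" | od -c
# od shows a literal backslash and the letter n -- two characters, not a newline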

The substitution is replacing those with spaces, which is enough for it to work in for loops:
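Roughly:

for item in ${line//\\n/ }; do echo "Item: $item"; done
# Item: one
# Item: two
# Item: three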


  • Interesting, can you explain what is going on here? It looks like you are replacing \n with a new line... What is the difference between the original string and the new one? –  Alex Spurling May 16, 2011 at 13:15
  • @Alex: updated my answer - with a simpler version too :-) –  Mat May 16, 2011 at 13:24
  • 1 I tried this out and it doesn't appear to work if you have spaces in your input. In the example above we have "one\ntwo\nthree" and this works but it fails if we have "entry one\nentry two\nentry three" as it also adds a new line for the space too. –  John Rocha Nov 8, 2012 at 18:15
  • 4 Just a note that instead of defining cr you can use $'\n' . –  devios1 Dec 21, 2013 at 19:12
  • 1 This is the most correct answer. The other answers will get into the loop even if variable is not set or empty . –  Marinos An Apr 23, 2020 at 10:45

You can also first convert the variable into an array, then iterate over this.
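One way to do that (a sketch, not necessarily this answer's exact code) is readarray, which does not touch IFS:

readarray -t lines <<< "$list"   # bash 4+; mapfile is a synonym
for line in "${lines[@]}"
do
    echo "$line"
done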

This is mainly useful if you do not want to mess with IFS and also have issues with the read command, which can happen if a script called inside the loop empties your read buffer before returning, as happened to me.


Clean way to write complex multi-line string to a variable

I need to write some complex xml to a variable inside a bash script. The xml needs to be readable inside the bash script as this is where the xml fragment will live, it's not being read from another file or source.

So my question is this if I have a long string which I want to be human readable inside my bash script what is the best way to go about it?

Ideally I want:

  • to not have to escape any of the characters
  • have it break across multiple lines making it human readable
  • keep its indentation

Can this be done with EOF or something, could anyone give me an example?


  • I'm willing to bet that you're just going to dump that data into a stream again. Why store it in a variable when you could make things more complex and use streams? –  Zenexer Sep 26, 2013 at 22:57
  • see this too: Multi-line string with extra space (preserved indentation) –  Yibo Yang Jun 10, 2017 at 16:51

5 Answers

This will put your text into your variable without needing to escape the quotes. It will also handle unbalanced quotes (apostrophes, i.e. ' ). Putting quotes around the sentinel (EOF) prevents the text from undergoing parameter expansion. The -d'' causes it to read multiple lines (ignore newlines). read is a Bash built-in so it doesn't require calling an external command such as cat .
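A sketch of the construct being described (the XML content is just an illustration):

read -r -d '' VAR <<'EOF'
<note author="Raphael's shop">
  <body>He said "hello" and left.</body>
</note>
EOF
echo "$VAR"
# note: read returns non-zero here because it hits end of input, which matters under set -e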


  • 6 cat is an external command. Not using it saves doing that. Plus, some have the philosophy that if you're using cat with fewer than two arguments "Ur doin' it wrong" (which is distinct from "useless use of cat "). –  Dennis Williamson Oct 9, 2009 at 0:03
  • 13 and never ever indent second EOF.... (multiple table to head bangs involved) –  IljaBek May 18, 2012 at 19:40
  • 12 I tried to use the above statement while set -e . It seems read always returns non-zero. You can trick this behaviour by using ! read -d ....... –  krissi Nov 8, 2012 at 10:56
  • 3 @DennisWilliamson: "Proper" error handling in shell is prohibitively tedious. set -e is imperfect and lays the occasional trap, but makes scripts far more reliable. –  Andrew Nov 19, 2012 at 22:42
  • 13 And if you are using this multi-line String variable to write to a file, put the variable around "QUOTES" like echo "${String}" > /tmp/multiline_file.txt or echo "${String}" | tee /tmp/multiline_file.txt . Took me more than an hour to find that. –  Aditya Apr 20, 2014 at 16:16

You were almost there. Either you use cat for the assembly of your string or you quote the whole string (in which case you'd have to escape the quotes inside your string):
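Roughly, the two variants look like this (the content is illustrative):

VAR1=$(cat <<EOF
<xml>
  <tag>first line</tag>
  <tag>second line</tag>
</xml>
EOF
)

VAR2="<xml>
  <tag>a \"quoted\" value</tag>
</xml>"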


  • 1 Unfortunately, the apostrophe in "Raphael's" makes the first one not work. –  Dennis Williamson Oct 8, 2009 at 12:37
  • Both assignments work for me eventually. The single quote in VAR1 should not be a problem (at least not for bash). Maybe you have been misled by the syntax highlighting? –  joschi Oct 8, 2009 at 21:13
  • 2 It works in a script, but not at a Bash prompt. Sorry for not being clearer. –  Dennis Williamson Oct 9, 2009 at 6:08
  • 2 It is better to quote EOF as 'EOF' or "EOF" , otherwise shell variables will be parsed. –  Stanislav German-Evtushenko May 6, 2018 at 13:43
  • 1 Be sure to echo "$VAR1" with quotes to inspect your work or newlines will not print. –  rjurney Sep 24, 2020 at 3:04

This should work fine within a Bourne shell environment:
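A sketch of that approach, using backticks around cat and a here document (the content is illustrative):

VAR=`cat <<EOF
first line, with ${HOME} expanded
second line
EOF
`
echo "$VAR"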


  • 3 +1 this solution allow variable substitution like ${foo} –  Offirmo Sep 27, 2012 at 16:54
  • Upside: sh-compatible. Downside: backticks are deprecated/discouraged in bash. Now if I had to choose between sh and bash... –  Zenexer Sep 26, 2013 at 22:55
  • 2 since when are backticks deprecated/discouraged? just curious –  Alexander Mills Apr 21, 2018 at 1:50

Yet another way to do the same...

I like to use variables and the special <<- operator, which drops leading tabs from each line and so permits script indentation:
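A simplified sketch of that approach (not the exact original code; the line contents are illustrative):

mapfile -t Pattern <<-EOF
line one
line two
line three
EOF
# <<- strips leading tabs, so the body above could be tab-indented to match the script

joined=$(IFS=";"; printf '%s' "${Pattern[*]}")
echo "$joined"                       # line one;line two;line three

while IFS=";" read -r -a fields; do  # IFS is modified for read only
    echo "${#fields[@]} fields"
done <<< "$joined"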

Warning: there must be no spaces before the closing eof, only tabs.

  • mapfile reads the entire here-document into an array.
  • The syntax "${Pattern[*]}" joins this array into a single string.
  • I use IFS=";" because there is no ; in the required strings.
  • The syntax while IFS=";" read file ... prevents IFS from being modified for the rest of the script; only read uses the modified IFS .


  • Note that mapfile requires Bash 4 or higher. And the syntax "${Pattern[*]}" casts the array into a string when in quotes (as shown in the example code). –  Dennis Williamson Jul 22, 2013 at 23:54
  • Yes, bash 4 was very new when this question was asked. –  F. Hauri - Give Up GitHub Jul 23, 2013 at 2:31

There are too many corner cases in many of the other answers.

To be absolutely sure there are no issues with spaces, tabs, IFS etc., a better approach is to use the "heredoc" construct, but encode the contents of the heredoc using uuencode as explained here:

https://stackoverflow.com/questions/6896025/#11379627 .



Bash Multi-Line Strings: Methods and Best Practices


Are you finding it challenging to work with multiline strings in Bash? You’re not alone. Many developers find themselves puzzled when it comes to handling multiline strings in Bash. But think of Bash as a skilled poet, capable of handling verses that span multiple lines, making it a versatile and handy tool for various tasks.

This guide will walk you through the ins and outs of working with multiline strings in Bash , from the basics to more advanced techniques. We’ll cover everything from creating multiline strings using Here Documents and Here Strings, to dealing with common issues and even troubleshooting them.

So, let’s dive in and start mastering multiline strings in Bash!

TL;DR: How Do I Create a Multiline String in Bash?

In Bash, you can create a multiline string using a Here Document or Here String . These are special types of redirections that allow you to create multiline strings easily and efficiently.

Here’s a simple example using a Here Document:

In this example, we use the cat command in conjunction with a Here Document ( cat << EOF ). This is just a basic way to create multiline strings in Bash. But there’s much more to learn about handling multiline strings in Bash, including advanced techniques and alternative approaches. Continue reading for more detailed explanations and advanced usage scenarios.

Table of Contents

  • Bash Multiline String Basics: Here Documents and Here Strings
  • Advanced Bash Multiline String Techniques
  • Exploring Alternative Methods for Bash Multiline Strings
  • Troubleshooting Bash Multiline Strings: Common Issues and Solutions
  • Understanding Bash’s String Handling Capabilities
  • Beyond Multiline Strings: Expanding Your Bash Knowledge
  • Wrapping Up: Mastering Bash Multiline Strings

Bash provides two powerful features for creating multiline strings: Here Documents and Here Strings. Let’s dive into each of these and understand how they work, their advantages, and potential pitfalls.

Here Documents

A Here Document is a type of redirection that allows you to create multiline strings. It uses a form of I/O redirection to feed a command list to an interactive program or command, such as ftp, cat, or the ex text editor.

Here’s an example of how to create a multiline string using a Here Document:

In this example, EndOfText is a delimiter that marks the beginning and end of the text block. The cat command then processes this text block as its standard input.

Here Strings

Here Strings are another way to create multiline strings in Bash. They are similar to Here Documents but are used with a single line of input. They are useful when you want to read multiline input into a variable or pass multiline input to a command.

Here’s an example of a Here String:

In this example, we use the read command with the -d option (which changes the delimiter from a newline to a null character) and a Here String to read multiple lines of input into a variable. The echo command then prints the multiline string.

While Here Documents and Here Strings are powerful, they have some potential pitfalls. For example, if you don’t quote the delimiter ( EndOfText in our examples), Bash will perform parameter expansion, command substitution, and arithmetic expansion. To avoid this, you can quote the delimiter as shown:
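For instance:

cat << 'EndOfText'
My home directory is $HOME
Today is $(date)
EndOfText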

In this example, $HOME and $(date) are not expanded and are treated as literal strings. This can be useful when you want to preserve special characters in your multiline strings.

As you become more comfortable with Bash, you’ll find that multiline strings can do much more than just store text. They can also incorporate variables and command substitutions, making them a powerful tool for generating complex strings and scripts.

Incorporating Variables

You can easily include variables in your multiline strings. When the shell encounters a dollar-sign ($), it interprets the following text as a variable name and replaces it with the variable’s value.

Here’s an example of how to incorporate a variable in a multiline string:

In this example, we first define a variable name with the value Alice . We then use this variable in our multiline string. The $name in the multiline string is replaced by Alice , which is the value of the name variable.

Command Substitution

Command substitution allows you to execute a command and substitute its output in place. This is done by wrapping the command with $( ) .

Here’s an example of command substitution in a multiline string:

In this example, the $(date) command is replaced by the current date and time. This demonstrates how you can dynamically generate parts of your multiline string.

Using variables and command substitutions in your multiline strings allows you to create more complex and dynamic strings. However, remember to be mindful of potential issues with whitespace and special characters, which we’ll cover in the next section.

While Here Documents and Here Strings are commonly used to handle multiline strings in Bash, there are alternative methods that can be just as effective, depending on your specific needs. Let’s explore some of these alternatives, their advantages, and potential drawbacks.

Using Printf Function

The printf function is a powerful tool that can also be used to create multiline strings. It provides more control over the formatting of the output than the echo command.

Here’s an example of how to create a multiline string using printf :

In this example, we use printf and newline characters ( \n ) to create a multiline string. The advantage of printf is its flexibility in controlling the output format, but it can be more complex to use than Here Documents or Here Strings.

Combining Echo Statements

Another alternative to create multiline strings in Bash is by combining multiple echo statements. This can be a simpler method when dealing with shorter strings.

Here’s an example of how to create a multiline string by combining echo statements:

In this example, we use the -e option with echo to interpret backslash escapes, allowing us to include newline characters ( \n ) in the string. This method is simple and straightforward, but can become unwieldy with larger strings.

Choosing the right method to handle multiline strings in Bash depends on your specific requirements and the complexity of the strings. Here Documents and Here Strings are powerful and flexible, but printf and combined echo statements can also be effective in certain scenarios. It’s important to understand the advantages and potential pitfalls of each method to make an informed choice.

Working with multiline strings in Bash can sometimes lead to unexpected results. Issues can arise due to whitespace, special characters, or variable substitution. Let’s discuss these common problems and how to solve them.

Dealing with Whitespace

In Bash, leading whitespace in a Here Document or Here String can cause unexpected results. Bash does not strip leading tabs or spaces, which means they become part of the string.

Consider the following example:
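For instance:

cat << EOF
    This line keeps its four leading spaces.
        This one keeps eight.
EOF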

In this example, the leading spaces are preserved in the output. If this is not the desired behavior, you can use the - option with the Here Document to strip leading tabs (but not spaces):
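For example (the indented line below must start with a real tab character, not spaces):

cat <<- EOF
	This line is indented with a tab, which <<- removes.
EOF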

Handling Special Characters

Special characters, such as $ , \ , and ` , can cause issues in multiline strings because Bash treats them as part of its syntax. To prevent Bash from interpreting these characters, you can quote the Here Document delimiter:
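For instance:

cat << 'EOF'
The variable $USER and the command `date` are not expanded here.
A backslash \ is also kept as-is.
EOF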

In this example, the special characters are preserved in the output because we quoted the EOF delimiter.

Variable Substitution Pitfalls

When using variables in multiline strings, remember that Bash performs variable substitution when it sees a dollar-sign ($). If you want to include a literal dollar-sign, you need to escape it with a backslash ( \$ ).

Here’s an example:

In this example, we escape the dollar-sign that we want to include in the output, and Bash substitutes ${price} with its value.

Understanding these common issues and how to handle them can help you avoid unexpected results when working with multiline strings in Bash.

Bash, or the Bourne Again Shell, is a powerful command-line interface used in many Linux distributions. It’s renowned for its ability to handle strings, or sequences of characters, which are a fundamental data type in Bash scripting.

Bash and Multiline Strings: The Basics

A basic string in Bash is typically defined using double or single quotes, like so:
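For example:

greeting_one='Hello, world'
greeting_two="Hello, world"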

However, when it comes to multiline strings, the process is a bit more involved. Multiline strings are strings that span across multiple lines. Bash does not inherently support multiline strings like some other programming languages, but it provides features such as Here Documents and Here Strings to handle multiline strings effectively.

Here Documents: A Closer Look

A Here Document is a type of redirection that allows the creation of multiline strings. It uses a form of I/O redirection to feed a command list to an interactive program or command. In essence, it’s a way of embedding a block of input text within a script.

Here’s a different example of a Here Document:

In this example, we use the read command with the -r option (to prevent backslash escapes from being interpreted) and the -d option (to change the delimiter from a newline to a null character) to read the Here Document into a variable.

Here Strings: A Deeper Dive

Here Strings, on the other hand, are a type of redirection that allow you to pass a word or string directly to a command’s standard input, similar to a Here Document, but they are used with a single line of input.

Here’s a unique example of a Here String:

In this example, we use a Here String to pass a string to the cut command, which then extracts the first word of the string.

Understanding the fundamentals of Bash’s string handling capabilities, and the concepts underlying Here Documents and Here Strings, is crucial when working with multiline strings in Bash.

Multiline strings in Bash are not just a standalone concept. They play a significant role in larger Bash scripts and projects. Understanding and mastering them can significantly enhance your Bash scripting skills.

Relevance in Larger Bash Scripts

In more extensive Bash projects, multiline strings can be utilized to create complex scripts, store large amounts of data, or even generate other scripts. They can be used to create SQL queries, generate HTML or XML files, or create configuration files, among other uses. The ability to include variables and command substitutions within these strings adds another layer of dynamic functionality.

Exploring Related Concepts

Once you’re comfortable with multiline strings, there are related concepts in Bash that are worth exploring. String manipulation, for example, allows you to modify strings in various ways, such as extracting substrings, replacing substrings, or changing the case of strings.

Regular expressions, another powerful feature in Bash, can be used to match and manipulate strings based on patterns. They can be particularly useful when working with large amounts of text or when you need to parse complex data.

Further Resources for Bash Mastery

To deepen your understanding of Bash and its features, here are some resources that provide in-depth tutorials and guides:

  • GNU Bash Manual – This is the official manual for Bash, providing a comprehensive overview of the shell’s features.
  • Advanced Bash-Scripting Guide – This guide covers Bash scripting in detail, including advanced topics like string manipulation and regular expressions.
  • Bash Academy – Bash Academy offers interactive lessons on various Bash concepts, including scripting, string handling, and regular expressions.

By understanding multiline strings and related concepts in Bash, you’ll be well-equipped to write efficient, robust Bash scripts for a wide variety of tasks.

In this comprehensive guide, we’ve delved into the intricacies of handling multiline strings in Bash, a fundamental yet powerful aspect of Bash scripting.

We began with the basics, learning how to create multiline strings using Here Documents and Here Strings. We then ventured into more advanced territory, exploring how to incorporate variables and command substitutions within these strings, thereby adding a layer of dynamic functionality to our scripts.

We also tackled common challenges you might face when working with multiline strings in Bash, such as issues with whitespace, special characters, and variable substitution. For each challenge, we provided solutions and workarounds to help you overcome these hurdles and continue scripting efficiently.

Moreover, we didn’t limit ourselves to just Here Documents and Here Strings. We also looked at alternative approaches for handling multiline strings, such as the printf function and combining echo statements. Here’s a quick comparison of these methods:

Whether you’re just starting out with Bash or you’re looking to level up your scripting skills, we hope this guide has given you a deeper understanding of Bash multiline strings and how to handle them effectively.

With the knowledge you’ve gained, you’re now well-equipped to tackle any challenges that come your way when dealing with multiline strings in Bash. Happy scripting!


How to Assign Variable in Bash Script? [8 Practical Cases]

Mohammad Shah Miran

Variables allow you to store and manipulate data within your script, making it easier to organize and access information. In Bash scripts , variable assignment follows a straightforward syntax, but it offers a range of options and features that can enhance the flexibility and functionality of your scripts. In this article, I will discuss the ways to assign variables in a Bash script . As Bash offers a range of methods for assigning variables, I will thoroughly delve into each one.

Key Takeaways

  • Getting Familiar With Different Types Of Variables.
  • Learning how to assign single or multiple bash variables.
  • Understanding the arithmetic operation in Bash Scripting.


Local vs Global Variable Assignment

In programming, variables are used to store and manipulate data. There are two main types of variable assignments: local and global .

A. Local Variable Assignment

In programming, a local variable assignment refers to the process of declaring and assigning a variable within a specific scope, such as a function or a block of code. Local variables are temporary and have limited visibility, meaning they can only be accessed within the scope in which they are defined.

Here are some key characteristics of local variable assignment:

  • Local variables in bash are created within a function or a block of code.
  • By default, variables declared within a function are local to that function.
  • They are not accessible outside the function or block in which they are defined.
  • Local variables typically store temporary or intermediate values within a specific context.

Here is an example in Bash script.
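A sketch of such a script (the names and values are illustrative):

#!/bin/bash

my_function() {
    local x=10
    echo "Inside the function: x = $x"
}

my_function
echo "Outside the function: x = $x"   # x is not defined here, so nothing is printed for $x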

In this example, the variable x is a local variable within the scope of the my_function function. It can be accessed and used within the function, but outside the function it is not defined, so referencing it there yields nothing.

B. Global Variable Assignment

In Bash scripting, global variables are accessible throughout the entire script, regardless of the scope in which they are declared. Global variables can be accessed and modified from any script part, including within functions.

Here are some key characteristics of global variable assignment:

  • Global variables in bash are declared outside of any function or block.
  • They are accessible throughout the entire script.
  • Any variable declared outside of a function or block is considered global by default.
  • Global variables can be accessed and modified from any script part, including within functions.

Here is an example in Bash script given in the context of a global variable .
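A sketch (the names and values are illustrative):

#!/bin/bash

greeting="Hello from a global variable"

show_greeting() {
    echo "Inside the function: $greeting"
}

show_greeting
echo "Outside the function: $greeting"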

It’s important to note that in bash, variable assignment without the local keyword within a function will create a global variable even if there is a global variable with the same name. To ensure local scope within a function , using the local keyword explicitly is recommended.

Additionally, it’s worth mentioning that subprocesses spawned by a bash script, such as commands executed with $(…) or backticks , create their own separate environments, and variables assigned within those subprocesses are not accessible in the parent script .

8 Different Cases to Assign Variables in Bash Script

In Bash scripting , there are various cases or scenarios in which you may need to assign variables. Here are some common cases I have described below. These examples cover various scenarios, such as assigning single variables , multiple variable assignments in a single line , extracting values from command-line arguments , obtaining input from the user , utilizing environmental variables, etc . So let’s start.

Case 01: Single Variable Assignment

To assign a value to a single variable in Bash script , you can use the following syntax:
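That is:

variable=value    # no spaces around the = sign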

However, replace the variable with the name of the variable you want to assign, and the value with the desired value you want to assign to that variable.

To assign a single value to a variable in Bash , proceed in the following manner:

Steps to Follow >

❶ At first, launch an Ubuntu Terminal .

❷ Write the following command to open a file in Nano :
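For example:

nano single_variable.sh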

  • nano : Opens a file in the Nano text editor.
  • single_variable.sh : Name of the file.

❸ Copy the script mentioned below:
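Based on the description that follows, the script looks something like this (the echo wording is illustrative):

#!/bin/bash
var_int=23
echo "The value of var_int is: $var_int"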

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. Next, the variable var_int is assigned the integer value 23, which is then displayed with the echo command .

❹ Press CTRL+O and ENTER to save the file; CTRL+X to exit.

❺ Use the following command to make the file executable :
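That is:

chmod u+x single_variable.sh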

  • chmod : changes the permissions of files and directories.
  • u+x : Here, u refers to the “ user ” or the owner of the file and +x specifies the permission being added, in this case, the “ execute ” permission. When u+x is added to the file permissions, it grants the user ( owner ) permission to execute ( run ) the file.
  • single_variable.sh : File name to which the permissions are being applied.

❻ Run the script by using the following command:
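That is:

./single_variable.sh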


Case 02: Multi-Variable Assignment in a Single Line of a Bash Script

Multi-variable assignment in a single line is a concise and efficient way of assigning values to multiple variables simultaneously in Bash scripts . This method helps reduce the number of lines of code and can enhance readability in certain scenarios. Here’s an example of a multi-variable assignment in a single line.

You can follow the steps of Case 01 , to save & make the script executable.

Script (multi_variable.sh) >
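A sketch consistent with the explanation below (the echo wording is illustrative):

#!/bin/bash

x=1 y=2 z=3
echo "x = $x"
echo "y = $y"
echo "z = $z"

var1="Hello"; var2="World"
echo "$var1 $var2"

read var3 var4 <<< "Hello LinuxSimply"
echo "var3 = $var3, var4 = $var4"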

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. Then, three variables x , y , and z are assigned values 1 , 2 , and 3 , respectively. The echo statements are used to print the values of each variable. Following that, two variables var1 and var2 are assigned values “ Hello ” and “ World “, respectively. The semicolon (;) separates the assignment statements within a single line. The echo statement prints the values of both variables with a space in between. Lastly, the read command is used to assign values to var3 and var4. The <<< syntax is known as a here-string , which allows the string “ Hello LinuxSimply ” to be passed as input to the read command . The input string is split into words, and the first word is assigned to var3 , while the remaining words are assigned to var4 . Finally, the echo statement displays the values of both variables.


Case 03: Assigning Variables From Command-Line Arguments

In Bash , you can assign variables from command-line arguments using special variables known as positional parameters . Here is a sample code demonstrated below.

Script (var_as_argument.sh) >
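Something along these lines, matching the explanation below:

#!/bin/bash

name=$1
age=$2
city=$3

echo "Name: $name"
echo "Age: $age"
echo "City: $city"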

The provided Bash script starts with the shebang ( #!/bin/bash ) to use the Bash shell. The script assigns the first command-line argument to the variable name , the second argument to age , and the third argument to city , using the positional parameters $1 , $2 , and $3 , which represent the values passed as command-line arguments when executing the script. Then, the script uses echo statements to display the values of the assigned variables.


Case 04: Assign Value From Environmental Bash Variable

In Bash , you can also assign the value of an Environmental Variable to a variable. To accomplish the task you can use the following syntax :
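That is:

variable_name=$ENV_VARIABLE_NAME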

However, make sure to replace ENV_VARIABLE_NAME with the actual name of the environment variable you want to assign. Here is a sample code that has been provided for your perusal.

Script (env_variable.sh) >
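Roughly:

#!/bin/bash

username=$USER
echo "Current user: $username"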

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. The value of the USER environment variable, which represents the current username, is assigned to the Bash variable username. Then the output is displayed using the echo command.


Case 05: Default Value Assignment

In Bash , you can assign default values to variables using the ${variable:-default} syntax . Note that this default value assignment does not change the original value of the variable; it only assigns a default value if the variable is empty or unset . Here’s a script to learn how it works.

Script (default_variable.sh) >
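A sketch matching the explanation below (the default value Softeko and the string LinuxSimply come from that explanation):

#!/bin/bash

variable=""
echo "${variable:-Softeko}"      # variable is empty, so the default Softeko is printed

variable="LinuxSimply"
echo "${variable:-Softeko}"      # variable is set, so LinuxSimply is printed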

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. The next line stores a null string in the variable . The ${ variable:-Softeko } expression checks if the variable is unset or empty; as the variable is empty, it substitutes the default value ( Softeko in this case) for the expansion. In the second portion of the code, the LinuxSimply string is stored in the variable. Then the assigned variable is printed using the echo command .


Case 06: Assigning Value by Taking Input From the User

In Bash , you can assign a value from the user by using the read command. Remember we have used this command in Case 2 . Apart from assigning value in a single line, the read command allows you to prompt the user for input and assign it to a variable. Here’s an example given below.

Script (user_variable.sh) >
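Something like:

#!/bin/bash

read -p "Enter your name: " name
echo "Welcome, $name!"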

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. The read command is used to read the input from the user and assign it to the name variable . The user is prompted with the message “ Enter your name: “, and the value they enter is stored in the name variable. Finally, the script displays a message using the entered value.


Case 07: Using the “let” Command for Variable Assignment

In Bash , the let command can be used for arithmetic operations and variable assignment. When using let for variable assignment, it allows you to perform arithmetic operations and assign the result to a variable .

Script (let_var_assign.sh) >
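A sketch matching the explanation below (the numbers are illustrative):

#!/bin/bash

let "num = 10 + 5"
echo "The value of num is: $num"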

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. Then the let command performs an arithmetic operation and assigns the result to the variable num. Later, the echo command displays the value stored in the num variable.


Case 08: Assigning Shell Command Output to a Variable

Lastly, you can assign the output of a shell command to a variable using command substitution . There are two common ways to achieve this: using backticks ( ` ) or using the $() syntax. Note that the $() syntax is generally preferable over backticks as it provides better readability and nesting capability, and it avoids some issues with quoting. Here's an example that I have provided using both cases.

Script (shell_command_var.sh) >
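Roughly, per the explanation below:

#!/bin/bash

output1=`ls -l`
output2=$(date)

echo "$output1"
echo "$output2"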

The first line #!/bin/bash specifies the interpreter to use ( /bin/bash ) for executing the script. The output of the ls -l command (which lists the contents of the current directory in long format) is assigned to the variable output1 using backticks . Similarly, the output of the date command (which displays the current date and time) is assigned to the variable output2 using the $() syntax . The echo command displays both output1 and output2 .


Assignment on Assigning Variables in Bash Scripts

Finally, I have provided two assignments based on today’s discussion. Don’t forget to check this out.

  • Write a Bash script that assigns two numbers to variables and prints their Difference, Quotient, and Remainder.
  • Write a Bash script to find and display the name of the largest file using variables in a specified directory.

In conclusion, assigning variables in Bash is a crucial aspect of scripting, allowing developers to store and manipulate data efficiently. This article explored several cases of assigning variables in Bash, including single-variable assignments , multi-variable assignments in a single line , assigning values from environmental variables, and so on. Each case has its advantages and limitations, and the choice depends on the specific needs of the script or program. However, if you have any questions regarding this article, feel free to comment below. I will get back to you soon. Thank You!


How to Work With Variables in Bash

Want to take your Linux command-line skills to the next level? Here's everything you need to know to start working with variables.


Quick Links

  • Variables 101
  • Examples of Bash Variables
  • How to Use Bash Variables in Scripts
  • How to Use Command Line Parameters in Scripts
  • Working With Special Variables
  • Environment Variables
  • How to Export Variables
  • How to Quote Variables
  • Echo Is Your Friend

Key Takeaways

  • Variables are named symbols representing strings or numeric values. They are treated as their value when used in commands and expressions.
  • Variable names should be descriptive and cannot start with a number or contain spaces. They can start with an underscore and can have alphanumeric characters.
  • Variables can be used to store and reference values. The value of a variable can be changed, and it can be referenced by using the dollar sign $ before the variable name.

Variables are vital if you want to write scripts and understand what that code you're about to cut and paste from the web will do to your Linux computer. We'll get you started!

Variables are named symbols that represent either a string or numeric value. When you use them in commands and expressions, they are treated as if you had typed the value they hold instead of the name of the variable.

To create a variable, you just provide a name and value for it. Your variable names should be descriptive and remind you of the value they hold. A variable name cannot start with a number, nor can it contain spaces. It can, however, start with an underscore. Apart from that, you can use any mix of upper- and lowercase alphanumeric characters.

Here, we'll create five variables. The format is to type the name, the equals sign = , and the value. Note there isn't a space before or after the equals sign. Giving a variable a value is often referred to as assigning a value to the variable.

We'll create four string variables and one numeric variable,

my_name=Dave

my_boost=Linux

him=Popeye

his_boost=Spinach

this_year=2019


To see the value held in a variable, use the echo command. You must precede the variable name with a dollar sign $ whenever you reference the value it contains, as shown below:

echo $my_name

echo $my_boost

echo $this_year


Let's use all of our variables at once:

echo "$my_boost is to $my_name as $his_boost is to $him (c) $this_year"


The values of the variables replace their names. You can also change the values of variables. To assign a new value to the variable, my_boost , you just repeat what you did when you assigned its first value, like so:

my_boost=Tequila


If you re-run the previous command, you now get a different result:

echo "$my_boost is to $my_name as $his_boost is to $him (c) $this_year"

So, you can use the same command that references the same variables and get different results if you change the values held in the variables.

We'll talk about quoting variables later. For now, here are some things to remember:

  • A variable in single quotes ' is treated as a literal string, and not as a variable.
  • Variables in quotation marks " are treated as variables.
  • To get the value held in a variable, you have to provide the dollar sign $ .
  • A variable without the dollar sign $ only provides the name of the variable.


You can also create a variable that takes its value from an existing variable or number of variables. The following command defines a new variable called drink_of_the_Year, and assigns it the combined values of the my_boost and this_year variables:

drink_of_the_Year="$my_boost $this_year"

echo $drink_of_the_Year

Scripts would be completely hamstrung without variables. Variables provide the flexibility that makes a script a general, rather than a specific, solution. To illustrate the difference, here's a script that counts the files in the /dev directory.

Type this into a text file, and then save it as fcnt.sh (for "file count"):

#!/bin/bash
folder_to_count=/dev
file_count=$(ls $folder_to_count | wc -l)
echo $file_count files in $folder_to_count

Before you can run the script, you have to make it executable, as shown below:

chmod +x fcnt.sh


Type the following to run the script:

./fcnt.sh

This prints the number of files in the /dev directory. Here's how it works:

  • A variable called folder_to_count is defined, and it's set to hold the string "/dev."
  • Another variable, called file_count , is defined. This variable takes its value from a command substitution. This is the command phrase between the parentheses $( ) . Note there's a dollar sign $ before the first parenthesis. This construct $( ) evaluates the commands within the parentheses, and then returns their final value. In this example, that value is assigned to the file_count variable. As far as the file_count variable is concerned, it's passed a value to hold; it isn't concerned with how the value was obtained.
  • The command evaluated in the command substitution performs an ls file listing on the directory in the folder_to_count variable, which has been set to "/dev." So, the script executes the command "ls /dev."
  • The output from this command is piped into the wc command. The -l (line count) option causes wc to count the number of lines in the output from the ls command. As each file is listed on a separate line, this is the count of files and subdirectories in the "/dev" directory. This value is assigned to the file_count variable.
  • The final line uses echo to output the result.

But this only works for the "/dev" directory. How can we make the script work with any directory? All it takes is one small change.

Many commands, such as ls and wc , take command line parameters. These provide information to the command, so it knows what you want it to do. If you want ls to work on your home directory and also to show hidden files , you can use the following command, where the tilde ~ and the -a (all) option are command line parameters:
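For example:

ls ~ -a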

Our scripts can accept command line parameters. They're referenced as $1 for the first parameter, $2 as the second, and so on, up to $9 for the ninth parameter. (Actually, there's a $0 , as well, but that's reserved to always hold the script.)

You can reference command line parameters in a script just as you would regular variables. Let's modify our script, as shown below, and save it with the new name fcnt2.sh :

#!/bin/bash
folder_to_count=$1
file_count=$(ls $folder_to_count | wc -l)
echo $file_count files in $folder_to_count

This time, the folder_to_count variable is assigned the value of the first command line parameter, $1 .

The rest of the script works exactly as it did before. Rather than a specific solution, your script is now a general one. You can use it on any directory because it's not hardcoded to work only with "/dev."

Here's how you make the script executable:

chmod +x fcnt2.sh


Now, try it with a few directories. You can do "/dev" first to make sure you get the same result as before. Type the following:

./fcnt2.sh /dev

./fcnt2.sh /etc

./fcnt2.sh /bin


You get the same result (207 files) as before for the "/dev" directory. This is encouraging, and you get directory-specific results for each of the other command line parameters.

To shorten the script, you could dispense with the variable, folder_to_count , altogether, and just reference $1 throughout, as follows:

#!/bin/bash
file_count=$(ls $1 | wc -l)
echo $file_count files in $1

We mentioned $0 , which is always set to the filename of the script. This allows you to use the script to do things like print its name out correctly, even if it's renamed. This is useful in logging situations, in which you want to know the name of the process that added an entry.

The following are the other special preset variables:

  • $# : How many command line parameters were passed to the script.
  • $@ : All the command line parameters passed to the script.
  • $? : The exit status of the last process to run.
  • $$ : The Process ID (PID) of the current script.
  • $USER : The username of the user executing the script.
  • $HOSTNAME : The hostname of the computer running the script.
  • $SECONDS : The number of seconds the script has been running for.
  • $RANDOM : Returns a random number.
  • $LINENO : Returns the current line number of the script.

You want to see all of them in one script, don't you? You can! Save the following as a text file called special.sh :

#!/bin/bash
echo "There were $# command line parameters"
echo "They are: $@"
echo "Parameter 1 is: $1"
echo "The script is called: $0"
# any old process so that we can report on the exit status
pwd
echo "pwd returned $?"
echo "This script has Process ID $$"
echo "The script was started by $USER"
echo "It is running on $HOSTNAME"
sleep 3
echo "It has been running for $SECONDS seconds"
echo "Random number: $RANDOM"
echo "This is line number $LINENO of the script"

Type the following to make it executable:

chmod +x special.sh


Now, you can run it with a bunch of different command line parameters, as shown below.

./special.sh alpha bravo charlie 56 2048 Thursday

Bash uses environment variables to define and record the properties of the environment it creates when it launches. These hold information Bash can readily access, such as your username, locale, the number of commands your history file can hold, your default editor, and lots more.

To see the active environment variables in your Bash session, use this command:

env | less

If you scroll through the list, you might find some that would be useful to reference in your scripts.


When a script runs, it's in its own process, and the variables it uses cannot be seen outside of that process. If you want to share a variable with another script that your script launches, you have to export that variable. We'll show you how to do this with two scripts.

First, save the following with the filename script_one.sh :

#!/bin/bash
first_var=alpha
second_var=bravo
# check their values
echo "$0: first_var=$first_var, second_var=$second_var"
export first_var
export second_var
./script_two.sh
# check their values again
echo "$0: first_var=$first_var, second_var=$second_var"

This creates two variables, first_var and second_var , and it assigns some values. It prints these to the terminal window, exports the variables, and calls script_two.sh . When script_two.sh terminates, and process flow returns to this script, it again prints the variables to the terminal window. Then, you can see if they changed.

The second script we'll use is script_two.sh . This is the script that script_one.sh calls. Type the following:

#!/bin/bash
# check their values
echo "$0: first_var=$first_var, second_var=$second_var"
# set new values
first_var=charlie
second_var=delta
# check their values again
echo "$0: first_var=$first_var, second_var=$second_var"

This second script prints the values of the two variables, assigns new values to them, and then prints them again.

To run these scripts, you have to type the following to make them executable:

chmod +x script_one.sh
chmod +x script_two.sh


And now, type the following to launch script_one.sh :

./script_one.sh


This is what the output tells us:

  • script_one.sh prints the values of the variables, which are alpha and bravo.
  • script_two.sh prints the values of the variables (alpha and bravo) as it received them.
  • script_two.sh changes them to charlie and delta.
  • script_one.sh prints the values of the variables, which are still alpha and bravo.

What happens in the second script, stays in the second script. It's like copies of the variables are sent to the second script, but they're discarded when that script exits. The original variables in the first script aren't altered by anything that happens to the copies of them in the second.

You might have noticed that when scripts reference variables, they're in quotation marks " . This allows variables to be referenced correctly, so their values are used when the line is executed in the script.

If the value you assign to a variable includes spaces, they must be in quotation marks when you assign them to the variable. This is because, by default, Bash uses a space as a delimiter.

Here's an example:

site_name=How-To Geek


Bash sees the space before "Geek" as an indication that a new command is starting. It reports that there is no such command, and abandons the line. echo shows us that the site_name variable holds nothing — not even the "How-To" text.

Try that again with quotation marks around the value, as shown below:

site_name="How-To Geek"


This time, it's recognized as a single value and assigned correctly to the site_name variable.

It can take some time to get used to command substitution, quoting variables, and remembering when to include the dollar sign.

Before you hit Enter and execute a line of Bash commands, try it with echo in front of it. This way, you can make sure what's going to happen is what you want. You can also catch any mistakes you might have made in the syntax.

How to Create Multi-Line String in Bash

  • Use here-document to Make Multi-Line String in Bash
  • Use Shell Variable to Make Multi-Line String in Bash
  • Use printf to Make Multi-Line String in Bash
  • Use echo With the -e Option to Make Multi-Line String in Bash
  • Use echo to Make Multi-Line String in Bash

This tutorial demonstrates different ways to print a multi-line string to a file in bash without putting extra space (indentation) by the use of here-document , shell variable, printf , echo , and echo with -e option.

Here-document provides an interactive way to input a multi-line string into a file. The EOF is known as the Here Tag . The Here Tag tells the shell that you will input a multi-line string until the Here Tag appears again, since it acts as a delimiter. The << is used to set the Here Tag . The > is used for output redirection. It redirects the output to the specified file, output.txt , in our case.
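For example (the words written to the file are illustrative):

cat << EOF > output.txt
Hello World
Tux is the Linux mascot
Bash scripting is fun
EOF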

Let us check the content of the output.txt file with the cat command.

From the output, we see that every set of words has its own line, and there are no extra spaces.

Here, we are using a shell variable named greet . We have assigned a multi-line string to greet .

The command below gets the multi-line string in the shell variable, greet , and redirects it to the specified file, multiline.txt , using > .

Check the content of the multiline.txt with the cat command.
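A sketch of this approach; the assignment here uses ANSI-C quoting, and the string content is assumed:

greet=$'Hello from bash\nThis is line two\nThis is line three'

echo "$greet" > multiline.txt
cat multiline.txt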

We can use printf with the new line character and redirect the output to a file using > . The content in the file does not have extra spaces.

Print out the content of multiline.txt with the cat command.
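A sketch of the printf approach (the words themselves are assumed):

printf "Hello from bash\nThis is line two\nThis is line three\n" > multiline.txt
cat multiline.txt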

The following bash script prints the words to multiline.txt without any extra spaces. The -e option enables the interpretation of escape characters in the variable greet .

Print out the content of multiline.txt with the cat command
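A sketch of such a script, with assumed content for greet:

#!/bin/bash
greet="Hello from bash\nThis is line two\nThis is line three"
echo -e "$greet" > multiline.txt
cat multiline.txt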

The script below assigns a multi-line string to a variable named greet . Next, the content of the variable is redirected to the multiline.txt file using > . The quotes around the greet variable preserve the new lines.

Show the content of multiline.txt with the cat command.
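A sketch of this last variant, where the newlines are literal and the quotes preserve them (string content assumed):

#!/bin/bash
greet="Hello from bash
This is line two
This is line three"
echo "$greet" > multiline.txt
cat multiline.txt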

How to write multiple line strings using Bash with variables on Linux?

Setting a variable to a single line in Bash and then printing it to the console is a fairly easy process, but if we want to write multi-line strings using Bash, we have to consider different approaches.

In total there are three approaches that we can make use of; they are described below with examples.

Multiline with \n

We can make use of the \n escape sequence to make sure that whatever string we write has a newline in between the lines. With this approach we can write as many lines as we want; we just need to write the same number of \n 's in the string.

Multiline String

Just make sure to put the entire string in double quotes.
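A minimal sketch, assuming echo -e is used so the \n escapes are interpreted (the text of the string is made up):

str="Hi\nMy name is John\nI am learning Bash"
echo -e "$str"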

Use the Heredoc approach.
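A sketch of the heredoc approach with a variable substituted into the string (the variable and text are assumed):

name="John"
cat << EOF
Hi $name
This is a
multi-line string
EOF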


Dockerfile reference

Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. This page describes the commands you can use in a Dockerfile.

The Dockerfile supports the following instructions:

Here is the format of the Dockerfile:
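In general, a Dockerfile consists of comments and instructions with arguments:

# Comment
INSTRUCTION arguments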

Instructions are not case-sensitive. However, convention is for them to be UPPERCASE to distinguish them from arguments more easily.

Docker runs instructions in a Dockerfile in order. A Dockerfile must begin with a FROM instruction . This may be after parser directives , comments , and globally scoped ARGs . The FROM instruction specifies the parent image from which you are building. FROM may only be preceded by one or more ARG instructions, which declare arguments that are used in FROM lines in the Dockerfile.

BuildKit treats lines that begin with # as a comment, unless the line is a valid parser directive . A # marker anywhere else in a line is treated as an argument. This allows statements like:
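Something like the following, where the # inside the quoted string is passed through as part of the argument:

RUN echo 'we are running some # of cool things'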

Comment lines are removed before the Dockerfile instructions are executed. The comment in the following example is removed before the shell executes the echo command.
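A sketch of such an example:

RUN echo hello \
# this comment line is removed
world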

The following example is equivalent:
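That is, roughly:

RUN echo hello \
world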

Comments don't support line continuation characters.

Note on whitespace: For backward compatibility, leading whitespace before comments ( # ) and instructions (such as RUN ) is ignored, but discouraged. Leading whitespace is not preserved in these cases, and the following two examples are therefore equivalent:

        # this is a comment-line
    RUN echo hello
  RUN echo world

# this is a comment-line
RUN echo hello
RUN echo world

Whitespace in instruction arguments, however, isn't ignored. The following example prints hello world with leading whitespace as specified:

RUN echo "\
    hello\
    world"

Parser directives

Parser directives are optional, and affect the way in which subsequent lines in a Dockerfile are handled. Parser directives don't add layers to the build, and don't show up as build steps. Parser directives are written as a special type of comment in the form # directive=value . A single directive may only be used once.

Once a comment, empty line or builder instruction has been processed, BuildKit no longer looks for parser directives. Instead it treats anything formatted as a parser directive as a comment and doesn't attempt to validate if it might be a parser directive. Therefore, all parser directives must be at the top of a Dockerfile.

Parser directives aren't case-sensitive, but they're lowercase by convention. It's also conventional to include a blank line following any parser directives. Line continuation characters aren't supported in parser directives.

Due to these rules, the following examples are all invalid:

Invalid due to line continuation:

Invalid due to appearing twice:

Treated as a comment because it appears after a builder instruction:

Treated as a comment because it appears after a comment that isn't a parser directive:

The following unknowndirective is treated as a comment because it isn't recognized. The known syntax directive is treated as a comment because it appears after a comment that isn't a parser directive.

Non line-breaking whitespace is permitted in a parser directive. Hence, the following lines are all treated identically:
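For example, variations along these lines (using a placeholder directive name) are all read the same way:

#directive=value
# directive =value
# directive= value
# directive = value
# dIrEcTiVe=value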

The following parser directives are supported:

Use the syntax parser directive to declare the Dockerfile syntax version to use for the build. If unspecified, BuildKit uses a bundled version of the Dockerfile frontend. Declaring a syntax version lets you automatically use the latest Dockerfile version without having to upgrade BuildKit or Docker Engine, or even use a custom Dockerfile implementation.

Most users will want to set this parser directive to docker/dockerfile:1 , which causes BuildKit to pull the latest stable version of the Dockerfile syntax before the build.
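That is:

# syntax=docker/dockerfile:1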

For more information about how the parser directive works, see Custom Dockerfile syntax .

The escape directive sets the character used to escape characters in a Dockerfile. If not specified, the default escape character is \ .
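The directive accepts either the backslash or the backtick:

# escape=\

or

# escape=`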

The escape character is used both to escape characters in a line, and to escape a newline. This allows a Dockerfile instruction to span multiple lines. Note that regardless of whether the escape parser directive is included in a Dockerfile, escaping is not performed in a RUN command, except at the end of a line.

Setting the escape character to ` is especially useful on Windows , where \ is the directory path separator. ` is consistent with Windows PowerShell .

Consider the following example which would fail in a non-obvious way on Windows. The second \ at the end of the second line would be interpreted as an escape for the newline, instead of a target of the escape from the first \ . Similarly, the \ at the end of the third line would, assuming it was actually handled as an instruction, cause it be treated as a line continuation. The result of this Dockerfile is that second and third lines are considered a single instruction:

Results in:

One solution to the above would be to use / as the target of both the COPY instruction, and dir . However, this syntax is, at best, confusing as it is not natural for paths on Windows, and at worst, error prone as not all commands on Windows support / as the path separator.

By adding the escape parser directive, the following Dockerfile succeeds as expected with the use of natural platform semantics for file paths on Windows:

Environment replacement

Environment variables (declared with the ENV statement ) can also be used in certain instructions as variables to be interpreted by the Dockerfile. Escapes are also handled for including variable-like syntax into a statement literally.

Environment variables are notated in the Dockerfile either with $variable_name or ${variable_name} . They are treated equivalently and the brace syntax is typically used to address issues with variable names with no whitespace, like ${foo}_bar .

The ${variable_name} syntax also supports a few of the standard bash modifiers as specified below:

  • ${variable:-word} indicates that if variable is set then the result will be that value. If variable is not set then word will be the result.
  • ${variable:+word} indicates that if variable is set then word will be the result, otherwise the result is the empty string.

The following variable replacements are supported in a pre-release version of Dockerfile syntax, when using the # syntax=docker/dockerfile-upstream:master syntax directive in your Dockerfile:

${variable#pattern} removes the shortest match of pattern from variable , seeking from the start of the string.

${variable##pattern} removes the longest match of pattern from variable , seeking from the start of the string.

${variable%pattern} removes the shortest match of pattern from variable , seeking backwards from the end of the string.

${variable%%pattern} removes the longest match of pattern from variable , seeking backwards from the end of the string.

${variable/pattern/replacement} replace the first occurrence of pattern in variable with replacement

${variable//pattern/replacement} replaces all occurrences of pattern in variable with replacement

In all cases, word can be any string, including additional environment variables.

pattern is a glob pattern where ? matches any single character and * any number of characters (including zero). To match literal ? and * , use a backslash escape: \? and \* .

You can escape whole variable names by adding a \ before the variable: \$foo or \${foo} , for example, will translate to $foo and ${foo} literals respectively.

Example (parsed representation is displayed after the # ):
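A sketch in that spirit (image and paths assumed); the text after each # shows how the line is parsed:

FROM busybox
ENV FOO=/bar
WORKDIR ${FOO}    # WORKDIR /bar
ADD . $FOO        # ADD . /bar
COPY \$FOO /quux  # COPY $FOO /quux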

Environment variables are supported by the following list of instructions in the Dockerfile:

  • ONBUILD (when combined with one of the supported instructions above)

You can also use environment variables with RUN , CMD , and ENTRYPOINT instructions, but in those cases the variable substitution is handled by the command shell, not the builder. Note that instructions using the exec form don't invoke a command shell automatically. See Variable substitution .

Environment variable substitution uses the same value for each variable throughout the entire instruction. Changing the value of a variable only takes effect in subsequent instructions. Consider the following example:
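A sketch consistent with the two bullet points that follow (variable names assumed):

ENV abc=hello
ENV abc=bye def=$abc
ENV ghi=$abc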

  • The value of def becomes hello
  • The value of ghi becomes bye

.dockerignore file

You can use .dockerignore file to exclude files and directories from the build context. For more information, see .dockerignore file .

Shell and exec form

The RUN , CMD , and ENTRYPOINT instructions all have two possible forms:

  • INSTRUCTION ["executable","param1","param2"] (exec form)
  • INSTRUCTION command param1 param2 (shell form)

The exec form makes it possible to avoid shell string munging, and to invoke commands using a specific command shell, or any other executable. It uses a JSON array syntax, where each element in the array is a command, flag, or argument.

The shell form is more relaxed, and emphasizes ease of use, flexibility, and readability. The shell form automatically uses a command shell, whereas the exec form does not.

The exec form is parsed as a JSON array, which means that you must use double-quotes (") around words, not single-quotes (').

The exec form is best used to specify an ENTRYPOINT instruction, combined with CMD for setting default arguments that can be overridden at runtime. For more information, see ENTRYPOINT .

Variable substitution

Using the exec form doesn't automatically invoke a command shell. This means that normal shell processing, such as variable substitution, doesn't happen. For example, RUN [ "echo", "$HOME" ] won't handle variable substitution for $HOME .

If you want shell processing then either use the shell form or execute a shell directly with the exec form, for example: RUN [ "sh", "-c", "echo $HOME" ] . When using the exec form and executing a shell directly, as in the case for the shell form, it's the shell that's doing the environment variable substitution, not the builder.

Backslashes

In exec form, you must escape backslashes. This is particularly relevant on Windows where the backslash is the path separator. The following line would otherwise be treated as shell form due to not being valid JSON, and fail in an unexpected way:
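A sketch, with an assumed Windows path:

RUN ["c:\windows\system32\tasklist.exe"]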

The correct syntax for this example is:
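With the backslashes escaped (same assumed path):

RUN ["c:\\windows\\system32\\tasklist.exe"]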

Unlike the exec form, instructions using the shell form always use a command shell. The shell form doesn't use the JSON array format, instead it's a regular string. The shell form string lets you escape newlines using the escape character (backslash by default) to continue a single instruction onto the next line. This makes it easier to use with longer commands, because it lets you split them up into multiple lines. For example, consider these two lines:
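For instance (the command itself is an assumed example):

RUN source $HOME/.bashrc && \
echo $HOME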

They're equivalent to the following line:
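That is, the same assumed command on a single line:

RUN source $HOME/.bashrc && echo $HOME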

You can also use heredocs with the shell form to break up a command:
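Something along these lines (package name assumed):

RUN <<EOF
apt-get update
apt-get install -y curl
EOF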

For more information about heredocs, see Here-documents .

Use a different shell

You can change the default shell using the SHELL command. For example:
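A sketch:

SHELL ["/bin/bash", "-c"]
RUN echo "hello from bash"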

For more information, see SHELL .

The FROM instruction initializes a new build stage and sets the base image for subsequent instructions. As such, a valid Dockerfile must start with a FROM instruction. The image can be any valid image.

  • ARG is the only instruction that may precede FROM in the Dockerfile. See Understand how ARG and FROM interact .
  • FROM can appear multiple times within a single Dockerfile to create multiple images or use one build stage as a dependency for another. Simply make a note of the last image ID output by the commit before each new FROM instruction. Each FROM instruction clears any state created by previous instructions.
  • Optionally a name can be given to a new build stage by adding AS name to the FROM instruction. The name can be used in subsequent FROM and COPY --from=<name> instructions to refer to the image built in this stage.
  • The tag or digest values are optional. If you omit either of them, the builder assumes a latest tag by default. The builder returns an error if it can't find the tag value.

The optional --platform flag can be used to specify the platform of the image in case FROM references a multi-platform image. For example, linux/amd64 , linux/arm64 , or windows/amd64 . By default, the target platform of the build request is used. Global build arguments can be used in the value of this flag, for example automatic platform ARGs allow you to force a stage to native build platform ( --platform=$BUILDPLATFORM ), and use it to cross-compile to the target platform inside the stage.

Understand how ARG and FROM interact

FROM instructions support variables that are declared by any ARG instructions that occur before the first FROM .

An ARG declared before a FROM is outside of a build stage, so it can't be used in any instruction after a FROM . To use the default value of an ARG declared before the first FROM use an ARG instruction without a value inside of a build stage:
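A sketch of that pattern (image and variable names assumed):

ARG VERSION=latest
FROM busybox:$VERSION
ARG VERSION
RUN echo $VERSION > image_version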

The RUN instruction will execute any commands to create a new layer on top of the current image. The added layer is used in the next step in the Dockerfile.

You can specify RUN instructions using shell or exec forms :

  • RUN ["executable","param1","param2"] (exec form)
  • RUN command param1 param2 (shell form)

The shell form is most commonly used, and lets you more easily break up longer instructions into multiple lines, either using newline escapes , or with heredocs :
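For instance, a sketch of both styles (package names assumed):

RUN apt-get update && \
    apt-get install -y curl git

RUN <<EOF
apt-get update
apt-get install -y curl git
EOF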

Cache invalidation for RUN instructions

The cache for RUN instructions isn't invalidated automatically during the next build. The cache for an instruction like RUN apt-get dist-upgrade -y will be reused during the next build. The cache for RUN instructions can be invalidated by using the --no-cache flag, for example docker build --no-cache .

See the Dockerfile Best Practices guide for more information.

The cache for RUN instructions can be invalidated by ADD and COPY instructions.

RUN --mount

Note Added in docker/dockerfile:1.2

RUN --mount allows you to create filesystem mounts that the build can access. This can be used to:

  • Create bind mount to the host filesystem or other build stages
  • Access build secrets or ssh-agent sockets
  • Use a persistent package management cache to speed up your build

Syntax: --mount=[type=<TYPE>][,option=<value>[,option=<value>]...]

Mount types

RUN --mount=type=bind

This mount type allows binding files or directories to the build container. A bind mount is read-only by default.

RUN --mount=type=cache

This mount type allows the build container to cache directories for compilers and package managers.

Contents of the cache directories persists between builder invocations without invalidating the instruction cache. Cache mounts should only be used for better performance. Your build should work with any contents of the cache directory as another build may overwrite the files or GC may clean it if more storage space is needed.

Example: cache Go packages

Example: cache apt packages.
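A sketch of the apt pattern being described, using the sharing=locked option explained below (package name assumed):

RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && apt-get install -y gcc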

Apt needs exclusive access to its data, so the caches use the option sharing=locked , which will make sure multiple parallel builds using the same cache mount will wait for each other and not access the same cache files at the same time. You could also use sharing=private if you prefer to have each build create another cache directory in this case.

RUN --mount=type=tmpfs

This mount type allows mounting tmpfs in the build container.

RUN --mount=type=secret

This mount type allows the build container to access secure files such as private keys without baking them into the image.

Example: access to S3

RUN --mount=type=ssh

This mount type allows the build container to access SSH keys via SSH agents, with support for passphrases.

Example: access to Gitlab

You can also specify a path to *.pem file on the host directly instead of $SSH_AUTH_SOCK . However, pem files with passphrases are not supported.

RUN --network

Note Added in docker/dockerfile:1.1

RUN --network allows control over which networking environment the command is run in.

Syntax: --network=<TYPE>

Network types

RUN --network=default

Equivalent to not supplying a flag at all, the command is run in the default network for the build.

RUN --network=none

The command is run with no network access ( lo is still available, but is isolated to this process)

Example: isolating external effects
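A sketch of the idea (package and directory names assumed):

RUN --network=none pip install --find-links wheels mypackage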

pip will only be able to install the packages provided in the tarfile, which can be controlled by an earlier build stage.

RUN --network=host

The command is run in the host's network environment (similar to docker build --network=host , but on a per-instruction basis)

Warning The use of --network=host is protected by the network.host entitlement, which needs to be enabled when starting the buildkitd daemon with --allow-insecure-entitlement network.host flag or in buildkitd config , and for a build request with --allow network.host flag .

RUN --security

Note Not yet available in stable syntax, use docker/dockerfile:1-labs version.

RUN --security=insecure

With --security=insecure , the builder runs the command without a sandbox in insecure mode, which allows you to run flows requiring elevated privileges (e.g. containerd). This is equivalent to running docker run --privileged .

Warning In order to access this feature, entitlement security.insecure should be enabled when starting the buildkitd daemon with --allow-insecure-entitlement security.insecure flag or in buildkitd config , and for a build request with --allow security.insecure flag .

Example: check entitlements

RUN --security=sandbox

Default sandbox mode can be activated via --security=sandbox , but that is a no-op.

The CMD instruction sets the command to be executed when running a container from an image.

You can specify CMD instructions using shell or exec forms :

  • CMD ["executable","param1","param2"] (exec form)
  • CMD ["param1","param2"] (exec form, as default parameters to ENTRYPOINT )
  • CMD command param1 param2 (shell form)

There can only be one CMD instruction in a Dockerfile. If you list more than one CMD , only the last one takes effect.

The purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.

If you would like your container to run the same executable every time, then you should consider using ENTRYPOINT in combination with CMD . See ENTRYPOINT . If the user specifies arguments to docker run then they will override the default specified in CMD , but still use the default ENTRYPOINT .

If CMD is used to provide default arguments for the ENTRYPOINT instruction, both the CMD and ENTRYPOINT instructions should be specified in the exec form .

Note Don't confuse RUN with CMD . RUN actually runs a command and commits the result; CMD doesn't execute anything at build time, but specifies the intended command for the image.

The LABEL instruction adds metadata to an image. A LABEL is a key-value pair. To include spaces within a LABEL value, use quotes and backslashes as you would in command-line parsing. A few usage examples:
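A few sketches of typical usage (keys and values assumed):

LABEL "com.example.vendor"="ACME Incorporated"
LABEL com.example.label-with-value="foo"
LABEL version="1.0"
LABEL description="This text illustrates \
that label-values can span multiple lines."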

An image can have more than one label. You can specify multiple labels on a single line. Prior to Docker 1.10, this decreased the size of the final image, but this is no longer the case. You may still choose to specify multiple labels in a single instruction, in one of the following two ways:
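That is, either on a single line or with line continuations (keys and values assumed):

LABEL multi.label1="value1" multi.label2="value2" other="value3"

LABEL multi.label1="value1" \
      multi.label2="value2" \
      other="value3"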

Note Be sure to use double quotes and not single quotes. Particularly when you are using string interpolation (e.g. LABEL example="foo-$ENV_VAR" ), single quotes will take the string as is without unpacking the variable's value.

Labels included in base or parent images (images in the FROM line) are inherited by your image. If a label already exists but with a different value, the most-recently-applied value overrides any previously-set value.

To view an image's labels, use the docker image inspect command. You can use the --format option to show just the labels:
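For example (image name assumed):

docker image inspect --format='{{ json .Config.Labels }}' myimage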

MAINTAINER (deprecated)

The MAINTAINER instruction sets the Author field of the generated images. The LABEL instruction is a much more flexible version of this and you should use it instead, as it enables setting any metadata you require, and can be viewed easily, for example with docker inspect . To set a label corresponding to the MAINTAINER field you could use:

This will then be visible from docker inspect with the other labels.

The EXPOSE instruction informs Docker that the container listens on the specified network ports at runtime. You can specify whether the port listens on TCP or UDP, and the default is TCP if you don't specify a protocol.

The EXPOSE instruction doesn't actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published. To publish the port when running the container, use the -p flag on docker run to publish and map one or more ports, or the -P flag to publish all exposed ports and map them to high-order ports.

By default, EXPOSE assumes TCP. You can also specify UDP:
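For example (port number assumed):

EXPOSE 80/udp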

To expose on both TCP and UDP, include two lines:
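That is, for the same assumed port:

EXPOSE 80/tcp
EXPOSE 80/udp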

In this case, if you use -P with docker run , the port will be exposed once for TCP and once for UDP. Remember that -P uses an ephemeral high-ordered host port on the host, so TCP and UDP don't use the same port.

Regardless of the EXPOSE settings, you can override them at runtime by using the -p flag. For example:
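A sketch (image name assumed):

docker run -p 80:80/tcp -p 80:80/udp my-image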

To set up port redirection on the host system, see using the -P flag . The docker network command supports creating networks for communication among containers without the need to expose or publish specific ports, because the containers connected to the network can communicate with each other over any port. For detailed information, see the overview of this feature .

The ENV instruction sets the environment variable <key> to the value <value> . This value will be in the environment for all subsequent instructions in the build stage and can be replaced inline in many as well. The value will be interpreted for other environment variables, so quote characters will be removed if they are not escaped. Like command line parsing, quotes and backslashes can be used to include spaces within values.

The ENV instruction allows for multiple <key>=<value> ... variables to be set at one time, and the example below will yield the same net results in the final image:
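A sketch of the equivalence being described (names and values assumed):

ENV MY_NAME="John Doe"
ENV MY_DOG=Rex\ The\ Dog
ENV MY_CAT=fluffy

ENV MY_NAME="John Doe" MY_DOG=Rex\ The\ Dog \
    MY_CAT=fluffy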

The environment variables set using ENV will persist when a container is run from the resulting image. You can view the values using docker inspect , and change them using docker run --env <key>=<value> .

A stage inherits any environment variables that were set using ENV by its parent stage or any ancestor. Refer here for more on multi-staged builds.

Environment variable persistence can cause unexpected side effects. For example, setting ENV DEBIAN_FRONTEND=noninteractive changes the behavior of apt-get , and may confuse users of your image.

If an environment variable is only needed during build, and not in the final image, consider setting a value for a single command instead:
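A sketch (package name assumed):

RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y curl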

Or using ARG , which is not persisted in the final image:
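That is, with the same assumed package:

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y curl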

Alternative syntax: The ENV instruction also allows an alternative syntax ENV <key> <value> , omitting the = . For example:

ENV MY_VAR my-value

This syntax does not allow for multiple environment variables to be set in a single ENV instruction, and can be confusing. For example, the following sets a single environment variable ( ONE ) with value "TWO= THREE=world":

ENV ONE TWO= THREE=world

The alternative syntax is supported for backward compatibility, but discouraged for the reasons outlined above, and may be removed in a future release.

ADD has two forms:

The latter form is required for paths containing whitespace.

Note The --chown and --chmod features are only supported on Dockerfiles used to build Linux containers, and doesn't work on Windows containers. Since user and group ownership concepts do not translate between Linux and Windows, the use of /etc/passwd and /etc/group for translating user and group names to IDs restricts this feature to only be viable for Linux OS-based containers.
Note --chmod is supported since Dockerfile 1.3 . Only octal notation is currently supported. Non-octal support is tracked in moby/buildkit#1951 .

The ADD instruction copies new files, directories or remote file URLs from <src> and adds them to the filesystem of the image at the path <dest> .

Multiple <src> resources may be specified but if they are files or directories, their paths are interpreted as relative to the source of the context of the build.

Each <src> may contain wildcards and matching will be done using Go's filepath.Match rules. For example:

To add all files starting with "hom":
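For example (destination directory assumed):

ADD hom* /mydir/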

In the example below, ? is replaced with any single character, e.g., "home.txt".
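That is (same assumed destination):

ADD hom?.txt /mydir/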

The <dest> is an absolute path, or a path relative to WORKDIR , into which the source will be copied inside the destination container.

The example below uses a relative path, and adds "test.txt" to <WORKDIR>/relativeDir/ :
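A sketch:

ADD test.txt relativeDir/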

Whereas this example uses an absolute path, and adds "test.txt" to /absoluteDir/
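That is:

ADD test.txt /absoluteDir/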

When adding files or directories that contain special characters (such as [ and ] ), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern. For example, to add a file named arr[0].txt , use the following:
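A sketch, using a character class to escape the bracket (destination assumed):

ADD arr[[]0].txt /mydir/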

All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the content added. The format of the --chown flag allows for either username and groupname strings or direct integer UID and GID in any combination. Providing a username without groupname or a UID without GID will use the same numeric UID as the GID. If a username or groupname is provided, the container's root filesystem /etc/passwd and /etc/group files will be used to perform the translation from name to integer UID or GID respectively. The following examples show valid definitions for the --chown flag:

If the container root filesystem doesn't contain either /etc/passwd or /etc/group files and either user or group names are used in the --chown flag, the build will fail on the ADD operation. Using numeric IDs requires no lookup and doesn't depend on container root filesystem content.

In the case where <src> is a remote file URL, the destination will have permissions of 600. If the remote file being retrieved has an HTTP Last-Modified header, the timestamp from that header will be used to set the mtime on the destination file. However, like any other file processed during an ADD , mtime isn't included in the determination of whether or not the file has changed and the cache should be updated.

Note If you build by passing a Dockerfile through STDIN ( docker build - < somefile ), there is no build context, so the Dockerfile can only contain a URL based ADD instruction. You can also pass a compressed archive through STDIN: ( docker build - < archive.tar.gz ), the Dockerfile at the root of the archive and the rest of the archive will be used as the context of the build.

If your URL files are protected using authentication, you need to use RUN wget , RUN curl or use another tool from within the container as the ADD instruction doesn't support authentication.

Note The first encountered ADD instruction will invalidate the cache for all following instructions from the Dockerfile if the contents of <src> have changed. This includes invalidating the cache for RUN instructions. See the Dockerfile Best Practices guide – Leverage build cache for more information.

ADD obeys the following rules:

The <src> path must be inside the build context; you can't use ADD ../something /something , because the builder can only access files from the context, and ../something specifies a parent file or directory of the build context root.

If <src> is a URL and <dest> does end with a trailing slash, then the filename is inferred from the URL and the file is downloaded to <dest>/<filename> . For instance, ADD http://example.com/foobar / would create the file /foobar . The URL must have a nontrivial path so that an appropriate filename can be discovered in this case ( http://example.com doesn't work).

If <src> is a directory, the entire contents of the directory are copied, including filesystem metadata.

Note The directory itself isn't copied, only its contents.

If <src> is a local tar archive in a recognized compression format ( identity , gzip , bzip2 or xz ) then it's unpacked as a directory. Resources from remote URLs aren't decompressed. When a directory is copied or unpacked, it has the same behavior as tar -x . The result is the union of:

  • Whatever existed at the destination path and
  • The contents of the source tree, with conflicts resolved in favor of "2." on a file-by-file basis.
Note Whether a file is identified as a recognized compression format or not is done solely based on the contents of the file, not the name of the file. For example, if an empty file happens to end with .tar.gz this isn't recognized as a compressed file and doesn't generate any kind of decompression error message, rather the file will simply be copied to the destination.

If <src> is any other kind of file, it's copied individually along with its metadata. In this case, if <dest> ends with a trailing slash / , it will be considered a directory and the contents of <src> will be written at <dest>/base(<src>) .

If multiple <src> resources are specified, either directly or due to the use of a wildcard, then <dest> must be a directory, and it must end with a slash / .

If <src> is a file, and <dest> doesn't end with a trailing slash, the contents of <src> will be written as filename <dest> .

If <dest> doesn't exist, it's created, along with all missing directories in its path.

Verifying a remote file checksum ADD --checksum=<checksum> <http src> <dest>

The checksum of a remote file can be verified with the --checksum flag:

The --checksum flag only supports HTTP sources currently.

Adding a Git repository ADD <git ref> <dir>

This form allows adding a Git repository to an image directly, without using the git command inside the image:

The --keep-git-dir=true flag adds the .git directory. This flag defaults to false.

Adding a private git repository

To add a private repo via SSH, create a Dockerfile with the following form:

This Dockerfile can be built with docker build --ssh or buildctl build --ssh , e.g.,

See COPY --link .

COPY has two forms:

This latter form is required for paths containing whitespace

The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest> .

Multiple <src> resources may be specified but the paths of files and directories will be interpreted as relative to the source of the context of the build.

When copying files or directories that contain special characters (such as [ and ] ), you need to escape those paths following the Golang rules to prevent them from being treated as a matching pattern. For example, to copy a file named arr[0].txt , use the following;

All new files and directories are created with a UID and GID of 0, unless the optional --chown flag specifies a given username, groupname, or UID/GID combination to request specific ownership of the copied content. The format of the --chown flag allows for either username and groupname strings or direct integer UID and GID in any combination. Providing a username without groupname or a UID without GID will use the same numeric UID as the GID. If a username or groupname is provided, the container's root filesystem /etc/passwd and /etc/group files will be used to perform the translation from name to integer UID or GID respectively. The following examples show valid definitions for the --chown flag:

If the container root filesystem doesn't contain either /etc/passwd or /etc/group files and either user or group names are used in the --chown flag, the build will fail on the COPY operation. Using numeric IDs requires no lookup and does not depend on container root filesystem content.

Note If you build using STDIN ( docker build - < somefile ), there is no build context, so COPY can't be used.

Optionally COPY accepts a flag --from=<name> that can be used to set the source location to a previous build stage (created with FROM .. AS <name> ) that will be used instead of a build context sent by the user. If a build stage with the specified name can't be found, the builder attempts to use an image with the same name instead.

COPY obeys the following rules:

The <src> path must be inside the build context; you can't use COPY ../something /something , because the builder can only access files from the context, and ../something specifies a parent file or directory of the build context root.

Note The first encountered COPY instruction will invalidate the cache for all following instructions from the Dockerfile if the contents of <src> have changed. This includes invalidating the cache for RUN instructions. See the Dockerfile Best Practices guide – Leverage build cache for more information.

COPY --link

Note Added in docker/dockerfile:1.4

Enabling this flag in COPY or ADD commands allows you to copy files with enhanced semantics where your files remain independent on their own layer and don't get invalidated when commands on previous layers are changed.

When --link is used your source files are copied into an empty destination directory. That directory is turned into a layer that is linked on top of your previous state.
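For example (paths assumed):

COPY --link /foo /bar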

This is equivalent to doing two separate builds and merging all the layers of both images together.

Benefits of using --link

Use --link to reuse already built layers in subsequent builds with --cache-from even if the previous layers have changed. This is especially important for multi-stage builds where a COPY --from statement would previously get invalidated if any previous commands in the same stage changed, causing the need to rebuild the intermediate stages again. With --link the layer the previous build generated is reused and merged on top of the new layers. This also means you can easily rebase your images when the base images receive updates, without having to execute the whole build again. In backends that support it, BuildKit can do this rebase action without the need to push or pull any layers between the client and the registry. BuildKit will detect this case and only create new image manifest that contains the new layers and old layers in correct order.

The same behavior where BuildKit can avoid pulling down the base image can also happen when using --link and no other commands that would require access to the files in the base image. In that case BuildKit will only build the layers for the COPY commands and push them to the registry directly on top of the layers of the base image.

Incompatibilities with --link=false

When using --link the COPY/ADD commands are not allowed to read any files from the previous state. This means that if in previous state the destination directory was a path that contained a symlink, COPY/ADD can not follow it. In the final image the destination path created with --link will always be a path containing only directories.

If you don't rely on the behavior of following symlinks in the destination path, using --link is always recommended. The performance of --link is equivalent to or better than the default behavior, and it creates much better conditions for cache reuse.

COPY --parents

Note Available in docker/dockerfile-upstream:master-labs . Will be included in docker/dockerfile:1.6-labs .

The --parents flag preserves parent directories for src entries. This flag defaults to false .

This behavior is analogous to the Linux cp utility's --parents flag.

Note that, without the --parents flag specified, any filename collision will fail the Linux cp operation with an explicit error message ( cp: will not overwrite just-created './x/a.txt' with './y/a.txt' ), whereas BuildKit will silently overwrite the target file at the destination.

While it is possible to preserve the directory structure for COPY instructions consisting of only one src entry, usually it is more beneficial to keep the layer count in the resulting image as low as possible. Therefore, with the --parents flag, BuildKit is capable of packing multiple COPY instructions together, keeping the directory structure intact.

An ENTRYPOINT allows you to configure a container that will run as an executable.

ENTRYPOINT has two possible forms:

The exec form, which is the preferred form:
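That is:

ENTRYPOINT ["executable", "param1", "param2"]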

The shell form:
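Namely:

ENTRYPOINT command param1 param2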

For more information about the different forms, see Shell and exec form .

The following command starts a container from the nginx image with its default content, listening on port 80:
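A sketch of that command:

docker run -i -t --rm -p 80:80 nginx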

Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT , and will override all elements specified using CMD .

This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.

The shell form of ENTRYPOINT prevents any CMD command line arguments from being used. It also starts your ENTRYPOINT as a subcommand of /bin/sh -c , which does not pass signals. This means that the executable will not be the container's PID 1 , and will not receive Unix signals. In this case, your executable doesn't receive a SIGTERM from docker stop <container> .

Only the last ENTRYPOINT instruction in the Dockerfile will have an effect.

Exec form ENTRYPOINT example

You can use the exec form of ENTRYPOINT to set fairly stable default commands and arguments and then use either form of CMD to set additional defaults that are more likely to be changed.
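A minimal sketch of such a Dockerfile (base image assumed):

FROM ubuntu
ENTRYPOINT ["top", "-b"]
CMD ["-c"]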

When you run the container, you can see that top is the only process:

To examine the result further, you can use docker exec :

And you can gracefully request top to shut down using docker stop test .

The following Dockerfile shows using the ENTRYPOINT to run Apache in the foreground (i.e., as PID 1 ):

If you need to write a starter script for a single executable, you can ensure that the final executable receives the Unix signals by using exec and gosu commands:

Lastly, if you need to do some extra cleanup (or communicate with other containers) on shutdown, or are co-ordinating more than one executable, you may need to ensure that the ENTRYPOINT script receives the Unix signals, passes them on, and then does some more work:

If you run this image with docker run -it --rm -p 80:80 --name test apache , you can then examine the container's processes with docker exec , or docker top , and then ask the script to stop Apache:

Note You can override the ENTRYPOINT setting using --entrypoint , but this can only set the binary to exec (no sh -c will be used).

Shell form ENTRYPOINT example

You can specify a plain string for the ENTRYPOINT and it will execute in /bin/sh -c . This form will use shell processing to substitute shell environment variables, and will ignore any CMD or docker run command line arguments. To ensure that docker stop will signal any long running ENTRYPOINT executable correctly, you need to remember to start it with exec :
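A sketch (base image assumed):

FROM ubuntu
ENTRYPOINT exec top -b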

When you run this image, you'll see the single PID 1 process:

Which exits cleanly on docker stop :

If you forget to add exec to the beginning of your ENTRYPOINT :

You can then run it (giving it a name for the next step):

You can see from the output of top that the specified ENTRYPOINT is not PID 1 .

If you then run docker stop test , the container will not exit cleanly - the stop command will be forced to send a SIGKILL after the timeout:

Understand how CMD and ENTRYPOINT interact

Both CMD and ENTRYPOINT instructions define what command gets executed when running a container. There are a few rules that describe their cooperation.

Dockerfile should specify at least one of CMD or ENTRYPOINT commands.

ENTRYPOINT should be defined when using the container as an executable.

CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container.

CMD will be overridden when running the container with alternative arguments.

The table below shows what command is executed for different ENTRYPOINT / CMD combinations:

Note If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.

The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers. The value can be a JSON array, VOLUME ["/var/log/"] , or a plain string with multiple arguments, such as VOLUME /var/log or VOLUME /var/log /var/db . For more information/examples and mounting instructions via the Docker client, refer to Share Directories via Volumes documentation.

The docker run command initializes the newly created volume with any data that exists at the specified location within the base image. For example, consider the following Dockerfile snippet:
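A sketch consistent with the description that follows (base image assumed):

FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol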

This Dockerfile results in an image that causes docker run to create a new mount point at /myvol and copy the greeting file into the newly created volume.

Notes about specifying volumes

Keep the following things in mind about volumes in the Dockerfile.

Volumes on Windows-based containers : When using Windows-based containers, the destination of a volume inside the container must be one of:

  • a non-existing or empty directory
  • a drive other than C:

Changing the volume from within the Dockerfile : If any build steps change the data within the volume after it has been declared, those changes will be discarded.

JSON formatting : The list is parsed as a JSON array. You must enclose words with double quotes ( " ) rather than single quotes ( ' ).

The host directory is declared at container run-time : The host directory (the mountpoint) is, by its nature, host-dependent. This is to preserve image portability, since a given host directory can't be guaranteed to be available on all hosts. For this reason, you can't mount a host directory from within the Dockerfile. The VOLUME instruction does not support specifying a host-dir parameter. You must specify the mountpoint when you create or run the container.

The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use as the default user and group for the remainder of the current stage. The specified user is used for RUN instructions and at runtime, runs the relevant ENTRYPOINT and CMD commands.

Note that when specifying a group for the user, the user will have only the specified group membership. Any other configured group memberships will be ignored.
Warning When the user doesn't have a primary group then the image (or the next instructions) will be run with the root group. On Windows, the user must be created first if it's not a built-in account. This can be done with the net user command called as part of a Dockerfile.

The WORKDIR instruction sets the working directory for any RUN , CMD , ENTRYPOINT , COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn't exist, it will be created even if it's not used in any subsequent Dockerfile instruction.

The WORKDIR instruction can be used multiple times in a Dockerfile. If a relative path is provided, it will be relative to the path of the previous WORKDIR instruction. For example:
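A sketch matching the /a/b/c result described below:

WORKDIR /a
WORKDIR b
WORKDIR c
RUN pwd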

The output of the final pwd command in this Dockerfile would be /a/b/c .

The WORKDIR instruction can resolve environment variables previously set using ENV . You can only use environment variables explicitly set in the Dockerfile. For example:
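A sketch matching the output described below, where DIRPATH is set in the Dockerfile and DIRNAME is not:

ENV DIRPATH=/path
WORKDIR $DIRPATH/$DIRNAME
RUN pwd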

The output of the final pwd command in this Dockerfile would be /path/$DIRNAME

If not specified, the default working directory is / . In practice, if you aren't building a Dockerfile from scratch ( FROM scratch ), the WORKDIR may likely be set by the base image you're using.

Therefore, to avoid unintended operations in unknown directories, it's best practice to set your WORKDIR explicitly.

The ARG instruction defines a variable that users can pass at build-time to the builder with the docker build command using the --build-arg <varname>=<value> flag.

Warning It isn't recommended to use build arguments for passing secrets such as user credentials, API tokens, etc. Build arguments are visible in the docker history command and in max mode provenance attestations, which are attached to the image by default if you use the Buildx GitHub Actions and your GitHub repository is public. Refer to the RUN --mount=type=secret section to learn about secure ways to use secrets when building images.

If you specify a build argument that wasn't defined in the Dockerfile, the build outputs a warning.

A Dockerfile may include one or more ARG instructions. For example, the following is a valid Dockerfile:

Default values

An ARG instruction can optionally include a default value:
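For instance (names and values assumed):

FROM busybox
ARG user1=someuser
ARG buildno=1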

If an ARG instruction has a default value and if there is no value passed at build-time, the builder uses the default.

An ARG variable definition comes into effect from the line on which it is defined in the Dockerfile not from the argument's use on the command-line or elsewhere. For example, consider this Dockerfile:
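A sketch consistent with the line numbers referenced below (image and variable names assumed):

FROM busybox
USER ${username:-some_user}
ARG username
USER $username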

A user builds this file by calling:
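Something like:

docker build --build-arg username=what_user .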

The USER at line 2 evaluates to some_user as the username variable is defined on the subsequent line 3. The USER at line 4 evaluates to what_user , as the username argument is defined and the what_user value was passed on the command line. Prior to its definition by an ARG instruction, any use of a variable results in an empty string.

An ARG instruction goes out of scope at the end of the build stage where it was defined. To use an argument in multiple stages, each stage must include the ARG instruction.

Using ARG variables

You can use an ARG or an ENV instruction to specify variables that are available to the RUN instruction. Environment variables defined using the ENV instruction always override an ARG instruction of the same name. Consider this Dockerfile with an ENV and ARG instruction.
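A sketch consistent with the discussion below, where the ENV pins the value to v1.0.0:

FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER=v1.0.0
RUN echo $CONT_IMG_VER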

Then, assume this image is built with this command:
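That is, roughly:

docker build --build-arg CONT_IMG_VER=v2.0.1 .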

In this case, the RUN instruction uses v1.0.0 instead of the ARG setting passed by the user: v2.0.1 . This behavior is similar to a shell script where a locally scoped variable overrides the variables passed as arguments or inherited from the environment, from its point of definition.

Using the example above but a different ENV specification you can create more useful interactions between ARG and ENV instructions:
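A sketch of the modified ENV line being described, which falls back to a default when no build argument is passed:

FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER=${CONT_IMG_VER:-v1.0.0}
RUN echo $CONT_IMG_VER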

Unlike an ARG instruction, ENV values are always persisted in the built image. Consider a docker build without the --build-arg flag:

Using this Dockerfile example, CONT_IMG_VER is still persisted in the image but its value would be v1.0.0 as it is the default set in line 3 by the ENV instruction.

The variable expansion technique in this example allows you to pass arguments from the command line and persist them in the final image by leveraging the ENV instruction. Variable expansion is only supported for a limited set of Dockerfile instructions.

Predefined ARGs

Docker has a set of predefined ARG variables that you can use without a corresponding ARG instruction in the Dockerfile.

  • HTTPS_PROXY
  • https_proxy

To use these, pass them on the command line using the --build-arg flag, for example:
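For example (proxy address assumed):

docker build --build-arg HTTPS_PROXY=https://proxy.example.com .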

By default, these pre-defined variables are excluded from the output of docker history . Excluding them reduces the risk of accidentally leaking sensitive authentication information in an HTTP_PROXY variable.

For example, consider building the following Dockerfile using --build-arg HTTP_PROXY=http://user:pass@proxy.lon.example.com
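A minimal sketch of such a Dockerfile:

FROM ubuntu
RUN echo hello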

In this case, the value of the HTTP_PROXY variable is not available in the docker history and is not cached. If you were to change location, and your proxy server changed to http://user:pass@proxy.sfo.example.com , a subsequent build does not result in a cache miss.

If you need to override this behaviour then you may do so by adding an ARG statement in the Dockerfile as follows:
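That is:

ARG HTTP_PROXY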

When building this Dockerfile, the HTTP_PROXY is preserved in the docker history , and changing its value invalidates the build cache.

Automatic platform ARGs in the global scope

This feature is only available when using the BuildKit backend.

BuildKit supports a predefined set of ARG variables with information on the platform of the node performing the build (build platform) and on the platform of the resulting image (target platform). The target platform can be specified with the --platform flag on docker build .

The following ARG variables are set automatically:

  • TARGETPLATFORM - platform of the build result. Eg linux/amd64 , linux/arm/v7 , windows/amd64 .
  • TARGETOS - OS component of TARGETPLATFORM
  • TARGETARCH - architecture component of TARGETPLATFORM
  • TARGETVARIANT - variant component of TARGETPLATFORM
  • BUILDPLATFORM - platform of the node performing the build.
  • BUILDOS - OS component of BUILDPLATFORM
  • BUILDARCH - architecture component of BUILDPLATFORM
  • BUILDVARIANT - variant component of BUILDPLATFORM

These arguments are defined in the global scope so are not automatically available inside build stages or for your RUN commands. To expose one of these arguments inside the build stage redefine it without value.

For example:
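A sketch (base image assumed):

FROM alpine
ARG TARGETPLATFORM
RUN echo "I'm building for $TARGETPLATFORM"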

BuildKit built-in build args

Example: keep .git dir.

When using a Git context, .git dir is not kept on checkouts. It can be useful to keep it around if you want to retrieve git information during your build:

Impact on build caching

ARG variables are not persisted into the built image as ENV variables are. However, ARG variables do impact the build cache in similar ways. If a Dockerfile defines an ARG variable whose value is different from a previous build, then a "cache miss" occurs upon its first usage, not its definition. In particular, all RUN instructions following an ARG instruction use the ARG variable implicitly (as an environment variable), thus can cause a cache miss. All predefined ARG variables are exempt from caching unless there is a matching ARG statement in the Dockerfile.

For example, consider these two Dockerfiles:
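Sketches of the two Dockerfiles being compared, where line 2 is the ARG and line 3 the RUN:

FROM ubuntu
ARG CONT_IMG_VER
RUN echo $CONT_IMG_VER

FROM ubuntu
ARG CONT_IMG_VER
RUN echo hello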

If you specify --build-arg CONT_IMG_VER=<value> on the command line, in both cases, the specification on line 2 doesn't cause a cache miss; line 3 does cause a cache miss. ARG CONT_IMG_VER causes the RUN line to be identified as the same as running CONT_IMG_VER=<value> echo hello , so if the <value> changes, you get a cache miss.

Consider another example under the same command line:
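A sketch of this variant, where the ENV value references the ARG:

FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER=$CONT_IMG_VER
RUN echo $CONT_IMG_VER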

In this example, the cache miss occurs on line 3. The miss happens because the variable's value in the ENV references the ARG variable and that variable is changed through the command line. In this example, the ENV command causes the image to include the value.

If an ENV instruction overrides an ARG instruction of the same name, like this Dockerfile:
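A sketch consistent with the line numbers referenced below:

FROM ubuntu
ARG CONT_IMG_VER
ENV CONT_IMG_VER=hello
RUN echo $CONT_IMG_VER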

Line 3 doesn't cause a cache miss because the value of CONT_IMG_VER is a constant ( hello ). As a result, the environment variables and values used on the RUN (line 4) don't change between builds.

The ONBUILD instruction adds to the image a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.

Any build instruction can be registered as a trigger.

This is useful if you are building an image which will be used as a base to build other images, for example an application build environment or a daemon which may be customized with user-specific configuration.

For example, if your image is a reusable Python application builder, it will require application source code to be added in a particular directory, and it might require a build script to be called after that. You can't just call ADD and RUN now, because you don't yet have access to the application source code, and it will be different for each application build. You could simply provide application developers with a boilerplate Dockerfile to copy-paste into their application, but that's inefficient, error-prone and difficult to update because it mixes with application-specific code.

The solution is to use ONBUILD to register advance instructions to run later, during the next build stage.

Here's how it works:

  • When it encounters an ONBUILD instruction, the builder adds a trigger to the metadata of the image being built. The instruction doesn't otherwise affect the current build.
  • At the end of the build, a list of all triggers is stored in the image manifest, under the key OnBuild . They can be inspected with the docker inspect command.
  • Later the image may be used as a base for a new build, using the FROM instruction. As part of processing the FROM instruction, the downstream builder looks for ONBUILD triggers, and executes them in the same order they were registered. If any of the triggers fail, the FROM instruction is aborted which in turn causes the build to fail. If all triggers succeed, the FROM instruction completes and the build continues as usual.
  • Triggers are cleared from the final image after being executed. In other words they aren't inherited by "grand-children" builds.

For example you might add something like this:
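A sketch of what the builder image might register, with /app/src and the python-build helper as illustrative placeholders:

```dockerfile
# In the reusable builder image's Dockerfile
ONBUILD ADD . /app/src
ONBUILD RUN /usr/local/bin/python-build --dir /app/src
```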

Warning Chaining ONBUILD instructions using ONBUILD ONBUILD isn't allowed.
Warning The ONBUILD instruction may not trigger FROM or MAINTAINER instructions.

STOPSIGNAL

The STOPSIGNAL instruction sets the system call signal that will be sent to the container to exit. This signal can be a signal name in the format SIG<NAME> , for instance SIGKILL , or an unsigned number that matches a position in the kernel's syscall table, for instance 9 . The default is SIGTERM if not defined.

The image's default stopsignal can be overridden per container, using the --stop-signal flag on docker run and docker create .
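A minimal sketch, using nginx as an illustrative base that shuts down gracefully on SIGQUIT:

```dockerfile
FROM nginx
# Send SIGQUIT instead of the default SIGTERM when the container is stopped
STOPSIGNAL SIGQUIT
```

A per-container override would then look something like docker run --stop-signal=SIGTERM <image> .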

HEALTHCHECK

The HEALTHCHECK instruction has two forms:

  • HEALTHCHECK [OPTIONS] CMD command (check container health by running a command inside the container)
  • HEALTHCHECK NONE (disable any healthcheck inherited from the base image)

The HEALTHCHECK instruction tells Docker how to test a container to check that it's still working. This can detect cases such as a web server stuck in an infinite loop and unable to handle new connections, even though the server process is still running.

When a container has a healthcheck specified, it has a health status in addition to its normal status. This status is initially starting . Whenever a health check passes, it becomes healthy (whatever state it was previously in). After a certain number of consecutive failures, it becomes unhealthy .

The options that can appear before CMD are:

  • --interval=DURATION (default: 30s )
  • --timeout=DURATION (default: 30s )
  • --start-period=DURATION (default: 0s )
  • --start-interval=DURATION (default: 5s )
  • --retries=N (default: 3 )

The health check will first run interval seconds after the container is started, and then again interval seconds after each previous check completes.

If a single run of the check takes longer than timeout seconds then the check is considered to have failed.

It takes retries consecutive failures of the health check for the container to be considered unhealthy .

The start period ( --start-period ) provides initialization time for containers that need time to bootstrap. Probe failures during that period are not counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.

The start interval ( --start-interval ) is the time between health checks during the start period. This option requires Docker Engine version 25.0 or later.

There can only be one HEALTHCHECK instruction in a Dockerfile. If you list more than one then only the last HEALTHCHECK will take effect.

The command after the CMD keyword can be either a shell command (e.g. HEALTHCHECK CMD /bin/check-running ) or an exec array (as with other Dockerfile commands; see e.g. ENTRYPOINT for details).

The command's exit status indicates the health status of the container. The possible values are:

  • 0: success - the container is healthy and ready for use
  • 1: unhealthy - the container isn't working correctly
  • 2: reserved - don't use this exit code

For example, to check every five minutes or so that a web-server is able to serve the site's main page within three seconds:
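One way this could be written, assuming curl is available in the image:

```dockerfile
HEALTHCHECK --interval=5m --timeout=3s \
  CMD curl -f http://localhost/ || exit 1
```

The --retries , --start-period , and --start-interval options described above can be combined in the same instruction if needed.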

To help debug failing probes, any output text (UTF-8 encoded) that the command writes on stdout or stderr will be stored in the health status and can be queried with docker inspect . Such output should be kept short (only the first 4096 bytes are stored currently).

When the health status of a container changes, a health_status event is generated with the new status.

SHELL

The SHELL instruction allows the default shell used for the shell form of commands to be overridden. The default shell on Linux is ["/bin/sh", "-c"] , and on Windows is ["cmd", "/S", "/C"] . The SHELL instruction must be written in JSON form in a Dockerfile.

The SHELL instruction is particularly useful on Windows where there are two commonly used and quite different native shells: cmd and powershell , as well as alternate shells available including sh .

The SHELL instruction can appear multiple times. Each SHELL instruction overrides all previous SHELL instructions, and affects all subsequent instructions. For example:
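A sketch on Windows, switching between cmd and PowerShell (base image tag is illustrative):

```dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2022

# Executed as: cmd /S /C echo default
RUN echo default

# Executed as: powershell -command Write-Host hello
SHELL ["powershell", "-command"]
RUN Write-Host hello

# Executed as: cmd /S /C echo hello
SHELL ["cmd", "/S", "/C"]
RUN echo hello
```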

The following instructions can be affected by the SHELL instruction when the shell form of them is used in a Dockerfile: RUN , CMD and ENTRYPOINT .

The following example is a common pattern found on Windows which can be streamlined by using the SHELL instruction:
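For instance, with Execute-MyCmdlet standing in for any cmdlet or script:

```dockerfile
RUN powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
```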

The command invoked by the builder will be:
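Approximately, assuming the Execute-MyCmdlet placeholder above:

```
cmd /S /C powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
```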

This is inefficient for two reasons. First, there is an unnecessary cmd.exe command processor (aka shell) being invoked. Second, each RUN instruction in the shell form requires an extra powershell -command prefixing the command.

To make this more efficient, one of two mechanisms can be employed. One is to use the JSON form of the RUN command such as:
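A sketch of the same step in exec (JSON) form:

```dockerfile
RUN ["powershell", "-command", "Execute-MyCmdlet", "-param1 \"c:\\foo.txt\""]
```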

While the JSON form is unambiguous and does not use the unnecessary cmd.exe , it does require more verbosity through double-quoting and escaping. The alternate mechanism is to use the SHELL instruction and the shell form, making a more natural syntax for Windows users, especially when combined with the escape parser directive:
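A sketch of the same step using SHELL together with the escape parser directive ( Execute-MyCmdlet remains a placeholder):

```dockerfile
# escape=`

FROM mcr.microsoft.com/windows/servercore:ltsc2022
SHELL ["powershell", "-command"]
RUN Execute-MyCmdlet -param1 "c:\foo.txt"
```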

Resulting in the builder invoking powershell -command Execute-MyCmdlet -param1 "c:\foo.txt" directly, without the extra cmd /S /C processor.

The SHELL instruction could also be used to modify the way in which a shell operates. For example, using SHELL cmd /S /C /V:ON|OFF on Windows, delayed environment variable expansion semantics could be modified.

The SHELL instruction can also be used on Linux should an alternate shell be required such as zsh , csh , tcsh and others.
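A minimal sketch on Linux, switching the default shell to bash (already present in the ubuntu base image):

```dockerfile
FROM ubuntu
SHELL ["/bin/bash", "-c"]
# This echo now runs under bash rather than /bin/sh
RUN echo "running under bash ${BASH_VERSION}"
```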

Here-Documents

Here-documents allow redirection of subsequent Dockerfile lines to the input of RUN or COPY commands. If such a command contains a here-document, the Dockerfile considers the following lines, up to the line containing only the here-doc delimiter, to be part of the same command.

Example: Running a multi-line script
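A sketch of a multi-line script handed to bash (package names are illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian
RUN <<EOT bash
  set -ex
  apt-get update
  apt-get install -y vim
EOT
```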

If the command only contains a here-document, its contents are evaluated with the default shell.
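For instance, with no interpreter named, the lines below are fed to the default shell:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN <<EOT
  mkdir -p foo/bar
  echo created > foo/bar/status
EOT
```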

Alternatively, a shebang header can be used to define an interpreter.
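A sketch using a Python shebang (assuming a Python base image):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3
RUN <<EOT
#!/usr/bin/env python3
print("hello world")
EOT
```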

More complex examples may use multiple here-documents.
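For example, two here-documents writing two separate files in a single RUN instruction:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
RUN <<FILE1 cat > file1 && <<FILE2 cat > file2
I am
first
FILE1
I am
second
FILE2
```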

Example: Creating inline files

With COPY instructions, you can replace the source parameter with a here-doc indicator to write the contents of the here-document directly to a file. The following example creates a greeting.txt file containing hello world using a COPY instruction.
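A minimal sketch of that greeting.txt example:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
COPY <<EOF greeting.txt
hello world
EOF
```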

Regular here-doc variable expansion and tab stripping rules apply. The following example shows a small Dockerfile that creates a hello.sh script file using a COPY instruction with a here-document.
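A sketch of such a Dockerfile, using the ARG FOO=bar referenced in the following paragraphs:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
ARG FOO=bar
COPY <<EOT /script.sh
echo "hello ${FOO}"
EOT
ENTRYPOINT ash /script.sh
```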

In this case, the script file prints "hello bar", because the variable is expanded when the COPY instruction is executed.

If instead you were to quote any part of the here-document word EOT , the variable would not be expanded at build-time.
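For instance, the same Dockerfile with the delimiter quoted, so ${FOO} is written into the script verbatim:

```dockerfile
# syntax=docker/dockerfile:1
FROM alpine
ARG FOO=bar
COPY <<"EOT" /script.sh
echo "hello ${FOO}"
EOT
ENTRYPOINT ash /script.sh
```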

Note that ARG FOO=bar is redundant here and can be removed, since the variable is only interpreted at runtime, when the script is invoked:
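A sketch of what running such an image might show (the image tag is illustrative; FOO is empty at runtime unless provided):

```console
$ docker run heredoc-example
hello

$ docker run -e FOO=abc heredoc-example
hello abc
```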

Dockerfile examples

For examples of Dockerfiles, refer to:

  • The "build images" section
  • The "get started" tutorial
  • The language-specific getting started guides
  • The build guide

