
Chapter 13: Last Step of Four-Step Modeling (Trip Assignment Models)

Chapter 13, the last chapter of the book, unpacks the final step of four-step travel demand modeling (FSM): trip assignment. This step determines which paths travelers choose for moving between each pair of zones. It can also yield numerous results, such as traffic volumes in different transportation corridors, the patterns of vehicular movements, total vehicle miles traveled (VMT) and vehicle travel time (VTT) in the network, and zone-to-zone travel costs. Identifying heavily congested links is crucial for transportation planning and engineering practitioners. This chapter begins with fundamental concepts, such as the link cost function. Next, it presents common and useful trip assignment methods with relevant examples. The methods covered in this chapter include all-or-nothing (AON), user equilibrium (UE), system optimum (SO), the feedback loop between distribution and assignment (LDA), incremental increase assignment, capacity-restrained assignment, and stochastic user equilibrium assignment.

Learning Objectives

Student Learning Outcomes

  • Describe the reasons for performing trip assignment in FSM and relate these models’ foundation to the cost-function concept.
  • Compare static and dynamic trip assignment models and infer the appropriateness of each model for different situations.
  • Explain the Wardrop principles and relate them to traffic assignment algorithms.
  • Complete simple network traffic assignment models using static models such as the all-or-nothing and user equilibrium models.
  • Solve modal split analyses manually for small samples using the discrete choice modeling framework and multinomial logit models.

Prep/quiz/assessments

  • Explain what the link performance function is in trip assignment models and how it is related to link capacity.
  • Name a few static and dynamic traffic assignment models and discuss how different their rules or algorithms are.
  • How does stochastic decision-making about route choice affect the transportation level of service, and how is it incorporated into traffic assignment problems?
  • Name one extension of the all-or-nothing assignment model and explain how this extension improves the model results.

13.1 Introduction

In this chapter, we continue the discussion about FSM and elaborate on different methods of traffic assignment, the last step in the FSM model after trip generation, trip distribution, and modal split. The traffic assignment step, also called route assignment or route choice, simulates the choice of route from a set of alternatives between the origin zone and the destination zone (Levinson et al., 2014). After the first three steps, we know the number of trips produced between each pair of zones and what portion of these trips is completed by each transportation mode. As the final step, we are also interested in determining which routes or links within the study area will likely be used. For instance, in a Regional Transportation Plan (RTP), we would want to determine how much shift or diversion in daily traffic occurs if we introduce an additional transit line or extend a highway corridor (Levinson et al., 2014). As in trip distribution, the impedance function plays an important role in travelers’ route choice. Normally, the impedance function is related to travel cost or travel time: the longer the trip or the higher the cost, the larger the impedance along that path (Wang & Hofe, 2008).

The output from this last step of the FSM model can provide modelers with numerous valuable results. Such results give the planner insight into the strengths and weaknesses of various preconceived plans. The results of trip assignment analysis can include:

  • The traffic flows in the transportation system and the pattern of vehicular movements
  • Volume of traffic on network links
  • Travel costs between trip origins and destinations
  • Aggregated network measures, e.g., total vehicle flows, total vehicle miles traveled (VMT), and total vehicle travel time (VTT)
  • Zone-to-zone travel costs (travel time) for a given demand level
  • Obtaining modeled link flows and highlighting congested corridors
  • Analysis of turning movements for future intersection design
  • Finding out which O-D pairs have taken a particular link or path
  • Simulation of the individual’s choice for each pair of origins and destinations (Mathew & Rao, 2006)

13.2 Link Performance Function

One of the most important and fundamental concepts in the traffic assignment process is the link performance function. This function is usually used for estimating travel time, travel cost, and speed on the network based on the relationship between speed and traffic flow. While this function can take different forms, such as linear, polynomial, exponential, and hyperbolic, several equations represent this relationship. One of the most common is the link cost function, which represents generalized travel costs (United States Bureau of Public Roads, 1964). This equation estimates travel time on a free-flow road (travel at the speed limit) and adds a term that increases travel time sharply as the road becomes more congested. Congestion is represented by the road’s volume-to-capacity ratio (Meyer, 2016). In recent years, however, transportation planners have realized that in many cases the delay on a link is caused by delays at intersections, an effect that is not captured by the link cost function. Nevertheless, in the following sections we will use the traditional function. Equation (1) is the most common and general formula for the link performance function.

t=t_0\left[1+\alpha\left(\frac{x}{k}\right)^{\beta}\right] \qquad (1)

  • t and x are the link travel time and vehicle flow, respectively;
  • t_0 is the link free-flow travel time;
  • k is the link capacity;
  • α and β are parameters specific to the link type, calibrated using field data. In the absence of field data, α = 0.15 and β = 4.0 are commonly assumed.

α and β are the coefficients of this formula and can take different values (model parameters); however, most studies and planning practices use the same values. That said, these values can be calibrated locally for the most accurate results. Figure 13.1 shows the relationship between link flow and travel time: travel time remains nearly constant as vehicle volume increases, until the turning point, which indicates that the volume on the link is beginning to exceed its capacity.

This figure shows the relationship between travel time and traffic flow.

Figure 13.1 Link Flow and Travel Time Relationship

The following example shows how the link performance function helps us to determine the travel time according to flow and capacity.

13.2.1 Example 1

Assume the traffic volume on a path between zones i and j is 525 vehicles. The travel time recorded on this path (taken as the free-flow travel time t_0) is 15 minutes. If the capacity of this path is 550 vehicles, calculate the new travel time for the next iteration of the model.

Based on the link performance function, we have:

t=t_0\left[1+\alpha\left(\frac{x}{k}\right)^{\beta}\right]

Plugging the numbers into the formula, the new travel time is:

t=15\left[1+0.15\left(\frac{525}{550}\right)^{4}\right]=16.86
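To make this calculation concrete, here is a minimal Python sketch of the BPR-style link performance function used above; the function name and default parameter values (α = 0.15, β = 4.0) are illustrative choices, not code from the text.

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4.0):
    """Link performance (BPR-style) function: t = t0 * (1 + alpha * (v/c)^beta)."""
    return t0 * (1 + alpha * (volume / capacity) ** beta)

# Example 1: free-flow time 15 min, volume 525 veh, capacity 550 veh
print(round(bpr_travel_time(15, 525, 550), 2))  # -> 16.87 min (Example 1 truncates to 16.86)
```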

13.3 Traffic Assignment Models

Through the rest of this chapter, we discuss different traffic assignment models. In general, traffic assignment is carried out separately for private cars and for transit systems. As specified in previous chapters, the transit impedance function differs from that of the private auto, so the utility-maximization behavior simulated for a driver and for a transit rider should differ as well. For public transit assignment, variables such as fare, stops or transfers, waiting time, and in-vehicle time define the utility (Sheffi, 1985). However, in some cases the two networks are related, as when public buses share highways with cars and congestion also affects the performance of public transit (Rojo, 2020). Typically, private car traffic assignment models the path choice of trip makers using:

  • algorithms like all-or-nothing
  • incremental
  • capacity-restrained
  • user equilibrium
  • system optimum assignment

User equilibrium is based on the assumption that travelers try to minimize their travel costs. In this algorithm, equilibrium occurs when no user is able to reduce their travel time or cost by changing paths. This is one of the most popular algorithms employed for simulation in the U.S. (Meyer, 2016). Moreover, more recent trip assignment models use approaches such as:

  • static user-equilibrium assignment algorithm
  • “multiple-time-period assignment for multiple classes (for example, drive-alone, rideshare, and bike/walk)
  • an iterative feedback loop mechanism between, at a minimum, the network assignment step and the trip distribution step
  • separate specification of facilities like HOV and high-occupancy toll (HOT) lanes
  • independent transit assignment using congested highway travel times to estimate a bus ridership assignment” (Meyer, 2016, p.226).

13.3.1 All-or-nothing Model

In all-or-nothing (AON) assignment, we assume that the impedance of a road or path between each origin and destination is constant and equal to the free-flow level of service, meaning that travel time is not affected by the traffic flow on the path. The logic behind this model is that each traveler simply uses the shortest path from their origin to their destination, and no vehicles are assigned to other paths (Hui, 2014). This method, the simplest of all assignment models, is also called the 0-1 assignment model, and its advantage is its simple procedure and calculation. The assumptions of this method are:

  • Congestion does not affect travel time or cost, meaning that no matter how much traffic is loaded on the route, congestion does not take place.
  • Since the method assigns one route to each OD pair, all travelers going from a particular zone to another particular zone choose the same route (Hui, 2014).

To run the AON model, the following process can be followed:

  • Step 0: Initialization. Use the free-flow travel costs C_a = C_a(0) for each link a on the empty network.
  • Step 1: Path finding. Find the shortest path P for each zonal pair.
  • Step 2: Path flow assignment. Assign both passenger trips (h_p^od) and freight trips (h_f^od), in passenger car equivalents (PCEs), from origin o to destination d to path P.
  • Step 3: Link flow computation. Sum the flows on all paths passing through a link to obtain the total flow on that link.

Example 2 illustrates the above-mentioned process for the AON model.

13.3.2 Example 2

Table 13.1 shows a trip distribution matrix with 4 zones. Using the travel costs between each pair of them shown in Figure 13.2, assign the traffic to the network.

Load the vehicle trips from the trip distribution table shown below using the AON technique. After assigning the traffic, illustrate the links and the traffic volume on each of them.

Table 13.1 Trip Distribution Results

The hypothetical network has the following link travel times: 1–2: 5 min; 1–4: 10 min; 4–2: 4 min; 3–2: 4 min; 3–4: 9 min.

Figure 13.2 Transportation Network

To solve this problem, we need to find the shortest path among all alternatives for each pair of zones. The result of this procedure is 10 routes in total, each of which carries a specific number of trips. For instance, the shortest path between zones 1 and 2 is the direct link with a 5-minute travel time. All other routes, such as 1-4-2 or 1-4-3-2, would carry no travelers going from zone 1 to zone 2. The results are shown in Table 13.2.

Table 13.2 Traffic Volumes for Each Route

As you can see, some of the routes remain unused. This is because, in all-or-nothing assignment, a route with a longer travel time or higher cost is assumed not to be used at all.
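As a rough illustration of the AON procedure (a sketch, not code from the text), the following Python snippet computes shortest paths with Dijkstra's algorithm on the network of Figure 13.2 and loads each origin-destination flow entirely onto its shortest path. The demand values are hypothetical placeholders, since the values of Table 13.1 are not reproduced here.

```python
import heapq
from collections import defaultdict

def dijkstra(graph, origin):
    """Return shortest travel time and predecessor node for every reachable node."""
    dist = {origin: 0.0}
    pred = {}
    heap = [(0.0, origin)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, cost in graph[u]:
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, pred

def all_or_nothing(graph, demand):
    """Assign each O-D flow entirely to its shortest path; return link volumes."""
    volumes = defaultdict(float)
    for origin in {o for o, _ in demand}:
        _, pred = dijkstra(graph, origin)
        for (o, d), trips in demand.items():
            if o != origin:
                continue
            node = d
            while node != o:                      # walk the shortest path back to the origin
                volumes[(pred[node], node)] += trips
                node = pred[node]
    return dict(volumes)

# Link travel times from Figure 13.2 (links assumed bidirectional)
times = {(1, 2): 5, (1, 4): 10, (4, 2): 4, (3, 2): 4, (3, 4): 9}
graph = defaultdict(list)
for (a, b), t in times.items():
    graph[a].append((b, t))
    graph[b].append((a, t))

demand = {(1, 2): 300, (1, 3): 150, (1, 4): 100}  # hypothetical trip table
print(all_or_nothing(graph, demand))              # all flows pile onto the shortest links
```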

13.3.4 User Equilibrium

The next method for traffic assignment is called user equilibrium (UE). The rule or algorithm is adopted from the well-known Wardrop equilibrium (1952) conditions (Correa & Stier-Moses, 2011). In this algorithm, it is assumed that travelers always choose the shortest path, and equilibrium is reached when no traveler is able to decrease their travel impedance by changing paths (Levinson et al., 2014).

As we discussed, the UE method is based on the first principle of Wardrop: “for each origin-destination (OD) pair, with UE, the travel time on all used paths is equal and less than or equal to the travel time that would be experienced by a single vehicle on any unused path” (Jeihani Koohbanani, 2004). The mathematical form of this principle is shown in equation (2):

T_1=T_2 \qquad (2)

For a given OD pair, the UE condition can be expressed in equation (3):

f_k\left(c_k-u\right)=0,\ \forall k \qquad (3)

This condition means that all used paths between an OD pair have the same travel time u, while any unused path (f_k = 0) has a travel time greater than or equal to u. Also, for this model we have the following general assumptions:

  • The users possess all the knowledge needed about different paths.
  • The users have perfect knowledge of the path cost.
  • Travel time on a route changes only through that route’s cost-flow function.
  • Travel time increases as more traffic is loaded onto the network (Mathew & Rao, 2006).

Hence, the UE assignment reduces to an optimization problem, formulated in equation (4):

\text{Minimize } Z=\sum_{a}\int_{0}^{x_a}t_a\left(x\right)dx

subject to \sum_{k}f_k^{rs}=q_{rs}, \quad f_k^{rs}\geq 0, \quad x_a=\sum_{rs}\sum_{k}\delta_{a,k}^{rs}f_k^{rs}, where:

k is the path index; x_a is the equilibrium flow on link a; t_a is the travel time on link a; f_k^{rs} is the flow on path k connecting OD pair r–s; q_{rs} is the trip rate between r and s; and δ_{a,k}^{rs} is an indicator variable equal to 1 if link a belongs to path k and 0 otherwise.

Example 3 shows how the UE method can be applied for the traffic assignment step. This example is a very simple network consisting of two zones with two possible paths between them.

13.3.5 Example 3

The hypothetical network has two possible routes between the two zones, with cost functions t_1 = 5 + 4x_1 and t_2 = 3 + 2x_2^2.

Figure 13.3 A Simple Two-Zone System with Cost Function

In this example, t_1 and t_2 are the travel times on each route in minutes, and x_1 and x_2 are the traffic flows on each route in vehicles per hour.

Using the UE method, assign 4,500 Veh/Hour to the network and calculate travel time on each route after assignment, traffic volume, and system total travel time.

According to the information provided, the total flow (x_1 + x_2) equals 4,500 veh/hr, or 4.5 in thousands of vehicles per hour, the units used in the cost functions.

Step 1: Check whether, if all traffic were assigned to one route, that route would still be the shortest path. If all traffic is assigned to route 1:

t_1(4.5)=5+4(4.5)=23 min

t_2(0)=3 min

If all traffic is assigned to route 2:

t_1(0)=5 min

t_2(4.5)=3+2(4.5)^2=43.5 min

In both cases the empty route would be faster, so the flow must split between the two routes.

Step 2: Apply the Wardrop equilibrium rule: t_1=t_2, that is, 5+4x_1=3+2x_2^2, with x_1=4.5-x_2.

Now the equilibrium equation can be written as: 5+4(4.5-x_2)=3+2x_2^2, which simplifies to 2x_2^2+4x_2-20=0, giving x_2=2.32.

x_1=4.5-x_2=2.18

Now the updated travel times are: t_1=5+4(2.18)=13.72 min and t_2=3+2(2.32)^2=13.76 min, which are equal apart from rounding, as the equilibrium condition requires.

Now the total system travel time is:

Z(x)=x_1t_1(x_1)+x_2t_2(x_2)=2,180 veh/hr (13.72)+2,320 veh/hr (13.76)=29,910+31,923=61,833 veh-min
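As a quick numerical check of this equilibrium (a sketch under the stated cost functions, not part of the original text), the snippet below solves t_1(x_1) = t_2(x_2) with x_1 + x_2 = 4.5 by bisection:

```python
def t1(x1): return 5 + 4 * x1           # route 1 cost (min), flows in 1,000 veh/hr
def t2(x2): return 3 + 2 * x2 ** 2      # route 2 cost (min)

total = 4.5                              # total demand, thousands of vehicles per hour
lo, hi = 0.0, total
for _ in range(60):                      # bisection on x2 for t1(total - x2) = t2(x2)
    mid = (lo + hi) / 2
    if t1(total - mid) > t2(mid):
        lo = mid                         # route 1 still costlier, so equilibrium x2 is larger
    else:
        hi = mid
x2 = (lo + hi) / 2
x1 = total - x2
print(round(x1, 2), round(x2, 2), round(t1(x1), 1), round(t2(x2), 1))
# -> 2.18 2.32 13.7 13.7  (both routes have equal travel time at equilibrium)
```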

13.3.6 System Optimum Assignment

One other traffic assignment model, similar to the previous one, is called system optimum (SO), in which the second Wardrop principle defines the logic of the model. Based on this principle, drivers choose paths cooperatively with one another in order to minimize total system travel time (Mathew & Rao, 2006). Using SO traffic assignment, problems such as optimizing departure time for a single commuting route, minimizing total travel from multiple origins to one destination, or minimizing travel time in stochastic, time-dependent OD flows from several origins to a single destination can be solved (Jeihani Koohbanani, 2004).

The basic mathematical formula for this model that satisfies the principle of the model is shown in equation (5):

\text{minimize } Z=\sum_{a}x_a t_a\left(x_a\right)

In example 4, we will use the same network we described in the UE example in order to compare the results for the two models.

13.3.7 Example 4

In that simple two-zone network, we had:

t_1=5+4x_1 \qquad t_2=3+2x_2^2

Now, based on the principle of the model we have:

Z(x)=x_1t_1(x_1)+x_2t_2(x_2)

Z(x)=x_1(5+4x_1)+x_2(3+2x_2^2)

Z(x)=5x_1+4x_1^2+3x_2+2x_2^3

From flow conservation, we have: x_1+x_2=4.5, so x_1=4.5-x_2.

Z(x)=5(4.5-x_2)+4(4.5-x_2)^2+3x_2+2x_2^3

Z(x)=2x_2^3+4x_2^2-38x_2+103.5

In order to minimize this expression, we take its derivative with respect to x_2 and set it equal to zero: dZ/dx_2=6x_2^2+8x_2-38=0, which gives x_2=1.94 and x_1=4.5-1.94=2.56.

Based on our finding, the system travel time would be:

t_1=5+4(2.56)=15.24 min     t_2=3+2(1.94)^2=10.53 min

And the total travel time of the system would be:

Z(x)=x_1t_1(x_1)+x_2t_2(x_2)=2,560 veh/hr (15.24)+1,940 veh/hr (10.53)=39,014+20,428=59,442 veh-min, which, as expected, is lower than the user-equilibrium total of about 61,833 veh-min.
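Under the same assumptions, the system-optimum split can be verified by minimizing Z over the single free variable x_2 (an illustrative grid-search sketch, not from the text):

```python
def t1(x1): return 5 + 4 * x1             # route cost functions from Examples 3 and 4
def t2(x2): return 3 + 2 * x2 ** 2

total = 4.5                                # demand in thousands of vehicles per hour

def z(x2):
    """Total system travel time (thousand veh-min) for a given split."""
    x1 = total - x2
    return x1 * t1(x1) + x2 * t2(x2)

best_x2 = min((total * i / 10000 for i in range(10001)), key=z)
best_x1 = total - best_x2
print(round(best_x1, 2), round(best_x2, 2), round(z(best_x2), 1))
# -> 2.56 1.94 59.4  (about 59,400 veh-min, below the UE total of ~61,800)
```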

13.3.8 Feedback Loop Model (Combined Traffic Assignment and Trip Distribution)

In the feedback loop model, an interaction between the trip distribution and route choice steps is defined, with several iterations. The essence of this model is that if a route between an origin and destination gets too congested, the traveler may change their destination, for instance choosing among several shopping malls available in a region. In other words, in a real-world situation, travelers usually make decisions about their travel characteristics simultaneously (Qasim, 2012).

The chart below shows how the combination of these two steps can take place:

This photo shows the feedback loop in FSM.

Figure 13.4 The Feedback Loop between the Second and the Fourth Step

The objective function for this model, shown in equation (6), ensures convergence of the combined model:

\min \sum_a\int_0^{p_a+f_a}C_a(x)dx+\frac{1}{\zeta}\sum_o\sum_d T^{od}\left(\ln T^{od}-K\right) \qquad (6)

where C_a(x) is the link cost function, as before,

p_a is the total personal trip flow on link a,

f_a is the total freight trip flow on link a,

T^od is the total flow from node o to node d,

p^od is the personal trip flow from node o to node d,

f^od is the freight trip flow from node o to node d,

ζ is a parameter estimated from empirical data, and

K is a parameter that depends on the type of gravity model used to calculate T^od. Evans (1976) proved that K equals 1 for distribution using a doubly constrained gravity model and equals 1 plus the attractiveness term for distribution using a singly constrained model. Florian et al. (1975) ignored K for distribution using a doubly constrained gravity model because it is a constant.

13.3.9 Incremental Increase model

Another traffic assignment model we elaborate on here is the incremental increase model. In this model, which is based on the logic of the AON model, a multi-step process is designed. In each step or level, a fraction of the total traffic volume is assigned, and travel time is calculated based on the allocated traffic volume. Through this incremental addition of traffic, the travel time of each route in step n is the travel time updated from step n-1 (Rojo, 2020).

The steps for the incremental increase traffic assignment model are:

  • Finding the shortest path between each pair of O-Ds
  • Assigning a portion of the trips according to the matrix (usually 40, 30, 20 and 10 percent to the shortest path)
  • Updating the travel time after each iteration (each incremental increase)
  • Continuing until all trips are assigned
  • Summing the results

Example 5 illustrates this method’s implementation.

13.3.10 Example 5

A hypothetical network connects two zones with three possible routes between them. Perform an incremental increase traffic assignment to assign 200 trips between the two zones with increments of 30%, 30%, 20%, and 20%. (Each route has a capacity of 50 trips.)

The hypothetical network has three possible routes between the two zones, with free-flow travel times of 6 minutes, 7 minutes, and 12 minutes.

Figure 13.5 A Two-Zone Network with Three Possible Routes

Step 1 (first iteration): Using the AON method, we assign the flow to the network using the link performance function below:

t=t_0\left[1+\alpha\left(\frac{x}{k}\right)^{\beta}\right]

Since the first route has the shortest travel time, the first 30% of the trips (60 trips) is assigned to route 1. The updated travel time for this path would be:

t=6\left[1+0.15\left(\frac{60}{50}\right)^{4}\right]=7.86

The remaining routes stay empty, so their travel times are unchanged.

Step 2 (second iteration): Now the second route has the shortest travel time, so the next 30% of the trips (60 trips) is assigned to it, and its new travel time would be:

t=7\left[1+0.15\left(\frac{60}{50}\right)^{4}\right]=9.17

Step 3 (third iteration): In the third step, the next 20% of the trips (40 trips) is assigned to the shortest path, which in this case is the first route again. The updated travel time for this route is:

t=7.86\left[1+0.15\left(\frac{40}{50}\right)^{4}\right]=8.34

Step 4 (fourth iteration): In the last iteration, the remaining 20% of the trips (40 trips) is assigned to the first route, and its travel time becomes:

t=8.34\left[1+0.15\left(\frac{40}{50}\right)^{4}\right]=8.85

Finally, route 1 carries a total of 140 trips with an 8.85-minute travel time, route 2 carries 60 trips with a 9.17-minute travel time, and route 3 is never used.
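The whole procedure of this example can be traced with a short sketch (illustrative only; it follows the chapter's convention of updating the loaded route's time from its previous value using the newly added volume):

```python
def bpr_update(t_prev, added_volume, capacity, alpha=0.15, beta=4.0):
    """Update a route's travel time after loading an additional volume."""
    return t_prev * (1 + alpha * (added_volume / capacity) ** beta)

routes = {1: 6.0, 2: 7.0, 3: 12.0}        # current travel times (min), Figure 13.5
volumes = {1: 0.0, 2: 0.0, 3: 0.0}
capacity, total_trips = 50, 200

for share in (0.30, 0.30, 0.20, 0.20):    # increments used in Example 5
    trips = share * total_trips
    best = min(routes, key=routes.get)     # all-or-nothing: load the shortest route
    volumes[best] += trips
    routes[best] = bpr_update(routes[best], trips, capacity)

print(volumes)                             # {1: 140.0, 2: 60.0, 3: 0.0}
print({r: round(t, 2) for r, t in routes.items()})
# route 1 ends near 8.9 min, route 2 near 9.2 min, route 3 stays at 12 min
```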

13.3.11 Capacity Restraint Assignment

In the algorithms presented so far, link capacity entered the model only through the link performance function, and travel time was the only factor used to assign flow to a link. In the capacity restraint model, after each iteration the total assigned volume is compared with the link capacity to see how much travel time has increased because of the added volume. The iteration stops when the volume added in step n no longer changes the travel time updated in step n-1. With the incorporation of such a constraint, the cost or performance function differs from the cost functions discussed for the previous algorithms (Mathew & Rao, 2006). Figure 13.6 visualizes the relationship between flow and travel time with a capacity constraint.

This figure shows the relationship between travel time and traffic flow, with the link capacity marked.

Figure 13.6 Link Flow and Travel Time Relationship

Based on this capacity constraint specific to each link, α and β can be readjusted for different link types, such as highways, freeways, and other roads.

13.3.12 Stochastic User Equilibrium Traffic Assignment

Stochastic user equilibrium traffic assignment is a more sophisticated and realistic model in which uncertainty about which link should be used is introduced through a utility function. This model performs a discrete choice analysis using a logit model. Under the first Wardrop principle, all drivers are assumed to perceive the cost of traveling on each link identically and to choose the route with the minimum cost. In stochastic UE, however, different individuals are allowed to perceive costs differently, and thus they may also choose non-minimum-cost routes (Mathew & Rao, 2006). In this model, flow is assigned to almost all links from the beginning (unlike the previous models), which is closer to reality. The probability of using each path is calculated with the logit formula shown in equation (7):

P_i=\frac{e^{U_i}}{\sum_{j=1}^{k}e^{U_j}} \qquad (7)

P i is the probability of using path i

U i is the utility function for path i

In the following, an example of a simple network is presented.

13.3.13 Example 6

There is a flow of 200 trips between two points connected by three possible paths, each of which has the travel time specified in Figure 13.7.

The hypothetical network has three possible paths between the two zones, with travel times of 21 minutes, 23 minutes, and 26 minutes.

Figure 13.7 A Simple Two-Zone Network with Three Links

Using the logit formula for these paths, with the utility of each path taken as the negative of its travel time, we have:

P_1=\frac{e^{-21}}{e^{-21}+e^{-23}+e^{-26}}=0.875

Based on the calculated probabilities, the distribution of the traffic flow would be:

Q_1 = 175 trips

Q_2 = 24 trips

Q_3 = 1 trip
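The logit split in this example can be reproduced in a few lines (a sketch; following the example, each path's utility is taken as the negative of its travel time):

```python
import math

times = [21, 23, 26]                       # path travel times (min), Figure 13.7
demand = 200

weights = [math.exp(-t) for t in times]    # utility u_i = -t_i
probs = [w / sum(weights) for w in weights]
flows = [p * demand for p in probs]

print([round(p, 3) for p in probs])        # [0.876, 0.118, 0.006]
print([round(f) for f in flows])           # [175, 24, 1]
```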

13.3.14 Dynamic Traffic Assignment

Recall the first Wardrop principle, in which travelers are assumed to choose routes with minimum cost. Dynamic traffic assignment is based on the same rule, but it also captures delays resulting from congestion as they evolve over time. In this way, travelers’ route choices affect the network’s level of service, and the network’s level of service in turn affects travelers’ choices. However, it is not theoretically proven that an equilibrium results under such conditions (Mathew & Rao, 2006).

Today, various algorithms have been developed to solve traffic assignment problems. In any urban transportation system, travelers’ route choices and the levels of service of different links form a dynamic feedback loop and affect each other simultaneously. However, many of these dynamics are not captured in the models presented here. In real-world cases, a network can contain thousands of nodes and links, and therefore more sensitivity to dynamic changes is required for realistic traffic assignment (Meyer, 2016). Also, the travel demand model applies a linear sequence of the four steps, which is likewise unlike reality. In fact, travelers may have limited knowledge of all possible paths, modes, and opportunities and may not make fully rational decisions.

Route choice is the process of choosing a certain path for a trip from a very large choice set.

Regional Transportation Plan is a long-term planning document for a region’s transportation, usually updated every five years.

Vehicle miles traveled (VMT) is the aggregate number of miles driven in an area in a particular period of time.

Total vehicle travel time (VTT) is the aggregate amount of time spent in transportation, usually expressed in minutes.

Link performance function is a function used for estimating travel time, travel cost, and speed on the network based on the relationship between speed and traffic flow.

Hyperbolic function is a function used in linear differential equations and for calculating distances and angles in hyperbolic geometry.

Free-flow road is a situation in which vehicles can travel at the maximum allowed speed.

  • All-or-nothing is an assignment model in which we assume that the impedance of a road or path between each origin and destination is constant and equal to the free-flow level of service, meaning that travel time is not affected by the traffic flow on the path.

Capacity-restrained is a model that takes into account the capacity of a road compared to its volume and updates travel times accordingly.

User equilibrium is a traffic assignment model in which we assume that travelers always choose the shortest path, and equilibrium is reached when no traveler can decrease their travel impedance by changing paths.

System optimum assignment is an assignment model based on the principle that drivers choose paths cooperatively so as to minimize total system travel time.

Static user-equilibrium assignment algorithm is an iterative traffic assignment process which assumes that travelers choose the travel path with minimum travel time, subject to constraints.

  • Iterative feedback loop is a model that iterates between the trip distribution and route choice steps, based on the rationale that if a path gets too congested, the traveler may alter the travel destination.
  • First principle of Wardrop is the assumption that for each origin-destination (OD) pair, with UE, the travel time on all used paths is equal and less than or equal to the travel time that would be experienced by a single vehicle on any unused path.
  • System optimum (SO) is a condition in a trip assignment model where total travel time for the whole area is at a minimum.
  • Stochastic time-dependent OD is a modeling framework in which the generation and distribution of trips are randomly assigned across the area.
  • Incremental increase is an AON-based model with multiple steps, in each of which a fraction of the total traffic volume is assigned, and travel time is calculated based on the allocated traffic volume.
  • Stochastic user equilibrium traffic assignment employs a probability distribution function that accounts for uncertainty when drivers compare alternative routes and make decisions.
  • Dynamic traffic assignment is a model based on Wardrop’s first principle in which delays resulting from congestion are incorporated in the algorithm.

Key Takeaways

In this chapter, we covered:

  • Traffic assignment is the last step of FSM, and the link cost function is a fundamental concept for traffic assignment.
  • Different static and dynamic assignments and how to perform them using a simplistic transportation network.
  • Incorporating stochastic decision-making about route choice and how to solve assignment problems with regard to this feature.

Correa, J. R., & Stier-Moses, N. E. (2010). Wardrop equilibria. In J. J. Cochran (Ed.), Wiley encyclopedia of operations research and management science (pp. 1–12). Hoboken, NJ: John Wiley & Sons. http://dii.uchile.cl/~jcorrea/papers/Chapters/CS2010.pdf

Hui, C. (2014). Application study of all-or-nothing assignment method for determination of logistic transport route in urban planning. Computer Modelling & New Technologies , 18 , 932–937. http://www.cmnt.lv/upload-files/ns_25crt_170vr.pdf

Jeihani Koohbanani, M. (2004).  Enhancements to transportation analysis and simulation systems (Unpublished Doctoral dissertation, Virginia Tech). https://vtechworks.lib.vt.edu/bitstream/handle/10919/30092/dissertation-final.pdf?sequence=1&isAllowed=y

Levinson, D., Liu, H., Garrison, W., Hickman, M., Danczyk, A., Corbett, M., & Dixon, K. (2014). Fundamentals of transportation . Wikimedia. https://upload.wikimedia.org/wikipedia/commons/7/79/Fundamentals_of_Transportation.pdf

Mathew, T. V., & Rao, K. K. (2006). Introduction to transportation engineering. Civil engineering–Transportation engineering. IIT Bombay, NPTEL Online. http://www.cdeep.iitb.ac.in/Nptel/Civil%20Engineering

Meyer, M. D. (2016). Transportation planning handbook . John Wiley & Sons.

Qasim, G. (2015). Travel demand modeling: Al-Amarah city as a case study [Unpublished doctoral dissertation, Engineering College, University of Baghdad].

Rojo, M. (2020). Evaluation of traffic assignment models through simulation. Sustainability , 12 (14), 5536. https://doi.org/10.3390/su12145536

Sheffi, Y. (1985). Urban transportation networks: Equilibrium analysis with mathematical programming method . Prentice-Hall. http://web.mit.edu/sheffi/www/selectedMedia/sheffi_urban_trans_networks.pdf

US Bureau of Public Roads. (1964). Traffic assignment manual for application with a large, high speed computer. U.S. Department of Commerce, Bureau of Public Roads, Office of Planning, Urban Planning Division. https://books.google.com/books/about/Traffic_Assignment_Manual_for_Applicatio.html?id=gkNZAAAAMAAJ

Wang, X., & Hofe, R. (2008). Research methods in urban and regional planning . Springer Science & Business Media.

Polynomial is a function that involves non-negative integer powers of a variable.

Exponential function is a function that uses the variable value as the power of the constant e.

Turning point is a point on a curve where the derivative of the function is zero, i.e., a maximum or minimum.

All-or-nothing is an assignment model where we assume that the impedance of a road or path between each origin and destination is constant and equal to the free-flow level of service.

Incremental model is a model in which predictions or estimates are fed into the model incrementally to account for changes that may occur during each increment.

Feedback loop model is a type of dynamic traffic assignment model in which an iteration between the trip distribution and traffic assignment steps is performed, based on the assumption that if a particular route gets heavily congested, the traveler may change destination (for example, to another shopping center).

Transportation Land-Use Modeling & Policy Copyright © by Mavs Open Press. All Rights Reserved.


Introduction to Assignment Methods in Tracking Systems

In a multiple target tracking (MTT) system, one or more sensors generate multiple detections from multiple targets in a scan. To track these targets, one essential step is to assign these detections correctly to the targets or tracks maintained in the tracker so that these detections can be used to update these tracks. If the number of targets or detections is large, or there are conflicts between different assignment hypotheses, assigning detections is challenging.

Depending on the dimension of the assignment, assignment problems can be categorized into:

2-D assignment problem – assigns n targets to m observations. For example, assign 5 tracks to 6 detections generated from one sensor at one time step.

S-D assignment problem – assigns n targets to a set ( m 1 , m 2 , m 3 , …) of observations. For example, assign 5 tracks to 6 detections from one sensor, and 4 detections from another sensor at the same time. This example is a typical 3-D assignment problem.

To illustrate the basic idea of an assignment problem, consider a simple 2-D assignment example. A company tries to assign 3 jobs to 3 workers. Because of the workers’ different experience levels, not all workers can complete each job with the same effectiveness. The cost (in hours) for each worker to finish each job is given by the cost matrix shown in the table. An assignment rule is that each worker can take only one job, and one job can be taken by only one worker. To guarantee efficiency, the objective of this assignment is to minimize the total cost.

Since the numbers of workers and jobs are both small in this example, all the possible assignments can be obtained by enumeration, and the minimal cost solution is highlighted in the table with assignment pairs (1, 3), (2, 2) and (3, 1). In practice, as the size of the assignment becomes larger, the optimal solution is difficult to obtain for 2-D assignment. For an S-D assignment problem, the optimal solution may not be obtainable in practice.
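To illustrate the enumeration idea, the sketch below evaluates every one-to-one pairing of workers and jobs for a small cost matrix; the matrix values are hypothetical, since the article's table is not reproduced here.

```python
from itertools import permutations

# Hypothetical cost matrix: cost[w][j] = hours for worker w to finish job j
cost = [
    [10, 5, 8],
    [7, 9, 12],
    [6, 11, 4],
]

# Each permutation p assigns job j to worker p[j]; pick the cheapest pairing
best = min(permutations(range(3)),
           key=lambda p: sum(cost[w][j] for j, w in enumerate(p)))
pairs = [(w + 1, j + 1) for j, w in enumerate(best)]
total = sum(cost[w][j] for j, w in enumerate(best))
print(pairs, total)   # [(2, 1), (1, 2), (3, 3)] with a total cost of 16 hours
```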

2-D Assignment in Multiple Target Tracking

In the 2-D MTT assignment problem, a tracker tries to assign multiple tracks to multiple detections. Other than the dimensionality challenge mentioned above, a few other factors can significantly change the complexity of the assignment:

Target or detection distribution — If targets are sparsely distributed, associating a target with its corresponding detection is relatively easy. However, if targets or detections are densely distributed, assignments become ambiguous, because assigning a target to one detection or to another nearby detection makes little difference in the cost.

Probability of detection (P_d) of the sensor — P_d describes the probability that a target is detected by the sensor if the target is within the sensor’s field of view. If the P_d of a sensor is small, then a true target may not give rise to any detection during a sensor scan. As a result, the track representing that target may steal detections from other tracks.

Sensor resolution — Sensor resolution determines the sensor’s ability to distinguish the detections from two targets. If the sensor resolution is low, then two targets in proximity may only give rise to one detection. This violates the common assumption that each detection can only be assigned to one track and results in unresolvable assignment conflicts between tracks.

Clutter or false alarm rate of the sensor — False alarms introduce additional possible assignments and therefore increase the complexity of data assignment.

The complexity of the assignment task can determine which assignment methods to apply. In the Sensor Fusion and Tracking Toolbox™, three 2-D assignment approaches are employed, corresponding to three different trackers:

trackerGNN — adopts a global nearest data assignment approach

trackerJPDA — adopts a joint probability data association approach

trackerTOMHT — adopts a tracker-oriented multiple hypothesis tracking approach

Note that each tracker processes the data from sensors sequentially, meaning that each tracker only deals with the assignment problem with the detections of one sensor at a time. Even with this treatment, there may still be too many assignment pairs. To reduce the number of track and detection pairs considered for assignment, the gating technique is frequently used.

Gating is a screening mechanism used to determine which observations are valid candidates to update existing tracks and to eliminate unlikely detection-to-track pairs using the distribution information of the predicted tracks. To establish the validation gate for a track at the current scan, the track estimate for the current step is predicted from the previous step.

For example, a tracker confirms a track at time t k and receives several detections at time t k +1 . To form a validation gate at time t k +1 , the tracker first needs to obtain the predicted measurement as:

\hat{y}_{k+1}=h(\hat{x}_{k+1|k})

The detection residual relative to this predicted measurement is:

\tilde{y}=y_{k+1}-\hat{y}_{k+1}

where y k +1 is the actual measurement. To establish the boundary of the gate, the detection residual covariance S is used to form an ellipsoidal validation gate. The ellipsoidal gate that establishes a spatial ellipsoidal region in the measurement space is defined in Mahalanobis distance as:

d^2(y_{k+1})=\tilde{y}^{T}S^{-1}\tilde{y}\leq G

where G is the gating threshold which you can specify based on the assignment requirement. Increasing the threshold can incorporate more detections into the gate.

After the assignment gate is established for each track, the gating status of each detection y_i (i = 1, …, n) can be determined by comparing its Mahalanobis distance d^2(y_i) with the gating threshold G. If d^2(y_i) < G, then detection y_i is inside the gate of the track and will be considered for association. Otherwise, the possibility of associating the detection with the track is removed. In Figure 1, T_1 represents a predicted track estimate, and O_1–O_6 are six detections. Based on the gating result, O_1, O_2, and O_3 are within the validation gate in the figure.

Figure 1: Detections and Validation Gate
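A minimal Python/NumPy sketch of the ellipsoidal gating test described above, assuming a 2-D measurement space; the residual covariance and threshold values are made up for illustration and are not taken from the article.

```python
import numpy as np

def in_gate(detection, predicted_meas, S, gate_threshold):
    """Return True if the detection lies inside the ellipsoidal validation gate."""
    residual = detection - predicted_meas                  # y_tilde = y - y_hat
    d2 = residual @ np.linalg.inv(S) @ residual            # squared Mahalanobis distance
    return d2 <= gate_threshold

predicted = np.array([10.0, 5.0])                          # predicted measurement y_hat
S = np.array([[2.0, 0.3],                                  # residual covariance (assumed)
              [0.3, 1.5]])
G = 9.21                                                   # ~99% chi-square gate, 2 degrees of freedom

detections = [np.array([11.0, 5.5]), np.array([20.0, 9.0])]
print([in_gate(d, predicted, S, G) for d in detections])   # [True, False]
```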

Global Nearest Neighbor (GNN) Method

The GNN method is a single hypothesis assignment method. For each new data set, the goal is to assign the global nearest observations to existing tracks and to create new track hypotheses for unassigned detections.

The GNN assignment problem can be easily solved if there are no conflicts of association between tracks. The tracker only needs to assign a track to its nearest neighbor. However, conflict situations (see Figure 2) occur when there is more than one observation within a track’s validation gate or an observation is in the gates of more than one track. To resolve these conflicts, the tracker must evaluate a cost matrix.

Figure 2: GNN with Association Conflicts

The elements of a cost matrix for the GNN method include the distance from tracks to detections and other factors you might want to consider. For example, one approach is to define a generalized statistical distance between observation j and track i as:

C_{ij}=d_{ij}+\ln(|S_{ij}|)

where d ij is the Mahalanobis distance and ln(| S ij |), the logarithm of the determinant of the residual covariance matrix, is used to penalize tracks with greater prediction uncertainty.

For the assignment problem given in Figure 2, the following table shows a hypothetical cost matrix. The nonallowed assignments, which failed the gating test, are denoted by X. (In practice, the costs of nonallowed assignments can be denoted by large values, such as 1000.)

For this problem, the highlighted optimal solution can be found by enumeration. Detection O 3 is unassigned, so the tracker will use it to create a new tentative track. For more complicated GNN assignment problems, more accurate formulations and more efficient algorithms to obtain the optimal or suboptimal solution are required.

A general 2-D assignment problem can be formulated as follows. Given the cost matrix elements C_{ij}, find an assignment Z = {z_{ij}} that minimizes

J=\sum_{i=0}^{n}\sum_{j=0}^{m}C_{ij}z_{ij}

subject to two constraints:

\sum_{i=0}^{n}z_{ij}=1,\ \forall j \qquad \sum_{j=0}^{m}z_{ij}=1,\ \forall i

If track i is assigned to observation j , then z ij = 1. Otherwise, z ij = 0. z i 0 = 1 represents the hypothesis that track i is not assigned to any detection. Similarly, z 0 j = 1 represents the hypothesis that observation j is not assigned to any track. The first constraint means each detection can be assigned to no more than one track. The second constraint means each track can be assigned to no more than one detection.

Sensor Fusion and Tracking Toolbox provides multiple functions to solve 2-D GNN assignment problems:

assignmunkres – Uses the Munkres algorithm, which guarantees an optimal solution but may require more calculation operations.

assignauction – Uses the auction algorithm, which requires fewer operations but can possibly converge on an optimal or suboptimal solution.

assignjv – Uses the Jonker-Volgenant algorithm, which also converges on an optimal or suboptimal solution but usually with faster convergence.

In trackerGNN , you can select the assignment algorithm by specifying the Assignment property.
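Outside MATLAB, the same kind of 2-D GNN cost minimization can be sketched with SciPy's Hungarian-algorithm solver; the cost values below are assumed for illustration, gated-out pairs are given a large penalty cost, and this is not one of the toolbox functions named above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1000.0   # stand-in cost for pairs that failed the gating test

# Rows are tracks, columns are detections (illustrative values)
cost = np.array([
    [4.1,  7.3,  BIG],
    [BIG,  3.2,  5.8],
    [6.0,  BIG,  2.7],
])

rows, cols = linear_sum_assignment(cost)        # minimizes the total assignment cost
for t, d in zip(rows, cols):
    print(f"track {t + 1} -> detection {d + 1}, cost {cost[t, d]}")
# track 1 -> detection 1, track 2 -> detection 2, track 3 -> detection 3
```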

K Best Solutions to the 2-D Assignment Problem

Because of the uncertain nature of assignment problems, obtaining only one solution (optimal or suboptimal) may not be sufficient. To account for multiple hypotheses about the assignment between tracks and detections, multiple suboptimal solutions are required. These suboptimal solutions are called the K best solutions to the assignment problem.

The K best solutions are usually obtained by varying a solution obtained by one of the previously mentioned assignment functions. At each subsequent step, the K best solution algorithm removes one track-to-detection pair from the original solution and finds the next best solution. For example, for this cost matrix:

\begin{bmatrix}10 & 5 & 8 & 9\\ 7 & \times & 20 & \times\\ \times & \times & 21 & 15\\ \times & 17 & \times & \times\\ \times & \times & 16 & 22\end{bmatrix}

each row represents the cost associated with a track, and each column represents the cost associated with a detection. As highlighted, the optimal solution is (7,15,16, 9) with a cost of 47. In the next step, remove the first pair (corresponding to 7), and the next best solution is (10,15, 20, 22) with a cost of 67. After that, remove the second pair (corresponding to 15), and the next best solution is (7, 5,16, 9) with a cost of 51. After a few steps, the five best solutions are:

See the Find Five Best Solutions Using Assignkbest example, which uses the assignkbest function to find the K best solutions.

Joint Probability Data Association (JPDA) Method

While the GNN method makes a rigid assignment of a detection to a track, the JPDA method applies a soft assignment so that detections within the validation gate of a track can all make weighted contributions to the track based on their probability of association.

As in Figure 1, suppose three detections (O_1, O_2, O_3) fall within the validation gate of track T_1. The weighted residual used to correct T_1 combines the individual residuals, weighted by their association probabilities p_1j:

\tilde{y}_1=\sum_{j=1}^{3}p_{1j}\tilde{y}_{1j}

In the tracker, the weighted residual is used to update track T 1 in the correction step of the tracking filter. In the filter, the probability of unassignment, p 10 , is also required to update track T 1 . For more details, see JPDA Correction Algorithm for Discrete Extended Kalman Filter .

The JPDA method requires one more step when there are conflicts between assignments in different tracks. For example, in the following figure, track T 2 conflicts with T 1 on the assignment of observation O 3 . Therefore, to calculate the association probability p 13 , the joint probability that T 2 is not assigned to O 3 (that is T 2 is assigned to O 6 or unassigned) must be accounted for.

Two Validation Gates Overlap

Track-Oriented Multiple Hypothesis Tracking (TOMHT) Method

Unlike the JPDA method, which combines all detections within the validation gate using a weighted sum, the TOMHT method generates multiple hypotheses or branches of the tracks based on the detections within the gate and propagates high-likelihood branches between scan steps. After propagation, these hypotheses can be tested and pruned based on the new set of detections.

For example, for the gating scenario shown in Figure 1, a TOMHT tracker considers the following four hypotheses:

Assign no detection to T 1 resulting in hypothesis T 10

Assign O 1 to T 1 resulting in hypothesis T 11

Assign O 2 to T 1 resulting in hypothesis T 12

Assign O 3 to T 1 resulting in hypothesis T 13

Given the assignment threshold, the tracker calculates the probability of each hypothesis and discards hypotheses with probability lower than the threshold. Hypothetically, if only p_10 and p_11 are larger than the threshold, then only T_10 and T_11 are propagated to the next step for the detection update.

S-D Assignment in Multiple Target Tracking

In an S-D assignment problem, the dimension of assignment S is larger than 2. Note that all three trackers (trackerGNN, trackerJPDA, and trackerTOMHT) process detections from each sensor sequentially, which results in a 2-D assignment problem. However, some applications require a tracker that processes simultaneous observations from multiple sensor scans all at once, which requires solving an S-D assignment problem. Meanwhile, S-D assignment is widely used in tracking applications such as static data fusion, which preprocesses the detection data before feeding it to a tracker.

An S-D assignment problem for static data fusion has S scans of a surveillance region from multiple sensors at the same time, and each scan consists of multiple detections. The detection sources can be real targets or false alarms. The objective is to detect an unknown number of targets and estimate their states. For example, as shown in Figure 4, three sensor scans produce six detections. The detections in the same color belong to the same scan. Since each scan generates two detections, there are probably two targets in the region of surveillance. To choose between different assignment or association possibilities, evaluate the cost matrix.

Figure 4: Region of Surveillance

The calculation of the cost can depend on many factors, such as the distance between detections and the covariance distribution of each detection. To illustrate the basic concept, the assignment costs for a few hypotheses are hypothetically given in the table [1] .

In the table, 0 denotes that a track is associated with no detection in that scan. Assume the hypotheses not shown in the table are truncated by gating or neglected because of high costs. To represent each track concisely, use c_ijk to denote the cost of associating observation i in scan 1, j in scan 2, and k in scan 3. For example, for assignment hypothesis 1, c_011 = -10.2. Several track hypotheses in the table conflict with each other. For instance, the two most likely assignments, c_111 and c_121, are incompatible because they share the same observations in scans 1 and 3.

The goal of solving an S-D assignment problem is to find the most likely compatible assignment hypothesis accounting for all the detections. When S ≥ 3, however, the problem is known to scale with the number of tracks and detections at an exponential rate (NP-hard). The Lagrangian relaxation method is commonly used to obtain the optimal or sub-optimal solution for an S-D assignment problem efficiently.

Brief Introduction to the Lagrangian Relaxation Method for 3-D Assignment

Suppose three scans of data have M_1, M_2, and M_3 observations, respectively. Denote an observation from scans 1, 2, and 3 as i, j, and k, respectively; for example, i = 1, 2, …, M_1. Use z_ijk to represent the track formation hypothesis of O_1i, O_2j, and O_3k. If the hypothesis is valid, then z_ijk = 1; otherwise, z_ijk = 0. As mentioned, c_ijk represents the cost of the z_ijk association; c_ijk is 0 for false alarms and negative for possible associations. The S-D optimization problem can be formulated as:

J(z)=\min_{z}\sum_{i=0}^{M_1}\sum_{j=0}^{M_2}\sum_{k=0}^{M_3}c_{ijk}z_{ijk}

subject to three constraints:

\sum_{i=0}^{M_1}\sum_{j=0}^{M_2}z_{ijk}=1,\ \forall k=1,2,\ldots,M_3 \qquad \sum_{i=0}^{M_1}\sum_{k=0}^{M_3}z_{ijk}=1,\ \forall j=1,2,\ldots,M_2 \qquad \sum_{j=0}^{M_2}\sum_{k=0}^{M_3}z_{ijk}=1,\ \forall i=1,2,\ldots,M_1

The optimization function chooses associations to minimize the total cost. The three constraints ensure that each detection is accounted for (either included in an assignment or treated as false alarm).

The Lagrangian relaxation method approaches this 3-D assignment problem by relaxing the first constraint using Lagrange multipliers. Define a new function L ( λ ) :

L(\lambda)=\sum_{k=0}^{M_3}\lambda_k\left[\sum_{i=0}^{M_1}\sum_{j=0}^{M_2}z_{ijk}-1\right]

where λ_k, k = 1, 2, …, M_3 are Lagrange multipliers. Subtracting L from the original objective function J(z) gives a new objective function in which the first constraint, in k, is relaxed. Therefore, the 3-D assignment problem reduces to a 2-D assignment problem, which can be solved by any of the 2-D assignment methods. For more details, see [1].

The Lagrangian relaxation method allows the first constraint to be mildly violated, and therefore can only guarantee a suboptimal solution. For most applications, however, this is sufficient. To specify the solution accuracy, the method uses the solution gap, which defines the difference between the current solution and the potentially optimistic solution. The gap is nonnegative, and a smaller solution gap corresponds to a solution closer to the optimal solution.

Sensor Fusion and Tracking Toolbox provides assignsd to solve for S-D assignment using the Lagrangian relaxation method. Similar to the K best 2-D assignment solver assignkbest , the toolbox also provides a K best S-D assignment solver, assignkbestsd , which is used to provide multiple suboptimal solutions for an S-D assignment problem.

See Tracking Using Distributed Synchronous Passive Sensors for the application of S-D assignment in static detection fusion.

assignTOMHT | assignauction | assignjv | assignkbest | assignkbestsd | assignmunkres | assignsd | trackerGNN | trackerJPDA | trackerTOMHT

[1] Blackman, S., and R. Popoli. Design and Analysis of Modern Tracking Systems. Artech House Radar Library, Boston, 1999.

[2] Musicki, D., and R. Evans. "Joint Integrated Probabilistic Data Association: JIPDA." IEEE Transactions on Aerospace and Electronic Systems. Vol. 40, Number 3, 2004, pp 1093 –1099.




Components of Load Distributing Algorithms

Introduction: The goal of distributed scheduling is to distribute a system’s load across the available resources in a way that optimizes overall system performance while maximizing resource utilization. The primary idea is to shift workloads from heavily loaded machines to idle or lightly loaded ones. Good resource allocation schemes are required to fully utilize the computing capacity of a distributed system. A distributed scheduler is the resource management component of a distributed operating system that focuses on dispersing the system’s load among its computers in a fair and transparent manner, with the goal of maximizing overall system performance. A locally distributed system consists of a group of independent computers connected by a local area network. Users submit tasks for processing at their host computers. Because tasks arrive unpredictably and their CPU service times are random, load distribution is essential in such an environment.

The length of resource queues, particularly the length of CPU queues, is a useful indicator of demand, since it correlates closely with task response time. It is also fairly simple to measure. However, there is a risk in oversimplifying scheduling decisions.

For example, a number of remote nodes could notice at the same time that a particular site has a short CPU queue and could start a large number of process transfers. As a result, that site may become overburdened with processes, and its first reaction may be to try to relocate them elsewhere. Because migration is an expensive procedure, we do not want to waste resources (CPU time and bandwidth) on poor decisions that only increase migration activity. Proper load distributing algorithms are therefore needed.

Load distribution algorithms can be static, dynamic, or adaptive. Static means that decisions on assigning processes to processors are hardwired into the algorithm based on a priori knowledge, such as that gleaned from an analysis of the application’s graph model. Dynamic algorithms use system state information to make scheduling decisions, allowing them to exploit underutilized system resources at runtime, at the cost of gathering system state data. Adaptive algorithms go further and alter their own parameters to match system loading conditions; for example, when system load or communication is high, they may reduce the amount of information required for scheduling decisions.

Components of Load Distributing Algorithm: A load distributing algorithm has four components –

  • Transfer Policy – Determines whether or not a node is in a suitable state to take part in a task transfer.
  • Process Selection Policy – Determines which task should be transferred.
  • Site Location Policy – Determines the node to which a task should be transferred once it has been selected for transfer.
  • Information Policy – In charge of initiating the collection of system state information.

A transfer policy needs information about the local node’s state to make its decisions, while a location policy needs information about the states of remote nodes.

1. Transfer Policy – Threshold policies make up a substantial portion of transfer policies. The threshold is measured in units of load. When a new task originates at a node and the node’s load exceeds a threshold T, the transfer policy designates the node as a sender. If the node’s load falls below T, the transfer policy designates it as a suitable receiver for remote tasks.
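To make the threshold idea concrete, the following Python sketch classifies a node as a sender or receiver; the threshold value and the load figures are illustrative assumptions, not values from the article.

```python
# Minimal sketch of a threshold-based transfer policy.
# The threshold T and the node loads are illustrative values only.

T = 5  # load threshold, measured in units of load (e.g., CPU queue length)

def classify(node_load, threshold=T):
    """Classify a node for load distribution purposes."""
    if node_load > threshold:
        return "sender"      # overloaded: should try to transfer a task away
    if node_load < threshold:
        return "receiver"    # lightly loaded: can accept a remote task
    return "neutral"         # at the threshold: neither sends nor receives

if __name__ == "__main__":
    for load in (2, 5, 9):
        print(f"load={load} -> {classify(load)}")
```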

2. Selection Policy – Once the transfer policy decides that a node is a sender, the selection policy decides which of its tasks should be transferred. If the selection policy cannot find a suitable task, the transfer procedure is halted until the transfer policy signals that the node is a sender again.

  • The most straightforward method is to choose one of the recently generated tasks that caused the node to become a sender by pushing its load above the threshold.
  • Alternatively, a task is transferred only if its response time will improve as a result of the transfer.

Other criteria to consider when selecting a task are, first, that the overhead imposed by the transfer should be as small as possible and, second, that the number of location-dependent calls made by the selected task should be as small as possible.

3. Location Policy – The location policy’s job is to find suitable nodes with which to share load. After the transfer policy has decided that a task should be transferred, the location policy determines where it should be sent, based on data gathered by the information policy. Polling is a widely used technique for locating a suitable node: a node polls other nodes, sequentially or in parallel, to find out whether they are suitable for a transfer and/or willing to accept one. Nodes to poll can be chosen at random or more selectively based on information obtained during previous polls, and the number of nodes polled may vary.
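As an illustration of polling, the sketch below probes a bounded number of randomly chosen peers and returns the first suitable receiver; the poll limit, the threshold, and the peer loads are assumptions made for the example.

```python
import random

# Minimal sketch of a polling-based location policy (sequential random polling).
# `peer_loads` stands in for the replies a real system would obtain over the network.

POLL_LIMIT = 3   # assumed maximum number of probes per transfer attempt
THRESHOLD = 5    # a peer below this load is considered a suitable receiver

def find_receiver(peer_loads):
    """Poll up to POLL_LIMIT randomly chosen peers and return the first suitable one."""
    candidates = list(peer_loads)
    random.shuffle(candidates)
    for peer in candidates[:POLL_LIMIT]:
        if peer_loads[peer] < THRESHOLD:   # the poll reply: is this node lightly loaded?
            return peer
    return None  # no suitable node found; keep the task locally

if __name__ == "__main__":
    loads = {"node-a": 7, "node-b": 2, "node-c": 6, "node-d": 1}
    print("transfer target:", find_receiver(loads))
```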

4. Information Policy – The information policy is in charge of deciding when information about the states of the other nodes in the system should be collected. Most information policies fall into one of three categories:

  • Demand-driven – A node gathers the state of other nodes only when it wants to take part in sending or receiving tasks, using sender-initiated or receiver-initiated polling. Because their actions depend on the state of the system, demand-driven policies are inherently adaptive and dynamic. The policy can be sender-initiated (senders look for receivers to which they can transfer load), receiver-initiated (receivers solicit load from senders), or symmetrically initiated (a combination of both).
  • Periodic – Nodes exchange state information at regular intervals, so each site builds up a history of global resource utilization over time to inform its placement decisions. At high system loads the benefits of load distribution are negligible, and the periodic exchange of information may then be an unnecessary overhead.
  • State-change-driven – A node sends out state information whenever its state changes by a specific amount. This information may be forwarded to a centralized load-scheduling point or shared with peers. Unlike a demand-driven policy, the node does not collect information about other nodes, and the policy does not alter its behavior in response to the overall system state; for example, if the system is already overloaded, exchanging state information on a regular basis will only exacerbate the problem. A minimal sketch of this policy follows the list.
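Below is a minimal sketch of a state-change-driven information policy; the change threshold DELTA and the broadcast callback are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of a state-change-driven information policy.
# A node announces its load only when it has changed by at least DELTA
# since the last announcement; DELTA and the broadcast function are assumptions.

DELTA = 2

class StateChangePolicy:
    def __init__(self, broadcast):
        self.broadcast = broadcast       # callback that ships state to peers or a coordinator
        self.last_announced = None

    def report_load(self, current_load):
        """Called whenever the local load is sampled."""
        if self.last_announced is None or abs(current_load - self.last_announced) >= DELTA:
            self.broadcast(current_load)
            self.last_announced = current_load

if __name__ == "__main__":
    policy = StateChangePolicy(lambda load: print("announcing load:", load))
    for load in (1, 2, 5, 6, 9):
        policy.report_load(load)   # announces only the significant changes (1, 5, 9)
```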



A novel framework for storage assignment optimization inspired by finite element method

Seyed-Kourosh Tabatabaei

1 Industrial Engineering Group, Faculty of Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran

Omid Fatahi Valilai

2 Mathematics & Logistics Department, Jacobs University Bremen, Bremen, Bremen, Germany

Ali Abedian

3 Aerospace Engineering Department, Sharif University of Technology, Tehran, Tehran, Iran

Mohammad Khalilzadeh

Associated Data

The following information was supplied regarding data availability:

Raw data is available at Figshare:

Fatahi Valilai, Omid (2020): Data set used in case study. figshare. Dataset. https://doi.org/10.6084/m9.figshare.13299269.v1 .

Code is also available at Figshare:

Fatahi Valilai, Omid (2020): inventory optimization code. figshare. Software. https://doi.org/10.6084/m9.figshare.13299278 .

Considering the fundamental and structural changes that the production and manufacturing industries must undergo to fulfill the Industry 4.0 paradigm, proposing new ideas and frameworks for the operations management of production and manufacturing systems is inevitable. This research focuses on traditional methods proposed for the storage assignment problem and seeks new methods and definitions for Industry 4.0-based storage assignment concepts. As a first step, the paper proposes a new definition of the storage assignment and layout problem aimed at storage mechanism agility in terms of automated storage and retrieval systems (AS/RS) in modern inventories. Then, considering the shortcomings of traditional algorithms for the storage assignment problem, the paper contributes a new algorithm called SAO/FEM (storage assignment optimization technique), inspired by the mechanical engineering discipline, for the analysis and optimization of the storage assignment problem. Building on a stress distribution analogy and with the help of the Finite Element Method and the minimum total potential energy theory, the new algorithm proposes a new model for storage assignment optimization. The efficiency of the proposed algorithm in terms of calculation time and solution quality is investigated through numerical examples. The article also develops an application for the SAO/FEM algorithm as a value creation module and applies the new optimized storage positioning in warehouses.

Introduction

The shift from the era of simple digitization (Third Industrial Revolution) to innovative hybrid technologies (Fourth Industrial Revolution) has forced production and service systems to look for innovative methods to enhance their traditional working process models ( Delaram & Valilai, 2017 ; Delaram & Valilai, 2018a ; Aghamohammadzadeh, Malek & Valilai, 2019 ). Business leaders and senior executives have recognized this pressure and are striving to change their business environments by challenging the assumptions of their firms’ working teams and working procedures and by continuously making innovative changes in their processes. Manufacturing industries are now shifting from mass production to custom production ( Cao et al., 2017 ; Zhang, Wu & Ma, 2017 ). The fourth industrial revolution has proposed a paradigm that requires seamless integration among all the stakeholders in the manufacturing processes while they fulfill the requirements of dynamic behavior in process execution. This revolution emphasizes system agility, productivity, sustainability (energy), and waste avoidance ( Ferrera et al., 2017 ; Müller, Kiel & Voigt, 2018 ).

Warehouses and distribution centers play an important role in companies’ supply chains. Considering the impact of information technology in bringing out new business models, the expectations placed on warehouses have changed toward enabling more agility and dynamic behavior ( Delaram & Valilai, 2016 ; Delaram & Valilai, 2018b ; Delaram & Valilai, 2019 ). In order to optimize the design and reduce operational costs, many operating warehouse companies have invested in automated operations, and many academics have conducted a wide variety of research studies ( Houshmand & Valilai, 2013 ; Valilai & Houshmand, 2014 ). Different operations research models and software algorithms exist to solve different classes of problems, such as slot storage allocation, pick location allocation, replenishment timing, reserve and forward storage locations, and routing of an AS/RS, which have been recognized as classic models in this research ( Hausman, Schwarz & Graves, 1976 ; Roodbergen & Vis, 2009 ).

Because classical AS/RS storage is static, optimization calculations focus on cost equations and work mainly by reducing the time and distance of movements, so the transport of goods aims at the shortest paths and an optimal layout of the storage elements in the least time ( Sadiq, Landers & Don Taylor, 1996 ; Moon & Kim, 2001 ). Nowadays, as quick, time-based ordering increases, warehouses must adapt to support the rate of demand, order fulfillment, and the factor of inquiry (FOI) of real market requirements. Therefore, many changes must be made to the classical models, such as increasing the variety of goods, decreasing the dimensions of goods or increasing the number of storage space divisions, and changing the storage assignment strategy. These changes increase the number of variables and ultimately affect the cost-related equations, so it becomes difficult to solve the layout and storage problems easily and in a reasonable time ( Brezovnik et al., 2015 ).

In this regard, it is necessary to consider that dynamic storage elements are no longer like those of the classical model: they have variable spatial coordinates at any time (depending on speed of use, input and output frequency, etc.), and, because of cost effects, they must be located in proper locations in the warehouse. This requires an innovative algorithm that can deliver the right resolution and accuracy in spatial coordinates while computing the ideal location rapidly.

This research first proposes a new definition of the storage assignment and layout method considering the requirements for dynamic storage element assignment. It then creates a new algorithm, inspired by mechanical engineering fields, for the analysis and optimization of storage, to overcome the constraints of some existing algorithms. The new algorithm, called SAO/FEM (Storage Assignment Optimization technique, with the help of the Finite Element Method and total potential energy theory), proposes a new model for storage assignment optimization. The paper has developed an application for the SAO/FEM algorithm, applies the new optimized storage positioning to some warehouses, and compares it with the classical method, first to prove the feasibility and performance of the new algorithm and then to present its advantages. In other words, this algorithm acts as a value-added module to meet the requirements of the fourth generation of industry in the field of layout and warehousing. As such, this model can handle dynamic storage situations and respond to the rapid behavior resulting from high-speed production or service systems.

Literature Review

AS/RS design requirements, such as the strategic location and the choice of the right space for warehouse construction, are considered together with the rigidity of the hardware system for warehouse operations management in order to achieve the optimal warehouse model ( Rosenblatt, Roll & Vered Zyser, 1993 ). Selecting the type of warehouse and its operational variables, the warehouse configuration, and the software and hardware systems are among the most important factors shaping warehouse performance. Obviously, the optimal performance of a warehouse always depends on the strategy for handling material flow inside and outside, so that the warehouse will not fail in the future in terms of responsiveness for material storage and retrieval operations ( Gagliardi, Renaud & Ruiz, 2012 ).

A literature review of the classical models for AS/RS demonstrates that many research studies have been proposed to optimize warehouse operations through exact or numerical methods, while multivariate problems are solved by approximate, heuristic, or metaheuristic methods. Research on the classic warehouse model and its dependencies, such as the layout problem, is almost saturated, and many of the old definitions need to be redefined based on the new requirements of agile store and retrieval processes ( Rosenblatt, Roll & Vered Zyser, 1993 ; Roodbergen & Vis, 2009 ; Gagliardi, Renaud & Ruiz, 2012 ; Gagliardi, Renaud & Ruiz, 2014 ; Cao & Zhang, 2017 ; Hamzaoui et al., 2019 ; Huh et al., 2019 ; Reyes, Solano-Charris & Montoya-Torres, 2019 ). Moreover, in the area of warehouse physical design, resource configuration, and storage strategies, most research studies have focused on calculating single storage rack layouts and on storage capacity or storage position, leaving a considerable gap regarding the optimal position of the entrance and exit doors ( Rosenblatt, Roll & Vered Zyser, 1993 ). Also, in the field of warehouse operations control, most researchers have included only one or a few control mechanisms for order processing in the design of their proposed warehouse structure ( Hamzaoui et al., 2019 ). Although the structural design and the control system of a warehouse are highly interdependent, many research studies have examined them individually and independently, which illustrates the challenge of jointly determining the optimal number of goods and the location of the I/O points ( Hausman, Schwarz & Graves, 1976 ; Bessenouci, Sari & Ghomri, 2012 ; Gagliardi, Renaud & Ruiz, 2012 ; Gagliardi, Renaud & Ruiz, 2014 ; Hamzaoui & Sari, 2015 ; Guo, Yu & De Koster, 2016 ; Hamzaoui et al., 2019 ).

Storage assignment literature

The literature suggests various methods for product storage location assignment. There are five basic storage assignment policies for AS/RSs ( Roodbergen & Vis, 2009 ; Faber, De Koster & Smidts, 2013 ; Ekren et al., 2014 ). These rules are:

  • Dedicated storage assignment
  • Random storage assignment
  • Open storage assignment in the closest location
  • Full-turnover-based storage assignment
  • Class-based storage assignment

In the dedicated storage method, a fixed location is considered for each product category, and replenishments of that product are stored at this location. The main disadvantage is the space requirement and low space utilization, because the method must reserve locations even for products that are out of stock, and enough space must be reserved for the probable maximum inventory level of each product type ( Heskett, 1963 ; Heskett, 1964 ; Zhang, Wu & Ma, 2017 ).

With random storage, every empty location has an equal probability of receiving an incoming load; in practice, the product is assigned to the first free location encountered. The full-turnover storage policy uses product demand frequency to assign storage locations close to the I/O points, so the most easily accessible locations are reserved for high-frequency products, while so-called slow-moving products are located farther away from the I/O point. For this rule, the product turnover frequencies are an important assumption that must be known in advance ( Roodbergen & Vis, 2009 ).

The cube-per-order index (COI) rule, presented by Heskett ( Eynan & Rosenblatt, 1993 ; Moon & Kim, 2001 ; Roodbergen & Vis, 2009 ), implements the full-turnover storage method. The COI of a product is defined as the ratio of its required storage space to the number of requests for it in a specific period. The COI rule manages the assignment policy by placing loads with the lowest COI at the locations nearest to the I/O point. The main challenge is that demand frequencies and the product assortment usually change: when demand frequencies change dynamically or a new product is added to the system, a huge number of calculations is required to reassign products and restore the full-turnover rule.

To reduce periodic repositioning while still meeting space requirements, class-based storage can be used. The literature shows that the class-based storage method is the most consistent policy for storage assignment ( Hausman, Schwarz & Graves, 1976 ; Eynan & Rosenblatt, 1994 ). Class-based storage splits the available space in the warehouse into several sectors; the demand frequency of each item is then considered and items are assigned to sectors accordingly, while within a sector the assignment is random. The full-turnover storage policy is thus a class-based policy with only one item per class. So-called ABC storage is class-based storage with three classes ( Graves, Hausman & Schwarz, 1977 ): the A-items have the highest turnover, followed in descending order of turnover frequency by the B-items, and so on. The main merit of class-based storage is its efficiency in assigning high-frequency products close to the I/O points, while also benefiting from the flexibility of random storage and remaining applicable when storage space is limited.
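To make the class-based idea concrete, the following sketch ranks items by turnover and splits them into A, B, and C classes; the 20%/30% class boundaries and the item data are illustrative assumptions, not values from the cited studies.

```python
# Minimal sketch of ABC class-based classification.
# Items are ranked by turnover; the top share becomes class A (zone nearest the I/O point),
# the next share class B, the rest class C. The 20%/30% boundaries are illustrative only.

def abc_classify(turnover, a_share=0.2, b_share=0.3):
    """Return {item: class_label} based on descending turnover frequency."""
    ranked = sorted(turnover, key=turnover.get, reverse=True)
    n = len(ranked)
    a_cut = max(1, round(a_share * n))
    b_cut = a_cut + max(1, round(b_share * n))
    labels = {}
    for idx, item in enumerate(ranked):
        labels[item] = "A" if idx < a_cut else ("B" if idx < b_cut else "C")
    return labels

if __name__ == "__main__":
    demand = {"p1": 120, "p2": 15, "p3": 60, "p4": 5, "p5": 40}
    print(abc_classify(demand))  # {'p1': 'A', 'p3': 'B', 'p5': 'B', 'p2': 'C', 'p4': 'C'}
```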

There are three decision variables for fulfilling the class-based storage in an AS/RS:

  • Zone division (i.e., determining the number of classes),
  • Zone sizing (i.e., the number of products assigned to each zone),
  • Zone positioning (i.e., defining the location of each zone).

Several procedures can be found in the literature for zone sizing, i.e., for setting the storage boundaries of the classes mentioned above. According to Eynan and Rosenblatt’s research studies ( Guenov & Raeside, 1992 ; Eynan & Rosenblatt, 1994 ), a relatively small number of classes, fewer than 10, gains most of the potential savings in travel times compared to full-turnover storage. In real-case problems, however, the number of classes is usually limited to three.

Several strategies exist for zone positioning. Research studies ( Graves, Hausman & Schwarz, 1977 ) have proven that when single-command scheduling is considered in square-in-time frames, the optimal solution for classes A, B, and C with square-in-time boundaries is an L-shaped configuration. Other research ( Kulturel et al., 1999 ) has demonstrated by simulation that the L-shaped configuration is also optimal for dual-command scheduling in square-in-time frames. Another study ( Aldenderfer & Blashfield, 1984 ) also considered dual-command scheduling and compared three different zone shapes, concluding that the location of the I/O point affects the performance of the proposed shapes while none is superior to the others.

Another research study ( Kaylan & Medeiros, 1988 ) discussed the application of single-command scheduling with “n” classes in rectangular frames, combining “n-2” L-shaped zones in the layout, i.e., a rectangular zone for class “n” and a transient region for class “n-1”.

Performance of storage assignment rules

An analytical approach for evaluating control rules is travel-time estimation in scheduling for different command types in different AS/RS configurations ( Van Den Berg & Gademann, 2000 ). Using simulation, stochastic conditions have been addressed and more comprehensive studies have been carried out ( Goetschalckx & Ratliff, 1990 ; Jaikumar & Solomon, 1990 ; Muralidharan, Linn & Pandit, 1995 ; Malmborg, 1996 ; Malmborg, 1998 ; Kulturel et al., 1999 ). Before the simulation studies, a trade-off storage policy was developed for small systems, and on that basis rough comparisons of different policies were made for unit loads. Results from both simulation and analytical studies indicate that class-based and full-turnover-based storage assignment outperform the random storage policy. In Moon & Kim (2001) , a duration-of-stay policy was compared with a three-class-based policy originally introduced in earlier studies on shared storage policies ( Sadiq, Landers & Don Taylor, 1996 ). Under the duration-of-stay policy, the locations nearest to the I/O point are assigned to products that stay in the storage space for a short time. In cases where the product variety is small, the three-class-based policy outperforms the duration-of-stay policy.

For today’s dynamic environment ( Roodbergen & Vis, 2009 ), there is an essential need to propose storage assignment policies for AS/RS systems that provide the required performance level for dynamic storage management ( Heragu et al., 2005 ). The COI policy can be considered an acceptable assignment policy when the logistics and handling-frequency parameters of product demand are static. Moon and Kim ( Heragu, 2008 ) showed in a simulation study that when the production quantity of each stored item fluctuates over time, items need to be relocated. Sadiq ( Roodbergen & Vis, 2006 ) proposed a dynamic storage assignment policy for reassigning products to storage locations for short-lifecycle products and dynamically changing product mixtures. That research applied a prediction model for future product mixtures using correlated product demand as well as product demand, and then focused on a dynamic storage assignment policy aimed at decreasing order processing times (relocation times and order picking times). The results showed that the proposed dynamic policy overcame the shortcomings of the static COI policy under dynamic conditions.

In the related research studies, different storage assignment policies have been compared using simulation and analytical methods. Most studies are limited to one-way passage AS/RSs with only one I/O point, and there is a considerable gap regarding storage assignment policies for configurations with multiple I/O points or AS/RSs equipped with multiple shuttles. As the number of variables such as I/O points or classes in the warehouse increases, the complexity of forming the optimal cost equation and solving for its parameters increases drastically ( Barkanov, 2001 ). Moreover, the demands on storage assignment are changing: rather than the level of automation, cost effectiveness, and maximum throughput, the characteristics that matter most today are flexibility, configurability, and high availability. Owing to virtual development, the time needed to introduce a new model to the market has shortened rapidly and the diversity of products is increasing dramatically; product life cycles are shortening, planning horizons are shorter, and smaller amortization periods result in smaller investments. Customer demand for more individualized products leads to mass customization, which in turn leads to smaller quantities of each product in almost every sector of the consumer market. To cope with these fast-changing demands, classical models and the related algorithms are no longer effective and competitive, so a new model has to be developed that is dynamic, flexible, and alterable.

Storage assignment models

As previously stated, there are generally five major policies for storage assignment in an AS/RS. The most comprehensive and practical policies currently used in warehouses are dedicated storage assignment, full-turnover-based storage assignment, class-based storage assignment, or an integrated optimal use of these policies.

In the first case, any commodity type always belongs to a particular location, but the main drawback is the need for a large storage space; little space is left for storage equipment, which is generally not mechanized. However, the storage of large or heavy goods is the most important advantage of this case.

In the second case, placement is based on the frequency of demand for goods, so the goods that are needed most are placed close to the entry/exit doors, which provides better, simpler, and faster access to them. In this method, as explained before, the COI rule is applied to each item: loads with a lower COI are placed at the locations nearest to the entry or exit doors. The drawback is that any change in demand frequency or product definition (a new load) causes a large volume of displacements and possibly layout changes in the warehouse.

In the third case, the space available in the warehouse is divided into several parts and the loads are classified into these spaces based on demand frequency (each space is allocated only to a specific class), while the location of each load within its class can be assigned randomly. To better manage space and time, mainly three classes of goods are considered (ABC storage), ranging from class A (high consumption) to class C (low consumption). The advantages of this case are warehouse flexibility, thanks to the random storage of goods within each class, and optimal use of warehouse space. The cost model developed by incorporating the benefits of these policies, which is currently used in the literature and in layout calculations, minimizes the cost of moving items to the entry and exit points. The cost per pair of goods/storage space is calculated as the product of three factors ( Ventsel & Krauthammer, 2001 ):

  • Frequency of trips to each I/O point.
  • The distance traversed to each I/O point.
  • Cost per unit distance (where the cost per unit distance differs for different combinations of entry points and exit points and for different products).

Dedicated storage policy model

In this model (the first model), a warehouse has “p” I/O points through which “m” items enter and leave the warehouse. The items are stored in one of “n” storage spaces or locations. Each location provides the same amount of storage space, and item i is known to require s i storage spaces, so ideally the total space requirement of all items should not exceed the n available locations.

There are f ik trips of item i through I/O point k , the cost of moving a unit load of item i a unit distance through I/O point k is c ik , and the distance of storage space j from I/O point k is d kj . Given a binary decision variable x ij that specifies whether or not item i is assigned to storage space j , the model that assigns the items to storage spaces so as to minimize the cost of moving the items in and out through the I/O points is:

Subject to:

Substituting $w_{ij} = \sum_{k=1}^{p} c_{ik} f_{ik} d_{kj} / s_i$,

The objective function will be
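The objective function and constraint equations above were lost in extraction. As a hedged reconstruction, the following standard dedicated-storage formulation is consistent with the definitions of $f_{ik}$, $c_{ik}$, $d_{kj}$, $s_i$, and $x_{ij}$ given in the text and with classical treatments in the storage assignment literature; the equation numbers used in the original article are not reproduced here.

```latex
% Hedged reconstruction of the dedicated storage assignment model (not copied from the article).
% Capacity assumption: the total space required by all items fits the n locations.
\[
  \sum_{i=1}^{m} s_i \le n .
\]
% Objective: minimize the cost of moving all items in and out through the I/O points.
\[
  \min_{x}\; \sum_{i=1}^{m}\sum_{j=1}^{n}
     \Bigl(\sum_{k=1}^{p}\frac{c_{ik}\, f_{ik}\, d_{kj}}{s_i}\Bigr)\, x_{ij}
\]
% Constraints: each item receives its s_i spaces; each space holds at most one item.
\[
  \text{s.t.}\quad
  \sum_{j=1}^{n} x_{ij} = s_i \;\; (i=1,\dots,m), \qquad
  \sum_{i=1}^{m} x_{ij} \le 1 \;\; (j=1,\dots,n), \qquad
  x_{ij} \in \{0,1\}.
\]
% With the substitution of w_ij from the text, the objective reduces to:
\[
  \min_{x}\; \sum_{i=1}^{m}\sum_{j=1}^{n} w_{ij}\, x_{ij},
  \qquad w_{ij} = \sum_{k=1}^{p} \frac{c_{ik}\, f_{ik}\, d_{kj}}{s_i}.
\]
```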

COI policy model under certain conditions ( Zhang & Kratzig, 1995 )

In this model (the second model), a special case of the dedicated storage policy design model is considered: all items use the I/O points in the same proportion, and the cost of moving a unit load of item i is independent of the I/O point. P k is defined as the percentage of trips through I/O point “k”, where k  = 1, 2, …,  p (the same for any item, because all items use the I/O points in the same proportion). Because of the additional assumptions included in this model, the first subscript in f ik and C ik is no longer needed, so f ik and C ik can be replaced by f i and C i respectively, and the model may be formulated as follows:

Subject to constraints Eqs. (3) – (5) .

Substituting $w_j = \sum_{k=1}^{p} P_k\, d_{kj}$,

the objective function can then be written as:

The model given by expression Eq. (8) and constraints Eqs. (3) – (5) is easier to solve than the first model given by expression Eq. (6) and does not require a transportation algorithm. It involves arranging the “cost” term $T_i / s_i$ (with $T_i = c_i f_i$) for each item i ( i  = 1, 2, …, m) in non-increasing order and the “distance” term $w_j$ for each storage space j ( j  = 1, 2, …,  n ) in non-decreasing order, and then matching the item corresponding to the first element in the ordered “cost” list with the storage spaces corresponding to the first $s_i$ elements in the ordered “distance” list, the second item in the “cost” list with the storage spaces corresponding to the next $s_i$ elements in the “distance” list, and so on, until all items are assigned to storage spaces. This is exactly what the COI policy does, except that it calculates the inverse of the “cost” term, the COI, that is, the storage space requirement divided by the handling cost (or frequency) of item i , and orders the elements in non-decreasing order of their COI values, thereby producing the same result as the preceding algorithm.
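The ordering-and-matching procedure just described can be sketched in a few lines of Python; the item and space data below are illustrative, not taken from the paper’s case study.

```python
# Minimal sketch of the COI-style assignment for the second model:
# sort storage spaces by their distance index w_j (non-decreasing) and items by
# the cost term T_i / s_i (non-increasing), then match greedily.
# All numbers below are illustrative only.

def coi_assign(items, spaces):
    """
    items:  {name: (T_i, s_i)} where T_i = c_i * f_i and s_i = spaces required
    spaces: {space_id: w_j}    distance index of each storage space
    returns {space_id: item_name}
    """
    ordered_spaces = sorted(spaces, key=spaces.get)                        # nearest first
    ordered_items = sorted(items, key=lambda i: items[i][0] / items[i][1],
                           reverse=True)                                   # costliest first
    assignment, cursor = {}, 0
    for item in ordered_items:
        needed = items[item][1]
        for space in ordered_spaces[cursor:cursor + needed]:
            assignment[space] = item
        cursor += needed
    return assignment

if __name__ == "__main__":
    items = {"A": (90, 2), "B": (40, 1), "C": (30, 3)}        # (T_i, s_i)
    spaces = {1: 1.0, 2: 1.2, 3: 1.5, 4: 2.0, 5: 2.4, 6: 3.1}  # w_j per space
    print(coi_assign(items, spaces))  # A fills the two nearest spaces, then B, then C
```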

As defined in the algorithm ( Fig. 1 ), the objective function has two parts, and the storage optimization of the layout is performed in two stages. The first stage involves partitioning the storage space: the distance index ( W j ) is calculated over the storage area and the contribution of each unit in the storage division becomes clear; the best location is the one with the lowest index ( W j ). Evidently, the index value changes with the location of each block in the warehouse, the number of divisions at the warehouse level, the number of doors, the distance of the doors from the divisions, and the trip percentage of each good ( Persson & Strang, 2004 ).

[Figure 1]

The second stage is the calculation of the ( Ti/Si ) index, called the cost index: any good with a higher cost per unit has a higher priority for storage at the nearest location. The computation must be done so that the total cost of moving items is minimized and items with a higher consumption factor are placed at the optimal storage positions.

Storage Assignment Example

The following problem illustrates the second model:

A warehouse with an area of 16,000 square units is available. It has two I/O points on one side, handling 30% and 70% of the input and output movements respectively, and the area is divided into 40 equal divisions of 20 × 20 = 400 square units each ( Fig. 2 ).

[Figure 2]

Three products, A, B, and C, are to be stored based on the specifications in Table 1:

After calculating the first step of the classical storage algorithm and applying the distance index to the warehouse area, the results are presented in Fig. 3 .

[Figure 3]

According to Fig. 3 , the calculated spectrum in Fig. 4 shows that the preferred assignment locations are near the I/O points, and the farther a location is from the I/O points, the lower its storage assignment priority. The algorithm is then executed based on the computations of Table 1 , and the outputs are placed in priority order on the partitions in a non-descending trend across the warehouse ( Figs. 4 – 6 ).

[Figure 4]

Methodology

Finite element method (FEM)

The finite element method is a numerical method for solving the differential equations governing a real problem presented in the form of a mathematical model. To obtain accurate strain and stress results for a structure, the equilibrium equations must be established, and in addition the boundary conditions of the problem have to be satisfied ( NguyenVan, Mai-Duy & Tran-Cong , 2007 ).

[Figure 5]

Solving the equilibrium equations together with the boundary conditions is not hard for simple problems. However, for a complex problem (in terms of geometry, material behavior, loading, etc.), the exact solution of these equations is practically very difficult and in some cases almost impossible ( Fig. 7 ) ( Nadal et al., 2013 ).

[Figure 7]

Therefore, the finite element method is an accepted and efficient method for solving the differential equations governing the behavior of an object (problem); its efficiency and speed have been proved in practice, and it generally follows the steps below ( Zhang & Kratzig, 1995 ; Persson & Strang, 2004 ; NguyenVan, Mai-Duy & Tran-Cong , 2007 ; Nadal et al., 2013 ). A minimal one-dimensional example is given after the list of steps.

General description of finite element method

  • 1. Discretization of the structure into a mesh.
  • 2. Presenting the physical behavior governing each element quantitatively.
  • 3. Selecting a suitable interpolation (displacement) model.
  • 4. Deriving the element stiffness matrices and force vectors.
  • 5. Assembling the element equations to obtain the global equilibrium equations.
  • 6. Calculating the displacements at the nodes.
  • 7. Calculating the stresses and strains of the elements.
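To make the step sequence tangible, here is a minimal, self-contained one-dimensional finite element example (an axially loaded bar, far simpler than the plane-stress plates used later in this paper): it assembles element stiffness matrices, applies a boundary condition, and solves for nodal displacements and stresses. The material and load values are arbitrary illustrations.

```python
import numpy as np

# Minimal 1D FEM sketch: a bar fixed at the left end and pulled at the right end.
# This only illustrates steps 1-7 above (mesh, element stiffness, assembly,
# boundary conditions, solve, post-processing); all values are arbitrary.

E, A, L = 210e9, 1e-4, 1.0        # Young's modulus [Pa], cross-section [m^2], length [m]
n_elem = 4                        # step 1: discretize the bar into 4 equal elements
n_nodes = n_elem + 1
le = L / n_elem

k_e = (E * A / le) * np.array([[1.0, -1.0],
                               [-1.0, 1.0]])   # steps 2-4: element stiffness matrix

K = np.zeros((n_nodes, n_nodes))
for e in range(n_elem):                         # step 5: assemble the global equations
    K[e:e + 2, e:e + 2] += k_e

F = np.zeros(n_nodes)
F[-1] = 1000.0                                  # 1 kN axial load at the free end

# Boundary condition: node 0 is fixed (u_0 = 0), so solve the reduced system.
u = np.zeros(n_nodes)
u[1:] = np.linalg.solve(K[1:, 1:], F[1:])       # step 6: nodal displacements

strain = np.diff(u) / le                        # step 7: element strains and stresses
stress = E * strain
print("displacements [m]:", u)
print("stresses [Pa]:", stress)
```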

Finite element method optimization algorithm steps

  • 1. Preprocessing: first, the geometric model of the rectangular plane and the mesh model of the problem are constructed; then the loading and boundary conditions are applied to the model.
  • 2. Solving the finite element model: the system equations are assembled and the global equations are solved at this stage.
  • 3. Post-processing: preparing and displaying the results.

Stress in thin plates

Man-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, such as cutting, drilling, gentle bending, and welding along the edges. The description of stress in thin plates is simplified by treating the plate as a two-dimensional surface and neglecting the third dimension of the body. Under this assumption, particles are considered to cover the plate’s surface, so that the boundary between adjacent particles is a tiny line element. The surfaces of the particles are oriented such that their normal vectors pass straight through the plate. The stress can then be defined as the resultant of the internal forces between two adjacent particles across their shared line element, divided by the length of that line. Usually some components of the stress tensor can be ignored; however, since the particles are only considered in two dimensions, the third dimension cannot be neglected for the torque between neighboring particles ( Chandrupatla et al., 2002 ; Huston & Josephs, 2008 ; Smith, 2013 ).

Conceptual model

In this study, we design and develop a mapping, an optimal layout allocation model based on the natural stress distribution pattern (exploiting the behavioral similarity of the stress extension pattern), using the finite element computational and analytical method. The minimum total potential energy theory is used for the analysis, and the claimed capabilities of the new model are verified on the basis of the new heuristic algorithm.

To use the stress distribution analogy for storage assignment, it is first necessary to define the conceptual mapping between the classical model and the finite element method, in order to match up some of the definitions, rules, and activities; then an algorithm needs to be designed and implemented to prove that the new model is efficient and to show its advantages, as illustrated in Table 2 and Fig. 8 .

[Figure 8]

To elaborate this mapping, as illustrated in Table 2 , the properties of the classical model are first defined as follows:

A-1) The warehouse is a rectangle.

A-2) The storage area of the warehouse is unobstructed, with shelving at surface level.

A-3) The storage area has four walls (parallel to each other).

A-4) The storage area is divided into smaller, uniform areas (squares, in this model) according to the requirements of the warehouse.

A-5) In this model, the I/O doors are defined and installed on parallel (facing) walls.

A-6) It is possible to define two doors (I/O points) only on parallel walls, and the doors are logically defined at the wall positions that are farthest apart on the marginal points of the warehouse surface segmentation.

A-7) The storage area is either loading or unloading during any one period.

A-8) In this model, according to the number of doors, the share of goods entering/leaving through each door is defined as a percentage of the total supply capacity of the warehouse.

A-9) In a coordinated manner, over a given period, the whole warehouse (all doors) is either in the entry state or in the exit state.

Considering the abovementioned properties, the paper restates all the classical features from an FEM perspective with the least possible variation. In the new proposed model the features are defined as:

B-1) A rectangular plate is considered as the surface of the storage area.

B-2) The plate is made of a soft, homogeneous metal with a small thickness.

B-3) The boundary conditions are applied to the plate such that the sides with no entry or exit doors (I/O points) are restrained.

B-4) The warehouse surface is meshed with square elements according to the classification of the classical model.

B-5) In the new model, the entry/exit doors are defined on the pair of parallel sides with the greatest distance between them.

B-6) Loading points are defined, at the same positions as the classical I/O points, on the nodes resulting from the meshing of the non-restrained sides.

B-7) Over any one period, the loading at all points is purely compressive or purely tensile.

B-8) In the new model, the share of loading at the I/O points is calculated as a percentage of a unit load, like the proportion of goods entering/leaving through each door in the classical model.

Simulation and analysis

Recalling the material presented in the sections on the COI policy model, its numerical example, and the finite element method, and in order to use the stress flux analogy for storage assignment problems, the area in example 1 is considered, according to the conceptual model, as a plate with the same dimensions and divisions, and the percentages of goods entering/leaving are simulated as external forces at the same positions as the I/O points in the warehouse. The results show that the stress distribution spreads from the locations with maximum stress concentration (the load points), which have the maximum total potential energy, toward the region with the lowest stress and the minimum potential energy ( Fig. 9 ).

[Figure 9]

For a more exact analysis, the average stress (calculated at the center of each element) shows that the elements with the maximum stress and highest potential energy are close to the loading points, while those with the lowest mean stress and lowest potential energy lie at the opposite end, farthest from the loading points ( Fig. 10 ).

[Figure 10]

Comparing the result of the classical model (presented in Figs. 3 and 4, ‘Storage Assignment Example’) with the new model ( Fig. 10 ) clarifies that, when the stress distribution calculated by the finite element method is related to the I/O points, the locations preferred for storage assignment, based on minimum stress and least total energy, follow the same pattern as in the classical model, so the approach can be used as a new model for storage optimization ( Figs. 11 and 12 ). The storage procedure diagrams in Figs. 13 and 14 show that both models follow almost the same strategy up to step three, but the procedures in steps four and five differ.

[Figure 11]

SAO/FEM Algorithm - application

The SAO/FEM algorithm ( Fig. 12 ) is designed to check the similarity of the two analogies, to prove that storage assignment with the new algorithm is possible, and to show its additional advantages. The SAO/FEM application is designed to solve the same problem with both algorithms simultaneously, analyze the output data, compare the results on a given computer, and demonstrate the capabilities of the newly proposed algorithm and its advantages over the classical model.

The graphical user interface (GUI) shown in Fig. 15 was designed for the SAO/FEM application. It has three sections (Modeling, FEM, and Classic) for data entry, in which the specification data must be entered in order. The Modeling part has two sections: the first contains the plate’s physical specifications, and the second the dimensional information of the model together with its divisions based on the unit of length. In the FEM part, boundary conditions are applied to the model area and the sides without I/O points are restrained; then loads are applied to the model at the points corresponding to the I/O points, proportional to the input and output percentages of the goods. In the Classic part, the locations of the I/O points and each point’s entry and exit percentage of the goods are defined.

[Figure 15]

After data entry, the analysis phase starts, and the software output shows the load expansion and stress analysis (the elements with the maximum stress and the highest potential energy are close to the loading points, while the lowest mean stress and lowest potential energy occur at the opposite end).

After the stress analysis, the application prepares a comparable model ( Fig. 12 ): it calculates the average stress at the center of each element and then sorts the elements so that those with the minimum average stress and minimum potential energy end up nearest the I/O points (the virtual loading points), and it displays the corresponding graphs. In parallel, the application runs the classical calculation of the ideal assignment of goods in the warehouse and displays its graph.

The application then calculates the index Ti/Si (which determines each good’s storage priority) in both algorithms and arranges the goods, in order of priority, on the elements from the lowest average stress to the highest in the FEM model and from the nearest to the farthest distance in the classical model. The next phase is the comparison phase ( Fig. 12 ). In this phase, SAO/FEM normalizes the result matrices of both algorithms, creating new matrices whose elements range from 0 to 1 (independent of dimensions). The graphical format of a sample of the results is shown in Figs. 13 and 14, parts C and D.

Finally, the SAO/FEM application compares the normalized matrices to calculate the percentage of conformity between the storage assignments produced by the two algorithms, along with the calculation times. As shown in the SAO/FEM schematic diagram, the output of both methods must pass through a data normalization filter so that they can be compared with one another.
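The normalization and comparison step can be sketched as follows; the conformity measure used here (the share of cells whose normalized values agree within a tolerance) is an assumption for illustration, as the paper does not spell out its exact metric.

```python
import numpy as np

# Sketch of the comparison phase: min-max normalize both result matrices to [0, 1]
# and report how closely they agree. The tolerance-based conformity measure is an
# assumed illustration; the matrices below are made-up values.

def normalize(m):
    m = np.asarray(m, dtype=float)
    return (m - m.min()) / (m.max() - m.min())

def conformity(classic, fem, tol=0.1):
    """Percentage of cells where the two normalized matrices differ by at most `tol`."""
    a, b = normalize(classic), normalize(fem)
    return 100.0 * np.mean(np.abs(a - b) <= tol)

if __name__ == "__main__":
    classic = [[1, 2, 3], [2, 3, 4]]          # illustrative distance-index results
    fem = [[0.9, 2.1, 3.2], [2.0, 3.1, 4.0]]  # illustrative average-stress results
    print(f"conformity: {conformity(classic, fem):.1f}%")
```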

Graphical user interface simulation

Simulation is basically a set of assumptions about the behavior of a system within a framework of mathematical and logical relations. In this application, MATLAB is used for the computer simulation. Because this software provides easy access to matrix arithmetic and functions, supports a range of algorithms, and communicates easily with other programming languages, it is well suited to implementing algorithms with mathematical complexity. Given these capabilities and its user-friendliness, it has been used here to implement the finite element method and to integrate it with the classical method.

The GUI lets the user communicate with the application in a graphical, highly efficient format; it is an attempt to escape text-only settings in which all instructions must be run, and data entered and retrieved, through program text without a GUI.

Numerical examples

To verify the capabilities of the proposed solution, several numerical examples were solved with the SAO/FEM application and the results were compared. The SAO/FEM application solves each problem simultaneously with both algorithms and calculates the similarity of their performance along with the solution times.

The verification started with scenario 1, in which SAO/FEM solved the same problem discussed in ‘Simulation and analysis’, which had been solved manually with the classical method and separately with FEM software (the storage area of this problem is divided into 40 equal elements). The SAO/FEM results show that the storage assignment performance of the FEM model is 90% similar to that of the classical model, with better calculation-time performance ( Fig. 16 ).

[Figure 16]

In scenario 2, the SAO/FEM application solved a storage problem with a storage area four times larger than in scenario 1 (the storage area is divided into 160 equal elements).

The SAO/FEM results show that the storage assignment performance of the FEM model is 95.8% similar to that of the classical model, with better calculation-time performance ( Fig. 17 ).

[Figure 17]

Tables 3 and 4 show the SAO/FEM application’s outputs for different problems and confirm the better performance of the FEM model. Figs. 18 – 22 demonstrate the storage assignment performance of the proposed FEM model and its similarity to the classical model for the scenarios in Tables 4 and 5. The results show that the SAO/FEM method is successful for storage assignment optimization and has additional capabilities and advantages:

[Figure 18]

  • 1. Higher computational speed
  • 2. More computational accuracy
  • 3. The ability to prioritize the arrangement without the restrictions of the classical model
  • 4. Ability to apply any changes to storage priority in the shortest time

In conclusion, for solving storage assignment problems, the SAO/FEM method is efficient and more effective than the classical model.

[Figure 19]

Conclusions

This paper has opened a new line of work in solving storage assignment problems with an application inspired by a powerful computing method, the finite element method. The finite element method first analyzes the problem under logical assumptions and then, after obtaining the governing differential equations, solves the problem numerically. In the finite element method, the domain is treated as a continuum and divided into simpler geometric shapes; shape functions satisfying the boundary conditions are defined, the element equations are assembled, and the results are obtained within the prescribed limits.

The advantages of the finite element method are its ease of use, its speed, and its continuum view of problems, which has shown interesting results in problem solving, together with its strong ability in discretization, partial solution, and analysis of a problem. The classical model determines the ideal assignment through a cost equation calculation that is based only on the storage distance from the I/O points and does not rank the distribution. The proposed model, by contrast, focuses on the distribution priority of the goods over the storage points and, with proper bordering of the warehouse, can handle an unlimited number and variety of goods (independent of the coefficients and frequency of consumption of the goods) to produce the ideal storage priority arrangement. Such a calculation is practically difficult for more than three product types in the classical model, making the problem hard to solve. Moreover, the flexibility of product storage and the mathematical tools developed in this direction, together with the comparison of solution times against the classical method in the specified order, further clarify the method’s applicability. The main issue affecting the accuracy of the finite element results is the size of the mesh and the type of element chosen for the solution. The type of element chosen is not important for the assignment of goods in the warehouse, but the size and boundaries of the mesh used are important, because in this type of warehouse problem the divisions are determined by the dimensional specifications of the goods; and since there is no geometric complexity (the warehouse surface is considered flat at the storage level), the method achieves high accuracy in its calculations.

In this paper, a new method called SAO/FEM was introduced for storage assignment optimization. An application based on the new algorithm, built on the finite element method and the minimum potential energy theorem, was designed and implemented, and to prove its capabilities it was compared with the classical model on common problems. Creating a graphical user interface (GUI) is also an effort to let the user communicate with the computer in a graphical, highly efficient format, escaping the text-only setting in which all instructions must be run, and data entered and retrieved, through program text without a GUI.

Future research studies can consider:

  • 1. Optimization of the SAO/FEM algorithm for greater added value in terms of sustainability (energy), storage adjusted for the weight characteristics of different inventories, and the traveling work done with respect to the minimum total potential energy definitions.
  • 2. Optimization of the SAO/FEM algorithm by changing the locations of the I/O points (opposite or adjacent), the distances between the I/O points, or the grid/mesh size.
  • 3. Practical validation of the SAO/FEM algorithm, using the warehouse information of companies at the level of the third or fourth industrial generation.

In addition, the flexibility of product storage and the mathematical tools developed in this direction (with the definition and application of a uniform force), together with the comparison of solution times against the classical method in the specified order, will further clarify the applicability of the method proposed in this paper.

Funding Statement

The authors received no funding for this work.

Additional Information and Declarations

The authors declare there are no competing interests.

Seyed-Kourosh Tabatabaei conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft.

Omid Fatahi Valilai conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.

Ali Abedian analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

Mohammad Khalilzadeh conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.


Optimizing storage assignment, order picking, and their interaction in mezzanine warehouses

  • Open access
  • Published: 06 February 2023
  • Volume 53, pages 18605–18629 (2023)


  • Veronika Lesch   ORCID: orcid.org/0000-0002-7275-0738 1 ,
  • Patrick B.M. Müller 2 ,
  • Moritz Krämer 3 ,
  • Marius Hadry 1 ,
  • Samuel Kounev 1 &
  • Christian Krupitzer 4  


In warehouses, order picking is known to be the most labor-intensive and costly task, and the employees account for a large part of warehouse performance. Hence, many approaches exist that optimize the order picking process based on diverse economic criteria. However, most of these approaches focus on a single economic objective at a time and disregard ergonomic criteria in their optimization. Further, the influence of the placement of the items to be picked is underestimated, and accordingly too little attention is paid to the interdependence of these two problems. In this work, we aim at optimizing the storage assignment and the order picking problem within mezzanine warehouses with regard to their reciprocal influence. We propose a customized version of the Non-dominated Sorting Genetic Algorithm II (NSGA-II) for optimizing the storage assignment problem as well as an Ant Colony Optimization (ACO) algorithm for optimizing the order picking problem. Both algorithms incorporate multiple economic and ergonomic constraints simultaneously. Furthermore, the algorithms incorporate knowledge about the interdependence between both problems, aiming to improve the overall warehouse performance. Our evaluation results show that our proposed algorithms return better storage assignments and order pick routes compared to commonly used techniques for the following quality indicators for comparing Pareto fronts: Coverage, Generational Distance, Euclidean Distance, Pareto Front Size, and Inverted Generational Distance. Additionally, the evaluation regarding the interaction of both algorithms shows a better performance when combining both proposed algorithms.

1 Introduction

Warehouses play a central role in the supply chain of a company and contribute to its logistical success. When human pickers are employed, picker-to-parts and parts-to-picker systems are distinguished [ 12 ]. Experts estimate the picker-to-parts system to be the most common in Western Europe, with a share of over 80% [ 13 ]. A well-known picker-to-parts system is the mezzanine warehouse, which we address in this work.

Working within a mezzanine warehouse consists of two main tasks: (i) filling the storage with goods (storage assignment) and (ii) picking items out of the storage (order picking). The storage assignment problem defines the task of selecting storage locations to put a product into storage. The order picking problem defines the task of computing a pick route that collects the requested products of a customer order. Finding suitable storage allocations is important, as the allocation of products affects the travel distances during order picking. Due to the NP-hardness and, hence, the complexity of the storage assignment and the order picking problem, efficient optimization algorithms are required to find satisfying solutions within acceptable times. In the literature, many approaches exist for optimizing both warehouse problems. However, most approaches target only one of the two warehouse problems; some works address both problems but fail to integrate the interrelation between them and treat each problem separately [ 8 ]. However, as identified by [ 9 ], warehouse problems are strongly coupled. Thus, optimizing each warehouse problem individually may yield suboptimal solutions, harming the overall warehouse performance. Since the employees spend most of their time traveling in such a mezzanine warehouse [ 13 ], it is not surprising that most approaches focus on optimizing the travel distance. Additionally, ergonomic constraints are rarely considered, even though mezzanine warehouses represent labor-intensive working environments.

In this paper, we propose an integrated approach for combined storage assignment and order picking that simultaneously optimizes multiple economic and ergonomic constraints in mezzanine warehouses. Expert interviews have shown that, in practice, the following set of economic criteria is important and is therefore supported by our approach: products should be spread equally among each floor, fast-moving products should be easily accessible, correlated products should be stored in proximity of each other, and the storage space should be used as efficiently as possible. Further, we integrate ergonomic constraints such as storing heavy products and fast-moving products at grip height or reducing the need to switch mezzanine floors. In an evaluation using three simulated mezzanine warehouses of different sizes, we analyze the quality of the solutions returned by our algorithms compared to commonly used techniques. Finally, we assess the quality improvement when combining both of our algorithms compared to an isolated application. Hence, the contribution of this paper is threefold:

  • Design of storage allocation and order picking algorithms that incorporate the interdependence of both tasks.

  • Integration of diverse economic and ergonomic constraints.

  • Evaluation of the approach in a use case based on real-world data provided by our cooperation company.

The remainder of this paper is structured as follows. Section  2 presents related work and delineates our paper from existing approaches. Section  3 presents the meta-model and floor layout of considered mezzanine warehouses. Afterwards, Section  4 provides an overview of the goal and a 3-phase algorithm of our storage assignment approach, while Section  5 presents the details of the proposed Genetic Algorithm for storage assignment. Then, Section  6 shows our order picking approach based on an adapted Ant Colony Optimization (ACO) algorithm. Section  7 presents our evaluation methodology and discusses the results and threats to validity. Finally, Section  8 concludes the paper and summarizes future work.

2 Related work

In the literature, diverse storage assignment policies exist, such as the dedicated and the random storage policy [ 2 ], the closest open location storage policy [ 13 ], and rank-based storage policies [ 24 ]. Further, class-based, golden zone, and family grouping storage policies are introduced in the literature [ 13 , 23 ]. Additionally, diverse approaches apply optimization techniques. Sooksaksu et al. [ 28 ] propose a Particle Swarm Optimization algorithm for warehouses that deploy the class-based storage policy. Kovács [ 14 ] presents a mixed integer programming model for optimizing the storage assignment problem for class-based assigned warehouses. Kofler et al. [ 11 ] apply local search algorithms for reorganizing the products in the warehouse to keep it operating efficiently. Li et al. [ 19 ] propose a multi-objective genetic algorithm for optimizing the storage assignment problem in automated storage/retrieval warehouses.

Similarly, heuristic policies exist for the order picking problem, such as the S-Shape, Return, Mid-Point, Largest Gap, and Combined heuristics [ 22 , 26 , 29 ]. Besides, [ 25 ] presents an optimal algorithm using dynamic programming to find the shortest pick route in a single-block warehouse. Additionally, [ 5 ] propose a mathematical model in combination with construction heuristics and apply Tabu Search to construct order picking routes. Ene and Öztürk [ 7 ] present an integer programming model for optimizing the order picking problem. Xing et al. [ 32 ] propose a Max-Min Ant System (MMAS) algorithm for optimizing machine travel paths in automated storage/retrieval warehouses. Chen et al. [ 4 ] propose an ACO algorithm that detects congestion situations that arise when multiple order pickers traverse the same pick aisle simultaneously. Lee et al. [ 16 ] present a comparison of different multi-objective evolutionary algorithms for order picking and storage assignment. Other approaches apply the information of orders and order picking to optimize the storage assignment [ 21 , 30 ]. However, such approaches are not flexible enough for our setting, which is not order-driven and therefore requires the dynamic interaction between storage assignment and order picking to be considered.

Finally, related work also assesses the interaction of storage assignment and order picking approaches. Petersen and Schmenner [ 24 ] and [ 8 ] provide an overview of well-performing combinations of storage assignment strategies and routing heuristics. Manzini et al. [ 20 ] analyze different parameters that affect the travel time in single-block warehouses that deploy the class-based storage policy. Shqair et al. [ 27 ] study the effects of different parameters on the travel distance in multi-block warehouses.

Our work differs from these existing approaches in diverse aspects. First of all, our work applies optimization techniques and does not rely on a policy for selecting fitting storage racks or shortest pick routes. Second, regarding existing optimization approaches, our work integrates multiple objectives at once, considering economic as well as ergonomic constraints, while most of the other approaches focus on a single economic goal. Similarly, the authors of [ 3 ] also integrate ergonomic considerations, but they do not focus on optimizing the storage assignment but rather the order picking only. Finally, in contrast to existing work that addresses the influence of storage assignment and order picking tasks, we designed algorithms that optimize the targets of both tasks. Hence, they optimize storage assignment and order picking with regard to the interdependence of both algorithms, while other works only provide well-performing combinations of algorithms or perform parameter tuning. Similarly, [ 16 ] also integrates both activities but does not follow a multi-objective approach, especially neglecting the ergonomic constraints. The work of [ 17 ] also integrates both activities; however, it applies a digital twin based approach rather than considering multi-objective optimization algorithms.

3 Meta-Model of considered mezzanine warehouses and overview on the processes

The storage assignment and order picking algorithms require information on the warehouse layout, the product assortment, the products’ storage locations, and the current state of the warehouse. A list of all variables used in this section can be found in the Appendix in Table  11 . Figure  1 illustrates our proposed meta-model. The blue box describes the floor layout defining the arrangement of racks within one floor of the mezzanine warehouse ( FloorLayout ). Each floor consists of the classes P/D-Point , WidePickAisle , and Rack . A p/d-point is the pickup and delivery point where personnel collect items to be stored in the warehouse or deliver the items of a customer order that were collected. Regarding the class WidePickAisle , two types of pick aisles exist: wide and narrow pick aisles. In wide pick aisles, pickers can take along their pick cart to cross the aisle, while it needs to be parked at the aisle entry for narrow pick aisles. A floor can be illustrated as a two-dimensional map as depicted in Fig.  2 : The racks with their unique identifiers r 3 and r 4 are assigned the floor coordinates x = 1 and y = 2 since their access points are both located at (1|2). The vertical aisles located at x = 0 and x = 4, as well as the horizontal cross aisles at y = 0, y = 4, and y = 7, form the periphery of the floor. Periphery aisles usually contain the p/d-points (e.g., at (2|0)). A wide pick aisle is depicted at x-coordinate two, and two narrow pick aisles are shown at x-coordinates one and three, where the picker needs to park his pick cart. Real-world mezzanine warehouses may apply different layouts on each floor; however, we assume that each floor in the mezzanine warehouse applies the same layout.

Figure 1: The meta-model describes the structure and state of mezzanine warehouses

Figure 2: Example mezzanine floor layout from top-down view

Since diagonal movements are not possible in this layout, the Manhattan distance function is applied to calculate the distance between two locations p and q (with n being the number of dimensions of the coordinates for p and q):

\(d(p, q) = \sum_{i=1}^{n} |p_{i} - q_{i}|\)

The classes inside the yellow box ( Compartment and RackConfiguration ) define the configuration of a rack, referring to its size, the number of shelf levels, and the number of compartments per shelf level. The Compartment class includes an identifier and a three-dimensional vector specifying the compartment’s dimensions. The shelf level and the shelf level position define the compartment’s location within the rack. The class Product defines the products using five properties: product number, size, weight, rank, and order frequency. The rank (≥ 1) allows identifying fast- and slow-moving products by the frequency at which the product appears in recent customer orders; the product of rank 1 represents the most frequently ordered product. The order frequency describes how often a product is usually ordered, modeled using a Gaussian distribution. Both properties are derived from recent customer orders and represent redundant information, which prevents the algorithms from recalculating this information each time they need it. Further, these properties are later used in the storage assignment optimization to find better racks regarding order frequency and the usually ordered amount. The class ProductAssignment specifies the quantity in which a product is assigned to a specific compartment. The classes Order and OrderLine of the orange package define the structure of a customer order consisting of a unique order number and multiple order lines. An order line specifies the quantity in which a product is ordered. The class AssociationRule defines association rules derived by the Apriori algorithm [ 10 ]. The confidence ranges from 0 to 1 and expresses the strength of the correlation between the left-sided and the right-sided set of products. These rules are later used in the storage assignment algorithm to store correlated products close to each other, which may increase the order picking performance.

Figure  3 presents the two processes for storage assignment as well as order picking. Storage assignment relies on the NSGA-II algorithm for an optimal distribution of the items in the mezzanine warehouse (see Fig.  3a ). First, the algorithm distributes the incoming product across the mezzanine floors with the goal of reducing the need for changing floors during order picking. Second, on each floor, the algorithm searches for appropriate racks to store the incoming product based on the following four objectives: frequently requested quantity, spread of items, distance to pick-up/delivery points, and correlated products. The NSGA-II algorithm optimizes these four objectives. Third, the NSGA-II algorithm selects compartments for storing the incoming product based on two criteria: (1) fast-moving products should be assigned to compartments at grip height, and (2) heavy products should be assigned to lower-level compartments. Order picking also follows a three-step approach (see Fig.  3b ). First, the algorithm constructs a graph representation based on the information of the pick list as well as the mezzanine floor layout. Second, the ACO algorithm applies artificial ants that explore the graph. Third, the quality of the computed pick route is assessed based on the travel distance (economic goal) and the weight violations of the items (ergonomic goal). In the following, we discuss these different algorithms.

Figure 3: Schematic overview of the storage assignment (above) and order picking (below) processes

4 Storage assignment

The overall goal of the storage assignment algorithm is to select a set of compartments for storing an incoming product by considering multiple economic and ergonomic constraints simultaneously.

4.1 Constraints and assumptions

In expert interviews, we identified multiple hard constraints that should be covered by our approaches. These hard constraints specify whether a storage allocation is considered feasible, and a feasible solution never violates any of these constraints: Each incoming item must be assigned to a compartment ( H C 1 ). The selected compartment must either be empty or partially occupied by items of the same product ( H C 2 ). Each item has to fit into the remaining free space of its compartment ( H C 3 ). Furthermore, we define multiple soft constraints that measure the extent to which a storage allocation fulfills economic criteria: The products should be evenly spread on each floor ( S C 1 ). Fast-moving products should be assigned close to a p/d-point ( S C 2 ). The mean ordered quantity of a product should be locally available ( S C 3 ). Correlated products should be stored close to each other ( S C 4 ). The storage space should be used as efficiently as possible ( S C 5 ). Finally, we define two ergonomic soft constraints: Heavy products should be stored at grip height ( S C 6 ). Fast-moving products should be assigned to compartments at grip height ( S C 7 ).

Further, we state the following assumptions for our approach: The state of the warehouse does not change while the storage assignment algorithm is running. Thus, the products are not repositioned nor removed, and the racks’ configurations do not change. Further, the algorithm allocates only one product at a time. The storage racks may apply different rack configurations and products may only be assigned to fitting compartments. A compartment is allowed to store multiple items of the same product but may not store two different products at the same time. Finally, product ranks and association rules are derived from recent customer orders.

4.2 3-Phase storage assignment algorithm

Our storage assignment algorithm consists of three phases that intend to reduce the complexity of the optimization problem: (i) assignment of products to floors, (ii) assignment to racks w.r.t. economic criteria, and (iii) assignment to compartments w.r.t. ergonomic criteria.

In the first phase, the incoming product quantity is split among the mezzanine floors ( S C 1 ) so that each floor provides the same quantity of the product. This way, we try to reduce the required floor changes during a pick route to a minimum. Thus, we first determine the total quantity of the incoming product that is already available on each floor, calculate the ideal quantity for each floor after storage assignment, and assign the missing quantity to each floor. Remaining items, due to rounded results, are allocated to a random floor.

The second phase addresses the economic soft constraints S C 2 to S C 5 and aims to reduce travel distances during order picking. This phase assigns the incoming products to racks on a specific floor. Since this phase requires optimizing a set of constraints, we apply a multi-objective optimization algorithm that is described in Section  5 .

The third phase aims to satisfy the ergonomic soft constraints S C 6 and S C 7 . We classify a product p into three weight classes: light (up to 3 kg), medium (between 3 kg and 7 kg), and heavy (over 7 kg). We set the grip height to be between 0.75 m and 1.25 m and refer to compartments below/above the grip height as low/high zone compartments. Additionally, we distinguish fast-moving , moderately-moving , and slow-moving products by their relative rank. The relative rank of a product p is calculated as r a n k p /| P |, where r a n k p denotes the rank of product p and | P | the size of the product assortment. In the first step, the incoming items are assigned to the rack’s compartments that already provide items of the same product. In the second step, the remaining incoming items are assigned to the rack’s unoccupied compartments based on predefined penalty values. The penalty values range from zero to three, and the more a compartment comp is unsuited for storing the product p , the more penalty points are given (see Tables  1 and  2 ).
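
As a rough illustration of this third phase, the following Python sketch classifies a product by weight and movement class and looks up a penalty for a compartment zone. The weight limits and grip-height bounds follow the text, whereas the movement-class thresholds, the penalty tables, and combining both penalties via max() are hypothetical, since Tables 1 and 2 are not reproduced here.

```python
def weight_class(weight_kg: float) -> str:
    if weight_kg <= 3.0:
        return "light"
    if weight_kg <= 7.0:
        return "medium"
    return "heavy"

def movement_class(rank: int, assortment_size: int) -> str:
    rel_rank = rank / assortment_size          # relative rank in (0, 1]
    if rel_rank <= 1 / 3:                      # class thresholds are an assumption
        return "fast"
    if rel_rank <= 2 / 3:
        return "moderate"
    return "slow"

def compartment_zone(bottom_height_m: float) -> str:
    # Grip height is defined as 0.75 m to 1.25 m in the text.
    if bottom_height_m < 0.75:
        return "low"
    if bottom_height_m <= 1.25:
        return "grip"
    return "high"

# Hypothetical penalty tables: 0 = perfectly suited, 3 = least suited.
WEIGHT_PENALTY = {
    ("heavy", "grip"): 0, ("heavy", "low"): 1, ("heavy", "high"): 3,
    ("medium", "grip"): 0, ("medium", "low"): 1, ("medium", "high"): 2,
    ("light", "grip"): 1, ("light", "low"): 0, ("light", "high"): 0,
}
MOVEMENT_PENALTY = {
    ("fast", "grip"): 0, ("fast", "low"): 2, ("fast", "high"): 2,
    ("moderate", "grip"): 1, ("moderate", "low"): 1, ("moderate", "high"): 1,
    ("slow", "grip"): 2, ("slow", "low"): 0, ("slow", "high"): 0,
}

def compartment_penalty(weight_kg: float, rank: int, assortment_size: int,
                        zone: str) -> int:
    """Penalty of a compartment zone ('low', 'grip', 'high') for a product."""
    return max(WEIGHT_PENALTY[(weight_class(weight_kg), zone)],
               MOVEMENT_PENALTY[(movement_class(rank, assortment_size), zone)])
```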

5 Genetic algorithm for storage assignment

This section presents our customized version of the NSGA-II algorithm, which was originally proposed by [ 6 ]. The algorithm receives the current state of a floor and assigns the incoming items to a set of racks on this floor. Note that the NSGA-II is executed for each floor individually. A list of all variables used in this section can be found in the Appendix in Table  12 .

5.1 Chromosome encoding

We propose the chromosome encoding depicted in Fig.  4 .

Figure 4: A chromosome encodes the racks selected for storing the incoming items

The figure illustrates an example allocation task where ten items of product p 1 must be assigned to the racks on f l o o r 1 . The black numbers indicate the existing items of product p 1 , while the red numbers indicate the incoming items of product p 1 . The right side shows the chromosome that encodes the storage allocation depicted on the left side by specifying the racks selected for storing each incoming item. Since ten items of product p 1 are assigned, the chromosome’s length equals 10.
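
A minimal sketch of this encoding, assuming rack identifiers as genes; the rack names and quantities are illustrative only.

```python
import random

incoming_quantity = 10                              # ten incoming items of product p1
fitting_racks = ["r1", "r2", "r3", "r4", "r7", "r9"]

def random_chromosome(quantity: int, racks: list[str]) -> list[str]:
    """Create a random storage allocation: gene i holds the rack for incoming item i."""
    return [random.choice(racks) for _ in range(quantity)]

chromosome = random_chromosome(incoming_quantity, fitting_racks)
assert len(chromosome) == incoming_quantity         # length equals the number of items
```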

5.2 Objective functions

A set of objective functions guides the NSGA-II algorithm to find good storage allocations. We propose four domain-specific objective functions for our maximization problem: (i) spread score, (ii) distance score, (iii) quantity score, and (iv) correlation score.

5.2.1 Spread score

This score addresses constraint S C 1 and aims to spread the incoming quantity of product p equally across the entire floor. Hence, we divide the floor f j into multiple areas A of equal size. To calculate the spread score, we use the total ( t o t a l Q ) and ideal quantity ( i d e a l Q ) of a product in an area of a floor. The t o t a l Q is the sum of the existing and incoming items in an area, while the i d e a l Q is calculated by dividing the sum of the existing and incoming quantity of product p on the floor by the number of defined areas. The final spread score for chromosome C is calculated as the sum of differences between the total and the ideal quantity over all areas (see ( 2 )).
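
A possible reading of this computation is sketched below; interpreting the "sum of differences" as absolute deviations and negating it so that larger values are better in the maximization setting are our assumptions.

```python
def spread_score(area_quantities: dict[str, int]) -> float:
    """area_quantities: existing + incoming quantity of product p per floor area."""
    num_areas = len(area_quantities)
    ideal_q = sum(area_quantities.values()) / num_areas
    deviation = sum(abs(total_q - ideal_q) for total_q in area_quantities.values())
    return -deviation   # 0 is the best possible value (perfectly even spread)

# Example: 10 items spread over 4 areas of the floor.
print(spread_score({"A1": 3, "A2": 2, "A3": 3, "A4": 2}))   # -2.0
```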

5.2.2 Distance score

This score addresses constraint S C 2 and aims to allocate slow-moving products to racks further away from the p/d-points. Hence, the distance score quantifies the extent to which the walking distances ( \(dist_{r_{i}}\) ) of the selected racks match the ideal distance ( \(idealDist_{p, f_{j}}\) ). For calculating the i d e a l D i s t , we perform the following steps: First, we determine the relative rank of the incoming product p by dividing the rank of the product ( r a n k p ) by the size of the product assortment P : r e l R a n k p = r a n k p /| P |. Then, the relative rank is mapped to a rack index: \(rackIdx_{p,f_{j}} = relRank_{p} \cdot |R_{f_{j}}|\) with \(R_{f_{j}}\) being the list of racks of floor f j sorted by the racks’ walking distances to their closest p/d-point. The rack in \(R_{f_{j}}\) at index r a c k I d x represents the best-suited rack for storing product p with regard to constraint S C 2 . Finally, the i d e a l D i s t is the walking distance of this rack: \(idealDist_{p,f_{j}} = R_{f_{j}}[rackIdx_{p,f_{j}}].distance\) . The overall distance score is calculated as the sum, over all racks in chromosome C , of the differences between the walking distances of the racks selected for storing product p and the i d e a l D i s t (see ( 3 )).
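
The following sketch mirrors the described steps for the ideal distance; the rack data structure and the aggregation of the final score (again as negated deviations) are assumptions.

```python
def ideal_distance(product_rank: int, assortment_size: int,
                   racks_sorted_by_distance: list[dict]) -> float:
    """racks_sorted_by_distance: racks of the floor sorted by their walking
    distance to the closest p/d-point, e.g. {"id": "r3", "distance": 12.0}."""
    rel_rank = product_rank / assortment_size
    rack_idx = min(int(rel_rank * len(racks_sorted_by_distance)),
                   len(racks_sorted_by_distance) - 1)
    return racks_sorted_by_distance[rack_idx]["distance"]

def distance_score(selected_rack_distances: list[float], ideal_dist: float) -> float:
    # Assumption: the score is the negated sum of deviations from idealDist,
    # so that 0 is again the best value in the maximization setting.
    return -sum(abs(d - ideal_dist) for d in selected_rack_distances)
```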

Further, we provide an example of this calculation in Fig.  5 .

Figure 5: Calculating the ideal distance for storing the incoming product p 98

5.2.3 Quantity score

This score assesses S C 3 and ensures that the mean ordered quantity of a product is locally available. Therefore, the target quantity defines the quantity in which the product p should be locally available based on a set of recent customer orders: t q p = ⌈ μ p + 2 σ p ⌉ with μ p as the expected value of the ordered quantity and σ p as its standard deviation. Further, we define four masks and a modifier for each mask to measure the density in which the t q p is locally available: M 1 equals the size of a rack ( m a s k M o d = 1), M 2 equals the size of two facing racks ( m a s k M o d = 0.75), M 3 is a sliding window with half the sub aisle’s length ( m a s k M o d = 0.5), and M 4 covers an entire sub aisle ( m a s k M o d = 0.25). Using these masks, we calculate a quantity factor for each sub aisle ( sa ) of a floor and each mask ( M k ). Therefore, we divide the quantity ( q ) of products inside a mask by the target quantity: \(qFactor_{p,f_{j},sa_{l}} = max(q_{p,f_{j},sa_{l}}(M_{k}) / tq_{p})\) . This results in a value of 1 if the target quantity is met and a value of 0 if no products can be found within this mask. This quantity factor is then multiplied by the m a s k M o d to calculate the mask score: \(maskScore_{p,f_{j},sa_{l}}(M_{k}) = maskMod_{M_{k}} \cdot qFactor_{p,f_{j},sa_{l}}(M_{k})\) . The highest possible mask score is 1, indicating that the target quantity is available in a single rack of the sub aisle. Based on these mask scores, the maximum value is selected to assign a score to each sub aisle: \(subAisleScore_{p,f_{j},sa_{l}} = max_{k=1}^{4} maskScore_{p,f_{j},sa_{l}}(M_{k})\) . The final quantity score is computed as the sum of all s u b A i s l e S c o r e s (with | S A | as the number of sub aisles):
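
A simplified sketch of this score is given below; capping the quantity factor at 1 and normalizing the final score by the number of sub aisles are our assumptions based on the description.

```python
import math

MASK_MODIFIERS = {"M1": 1.0, "M2": 0.75, "M3": 0.5, "M4": 0.25}

def target_quantity(mean_ordered: float, std_ordered: float) -> int:
    return math.ceil(mean_ordered + 2 * std_ordered)          # tq_p = ceil(mu + 2*sigma)

def sub_aisle_score(best_mask_quantities: dict[str, int], tq: int) -> float:
    """best_mask_quantities: largest quantity of product p found by each mask
    anywhere in the sub aisle, e.g. {"M1": 4, "M2": 6, "M3": 8, "M4": 8}."""
    scores = []
    for mask, quantity in best_mask_quantities.items():
        q_factor = min(1.0, quantity / tq)                    # 1 if the target quantity is met
        scores.append(MASK_MODIFIERS[mask] * q_factor)
    return max(scores)

def quantity_score(per_sub_aisle: list[dict[str, int]], tq: int) -> float:
    # Assumption: the sum of sub-aisle scores is normalized by |SA|.
    return sum(sub_aisle_score(q, tq) for q in per_sub_aisle) / len(per_sub_aisle)
```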

Figure  6 illustrates the idea of using masks of different sizes to measure the density to which the target quantity t q p of product p is locally available. The left side shows the storage locations of existing and incoming items of product p in a specific sub aisle sa . The center of the figure depicts the four masks M k that iterate over the racks of the sub aisle. During this process, the masks count the existing and incoming quantities of product p that can be found in the covered regions. The right side shows the regions where the masks find the largest quantity of product p in the sub aisle sa .

Figure 6: The masks M k count the quantities of product p in the covered regions

5.2.4 Correlation score

This score relates to S C 4 and describes the extent to which the incoming product is stored close to its correlated products. Association rules describe correlations between products and can be derived from recent customer orders. We consider association rules of the form \(rule = \{p\} \xrightarrow {conf} \{cp\}\) , where p denotes the incoming product, cp the correlated product, and conf a confidence value. We first calculate the number of possible clusters of target quantities of the incoming product: \(qClusters_{p,f_{j}} = \lfloor totalQ_{p,f_{j}} / tq_{p} \rfloor \) . We use this value to define the ideal quantity in which the correlated product should be available in the vicinity of the incoming product: \(idealCorrQ_{rule,f_{j}} = \lceil qClusters_{p,f_{j}} \cdot tq_{cp} \cdot conf(rule)\rceil \) . In the next step, we determine the quantity of cp that is already available in the vicinity of p . For this task, the previously introduced masks M k are used and placed directly on top of the racks containing cp . Again, the q F a c t o r is calculated to capture the extent to which the target quantity of p is available in the region covered by M k placed on top of rack r : q F a c t o r p , r ( M k ) = q p , r ( M k )/ t q p . Then, we calculate the fraction of the items of cp stored in r that are considered to be in the vicinity of p : \(corrQ_{rule, r}(M_{k})= exQ_{cp, r} \cdot qFactor_{p, r}(M_{k}) \cdot maskMod_{M_{k}}\) , where e x Q c p , r refers to the existing quantity of the correlated product cp in rack r . Afterward, we select the c o r r Q with the highest value, representing the mask with the largest amount of p in the vicinity of cp : \(corrQ_{rule, r}= \max \limits _{k=1}^{4}~corrQ_{rule, r}(M_{k})\) . The sum of all c o r r Q r u l e , r over all racks on this floor denotes the quantity of cp on this floor that is considered to be in the vicinity of p : \(corrQ_{rule, f_{j}}= {\sum }_{rack \in R_{f_{j}, cp}}~corrQ_{rule, r}\) . We have now calculated the quantity of the correlated product that is in the vicinity of the incoming product and the difference of this value to the ideal quantity. Based on this difference, the correlation score is calculated as follows (with A p as the set of association rules) (Fig.  7 ):

Figure 7: Calculating the quantity of the correlated product p 2 that is available in the vicinity of the incoming product p 1

5.3 Genetic operators

The NSGA-II is a genetic algorithm and requires the definition of selection, crossover, and mutation operators.

5.3.1 Selection

We apply a binary tournament selection operator where two random parent individuals compete against each other [ 6 ]. The individual with the better (i.e., lower) Pareto rank is declared the winner and is allowed to participate in the crossover procedure. In case both parents are of equal Pareto rank, the individual with the larger crowding distance, i.e., the higher diversity, wins the tournament.
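
A minimal sketch of this selection operator, assuming each individual carries its Pareto rank (lower is better) and crowding distance.

```python
import random
from dataclasses import dataclass

@dataclass
class Individual:
    chromosome: list[str]
    pareto_rank: int = 1            # 1 = non-dominated front
    crowding_distance: float = 0.0

def binary_tournament(population: list[Individual]) -> Individual:
    """Pick two random parents; the lower Pareto rank wins, ties broken by crowding distance."""
    a, b = random.sample(population, 2)
    if a.pareto_rank != b.pareto_rank:
        return a if a.pareto_rank < b.pareto_rank else b
    return a if a.crowding_distance >= b.crowding_distance else b
```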

5.3.2 Crossover

Since all chromosomes created during a run of the NSGA-II algorithm are of equal length, we use the traditional single-point crossover operator. It selects a random crossover point on both parents’ chromosomes, splits them, and recombines them cross-wise to obtain two new children.
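
A sketch of the single-point crossover on two equal-length chromosomes (lists of rack identifiers).

```python
import random

def single_point_crossover(parent_a: list[str],
                           parent_b: list[str]) -> tuple[list[str], list[str]]:
    """Cut both parents at the same random point and recombine them cross-wise."""
    assert len(parent_a) == len(parent_b) and len(parent_a) >= 2
    point = random.randint(1, len(parent_a) - 1)     # cut between two genes
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2
```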

5.3.3 Mutation

We define eight mutation operators that incorporate domain-specific knowledge to guide the search process: (1) The FillRack mutator selects a random rack and fills it with incoming items from the same sub aisle. (2) The MoveRack mutator selects a random rack containing at least one incoming item and moves these items to a different rack within the same sub aisle. (3) The FillSubAisle mutator selects a random sub aisle and fills it with incoming items from other sub aisles until it provides the product’s target quantity. (4) The ClearSubAisle mutator selects a random sub aisle and moves any incoming items to a different sub aisle. (5) The RedistributeExceedingQuantities mutator redistributes incoming items from racks that provide more items than the target quantity to racks that require only a few items to reach the target quantity. (6) The ShiftRacks mutator shifts all incoming items towards a randomly selected direction: left, right, up, or down. (7) The SwapSubAisles mutator first groups the sub aisles into pairs and swaps incoming items randomly within each pair. (8) The SwapRacks mutator is similar to (7) but swaps items within pairs of racks instead of sub aisles.

5.4 NSGA-II algorithm

The overall procedure of our NSGA-II algorithm is summarized in Algorithm 1.

Algorithm 1: Proposed NSGA-II algorithm

The algorithm receives the product to be stored and its quantity as well as the list fittingRacks . Further, the parentPopSize defines the size of the parent population, the mutation probability is given by mutProb , L denotes the number of generations used when calculating the standard deviation of the maximum crowding distance s t d ( L ), δ l i m is the threshold for this standard deviation, and maxGen is the maximum number of generations. In the end, the algorithm returns a paretoFront of the best storage assignments.

In the first step, the algorithm initializes the population by randomly creating the required number of chromosomes. To this end, the algorithm selects fitting racks for the product randomly, which might produce invalid solutions due to exceeded rack capacities. Each invalid chromosome is then repaired by moving the exceeding items to another available rack. Then, the generation counter gen and the history of observed maximum crowding distances are initialized. The while loop then iterates using the following two stopping criteria: (i) the maximum number of generations ( maxGen ) is reached, or (ii) the standard deviation of observed crowding distances ( s t d ( L )) falls below the given threshold ( δ l i m ). Inside the while loop, the generation counter is incremented, and a complete new children population of the size of the parent population is bred using the proposed selection, crossover, and mutation operators ( createChildrenPopulation ). This set is merged with the existing parent individuals into a combined population, from which the best individuals are selected to fill the new parent population ( createNextParentPopulation() ). Afterwards, a Pareto front is calculated from this parent population ( calculateParetoFront() ), and the maximum crowding distance of this front is computed and added to the history of maximum crowding distances. When the while loop stops, the current Pareto front is returned.

Since the NSGA-II algorithm returns a Pareto front, a user is usually required to identify the most valuable trade-off solution. However, we automate this step by applying the following procedure. For each of the four objective functions ( o f i ), we select the solution ( s j ) of the Pareto front with the highest value ( o f i ( s j )) for this function. We then use these values as a 4-dimensional reference point ( p r e f = [ e 1 , e 2 , e 3 , e 4 ]). Based on the Euclidean distance, the solution that is closest to the reference point is automatically selected as the most valuable trade-off solution.
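
This automated selection can be sketched as follows, assuming each solution exposes its four objective values as a tuple and that all objectives are to be maximized.

```python
import math

def select_trade_off(pareto_front: list[tuple[float, float, float, float]]) -> tuple:
    """Return the solution closest (Euclidean distance) to the per-objective best values."""
    # Reference point: best (maximum) value achieved for each of the four objectives.
    reference = [max(sol[i] for sol in pareto_front) for i in range(4)]
    return min(pareto_front, key=lambda sol: math.dist(sol, reference))

# Example with three Pareto-optimal solutions.
front = [(0.9, 0.2, 0.8, 0.5), (0.6, 0.7, 0.7, 0.6), (0.3, 0.9, 0.4, 0.9)]
print(select_trade_off(front))
```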

6 Order picking

This section introduces our order picking approach, which is based on ACO. The overall goal of this algorithm is to construct a pick route for a given customer order. Since the travel distance is an essential economic goal, the pick route should be as short as possible. Additionally, the pick route should also be ergonomically favorable. The need for changing floors should be minimal to reduce the order picker’s physical stress. Further, the product picking sequence is relevant: if light products are picked first, the order picker might later need to rearrange the already picked products so that light products are placed on top of heavy products. Hence, the order picking algorithm aims to construct a short pick route that collects heavy products first and changes floors as little as possible to address economic and ergonomic criteria. A list of all variables used in this section can be found in the Appendix in Table  13 .

The main idea of this approach is to represent a mezzanine warehouse as a graph and let ants search for satisfactory order picking sequences. We make the following assumptions to better deal with the complexity of the order picking problem: (i) The state of the mezzanine warehouse does not change while the algorithm is running, that is no repositioning or removal of products is performed. (ii) The start and ending p/d points of a pick route may differ. (iii) Narrow sub aisles may only be traversed to the sub aisles’ midpoint, as the picker always must go back to the cart in the wide pick aisle. (iv) The order pickers visit only one rack each time they enter a sub aisle. (v) Picking carts withstand infinite loads and can carry an unlimited amount of items.

6.1 Constraints

For the order picking algorithm, we define a set of hard and soft constraints. The hard constraints assess the feasibility of a solution, while the soft constraints measure the extent to which the solution fulfills economic and ergonomic goals. We define the following hard constraints: The pick route must start and end at a p/d-point ( H C 1 ). The pick route must collect the requested quantities of the products specified in the pick list ( H C 2 ). After entering a narrow sub aisle, the route must always return to the sub aisle’s entrance ( H C 3 ). Further, we define one economic soft constraint: The travel distance should be minimal ( S C 1 ); and two ergonomic soft constraints: The need for changing floors should be minimal ( S C 2 ). Heavy products should be picked first, followed by lighter products ( S C 3 ).

6.2 Graph representation

We propose the following procedure for transferring a mezzanine warehouse into a graph representation. Figure  8 illustrates the procedure of dividing the warehouse into multiple zones called market zones.

Figure 8: The floor is divided into multiple market zones

Each market zone is represented by a market, and thus, a node in the graph. The figure depicts the state of f l o o r 1 that consists of three cross aisles, three wide (pick) aisles, and two p / d-points. We define six market zones obtained by dividing the floor along the wide (pick) aisles into multiple vertical lanes. In the depicted example, l a n e 1 refers to the area from a i s l e 0 to p i c k A i s l e 3 , and l a n e 2 refers to the area from p i c k A i s l e 3 to a i s l e 6 . A c r o s s L a n e ( c , l ) refers to the part of the cross aisle c that lies within the lane l . For each cross lane, we define a market zone that comprises the storage racks that can be visited from the respective cross lane up to their midpoints. For example, the red market zone includes the racks that can be visited if the order picker is located at the c r o s s L a n e (1,1) . The market zones are limited to the midpoints of the corresponding sub aisles, which prevents the ants from constructing pick routes that entirely traverse the pick aisles. A market is referred to as m a r k e t ( f , c , l ) , where f denotes the floor, c the cross aisle, and l the lane. For each market, we define three attributes: (1) the market’s coordinates , (2) the market’s closest p / d-point , and (3) the market’s supply that specifies which products are available at which quantity. After defining all markets, they are connected via edges to create a complete directed graph. The edges’ weights represent the Manhattan distances between the markets. If the warehouse consists of a second f l o o r 2 , the markets on f l o o r 1 are also connected to the markets on f l o o r 2 and vice versa, with an extra f l o o r P e n a l t y added to the edges’ weights.

6.3 Pick route construction

An ant colony explores the graph to construct a set of pick routes, i.e., sequences of markets that provide the products, for a given pick list. A pick route consists of two layers: (i) a market sequence and (ii) a rack sequence.

Figure  9 depicts an example pick route created by a single ant of the colony.

Figure 9: A pick route consists of a market sequence and a rack sequence

The market sequence (layer one) of a pick route is computed by an ant that is placed on a market within the graph. Guided by the pheromone trails, the ant visits neighboring markets until it has collected the requested product quantities specified in the pick list. The ant manages a purchasing list that specifies the missing items; the pick route is complete once the ant’s purchasing list is empty. Further, each ant must decide whether it enters a market zone from the left or from the right side, which depends on the position of the previously visited market. The left/right entrance is located at the position where the cross lane has its lowest/highest x-coordinate value. While constructing pick routes, the ant applies a heuristic function to identify the markets within its vicinity that seem attractive to visit next.

The second layer represents the rack sequence, i.e., the racks the ant visited in each market. When calculating the rack sequence, the ants use the following priority rules: (1) Racks that provide heavy products should be visited first. (2) Racks located closer to the sub aisle’s entrance should be visited second. (3) Racks that provide the largest quantities should be visited third.

6.4 Heuristic function

To identify the most promising paths and assess the attractiveness of a market, the ants apply a heuristic function. The attractiveness of a market is based on two factors: (i) the closeness of the market to the ant’s current location, and (ii) the availability of required items. Thus, we define the heuristic function as follows:

where \(\eta _{m,n}^{k}\) is the heuristic value that the ant k currently located at market m associates with the edge ( m , n ) leading to market n . U k is the set of markets the ant has not visited yet, and d m , n > 0 refers to the Manhattan distance between the markets. \({I_{n}^{k}} \in [0;1]\) denotes the percentage to which the required items of ant k are available at market n . The higher the heuristic value, the more attractive the market is for the ant.
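
Since the formula itself is not reproduced above, the following sketch assumes the straightforward combination of the two stated factors, i.e., item availability divided by the Manhattan distance, defined only for unvisited markets.

```python
def heuristic_value(distance_m_n: float, item_availability: float,
                    n_is_unvisited: bool) -> float:
    """distance_m_n > 0: Manhattan distance between markets m and n;
    item_availability in [0, 1]: share of the ant's missing items offered by n."""
    if not n_is_unvisited:
        return 0.0                      # only unvisited markets are attractive
    return item_availability / distance_m_n
```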

6.5 Objective functions

After retrieving possible pick routes from the algorithm, we use two objective functions to assess the quality of a route.

6.5.1 Travel distance

This objective function calculates the travel distance of a pick route and measures the extent to which the soft constraints S C 1 and S C 2 are satisfied. We define a pick route P to be P = ( M , R ), where M = ( m 1 ,..., m k ) refers to the market sequence and R = ( r 1 ,..., r l ) refers to the rack sequence. We then define the objective function as follows:

where we sum up the distance from the start p / d-point to the first market, the sum of the distances within each entered sub aisle ( \(d_{r_{i}}^{sub}\) ), the sum of the distances within the cross lanes ( \(d_{m_{i}}^{cross}\) ), the distances between the visited markets ( \(d_{(m_{i}, m_{i+1})}^{market}\) ), and the distance from the last visited market to its closest p/d-point ( \(d_{m_{k}}^{pd}\) ).

6.5.2 Weight violation

The second objective function measures the extent to which a pick route satisfies the soft constraint S C 3 and counts the number of weight violations in the product picking sequence. A weight violation occurs if a heavy product is collected after a much lighter product. In this case, the order picker must rearrange the lighter products already placed on the picking cart to prevent damage. The user-specified threshold allowedWeightDifference defines the acceptable weight difference between the heavier and the lighter products. Using this threshold, we count the number of weight violations in a product picking sequence.
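
A sketch of this count is shown below; comparing each pick against all earlier picks (rather than only the directly preceding one) is our assumption.

```python
def count_weight_violations(pick_weights: list[float],
                            allowed_weight_difference: float = 3.0) -> int:
    """Count picks whose weight exceeds an earlier pick's weight by more than the threshold."""
    violations = 0
    for i, later in enumerate(pick_weights):
        for earlier in pick_weights[:i]:
            if later - earlier > allowed_weight_difference:
                violations += 1
    return violations

# Example: picking a 9 kg item after 2 kg and 4 kg items yields two violations.
print(count_weight_violations([2.0, 4.0, 9.0]))   # 2
```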

6.6 ACO algorithm procedure

This section presents our proposed ACO algorithm, whose pseudo-code is shown in Algorithm 2. First of all, the algorithm constructs the graph and initializes the pheromones. The pheromones are initialized with their maximum possible value determined by τ m a x . Additionally, a minimum pheromone value can be specified via τ m i n in the parametrization of the algorithm. Then, a while loop starts that uses the concept of cataclysms [ 4 ] and a maximum number of iterations as stopping criteria: The parameter maxCataclysms specifies the maximum number of cataclysms that may occur. The parameter maxconsIterWoImpr defines the time window in which the ACO algorithm must improve the current Pareto front to prevent the cataclysm operator from being applied. The parameter maxIter defines the maximum allowed number of iterations regardless of the cataclysms that have occurred.

Inside the loop, the iteration counter is incremented and pick routes are constructed. The general idea is to place one ant on each market of the graph, from which the ant starts to create a pick route. The next market is selected based on the pheromone values and the heuristic function. We propose two different versions of the ACO to combine these values, as explained later. For each found pick route, the reverse pick route is calculated by reversing the market sequence, toggling the sides from which the ant entered the markets, and recalculating the rack sequence. We store the pick routes the ants construct in each iteration in the variable pickRoutes . In the next step, the Pareto-optimal pick routes of this iteration are selected by calculating the objective functions and the Pareto rank of all routes. Afterward, the iteration-best ( pickRoutes ib ) and the global-best pick routes ( nextPickRoutes gb ) are merged into a single set, and the Pareto-optimal pick routes in this set represent the next set of global-best pick routes. The iteration-best pick routes and the global-best pick routes are used to perform the pheromone update, which is explained later. In the further course of the iteration, the ACO algorithm checks whether the cataclysm operator must be applied and compares the global-best pick routes of the last and the current iteration. If the ACO algorithm succeeded in improving the Pareto front, the set pickRoutes gb is updated, and the counter variable consIterWoImpr is reset to 0. However, if no improvement was made, this counter variable is incremented. If multiple consecutive iterations fail to achieve an improvement, the search is considered stuck, and the cataclysm operator is applied. In case the cataclysm is applied, the global-best pick routes pickRoutes gb are included in the set pickRoutes cataclysm , the pheromones on the edges representing the pick routes in pickRoutes gb are reset to the lowest possible value, and the set pickRoutes gb is emptied. Then, the number of cataclysms is incremented and the counter variable consIterWoImpr is reset to 0. After the main loop terminates, the global-best pick routes pickRoutes gb of the last iteration are included in the set pickRoutes cataclysm , and the algorithm returns the Pareto-optimal pick routes in this set.

6.7 ACO 3 variant

In the following, we introduce two variants of our algorithm that differ in their pheromone handling. Both variants are inspired by [ 1 ], which proposes four different variants to handle multi-objective problems with an ACO. We select the two best-performing variants (ACO 3 and ACO 4 ) and integrate them into our approach to compare which variant produces the best results in our problem domain. This section introduces the ACO 3 variant, which applies one ant colony using a single pheromone matrix τ 1 for optimizing both objectives simultaneously. In each construction step, the probability of selecting an edge is calculated as:

where \(prob_{m,n}^{k}\) denotes the probability of ant k located at market m to select the edge ( m , n ) leading to market n . \(\tau _{m,n}^{1}\) refers to the pheromone value of edge ( m , n ). \(\eta _{m,n}^{k}\) denotes the heuristic value (see Formula ( 6 )) that the ant associates with the edge ( m , n ). The parameters α and β control the importance of the pheromone values and heuristic values. Lastly, U k represents the set of markets that ant k has not visited yet.
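
A sketch of this edge selection, assuming the standard pheromone/heuristic combination weighted by α and β and normalized over the ant's unvisited markets.

```python
import random

def select_next_market(current: str, unvisited: list[str],
                       pheromone: dict[tuple[str, str], float],
                       heuristic: dict[tuple[str, str], float],
                       alpha: float = 1.0, beta: float = 2.0) -> str:
    """Sample the next market proportionally to pheromone^alpha * heuristic^beta."""
    weights = [(pheromone[(current, n)] ** alpha) * (heuristic[(current, n)] ** beta)
               for n in unvisited]
    total = sum(weights)
    probabilities = [w / total for w in weights]
    return random.choices(unvisited, weights=probabilities, k=1)[0]
```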

When performing the pheromone update, the ACO 3 variant uses the iteration-best pick routes 90% of the time and the global-best pick routes (found since the last cataclysm) 10% of the time to update the pheromone matrix τ 1 . The pheromone values are updated according to the following rule [ 1 ]:

where ρ refers to the evaporation factor and \({{\varDelta }} \tau _{m,n}^{1}\) is the amount of pheromone that is added to the edge ( m , n ). PF refers to the Pareto front containing the solutions to be rewarded.

6.8 ACO 4 variant

The ACO 4 variant also applies one ant colony but uses a pheromone matrix τ 1 for optimizing the first objective function and another pheromone matrix τ 2 for optimizing the second objective function. When deciding which edge to explore next, an ant randomly chooses one of the pheromone matrices. In each construction step, the probability of selecting an edge is calculated as [ 1 ]:

where \(\tau _{m,n}^{i}\) refers to the pheromone value of edge ( m , n ) w.r.t. the randomly chosen pheromone matrix τ i . At the end of an iteration, the ACO 4 variant updates the pheromone matrix τ i by rewarding the iteration-best pick route \(PR_{ib}^{i}\) that minimizes the objective function o f i [ 1 ]:

where ρ again refers to the evaporation factor and \({{\varDelta }} \tau _{m,n}^{i}\) is the pheromone added to the edge ( m , n ) in pheromone matrix τ i . \(PR_{gb}^{i}\) refers to the global-best pick route that minimizes the i th objective function of all pick routes constructed since the last cataclysm occurred.

7 Evaluation

This section presents the evaluation of our approaches. It defines the warehouse models used for applying our algorithms, presents performance indicators, summarizes the alternative policies to which we compare our algorithms, and provides the parameter settings of our algorithms. Afterwards, we first evaluate our storage assignment and order picking algorithms individually before we evaluate the interaction of both algorithms.

7.1 Mezzanine warehouse models

The NSGA-II and the ACO algorithm are evaluated in three artificial mezzanine warehouses of different sizes that are defined in cooperation with our cooperation company to build real-world test cases. The warehouses are shown in Fig.  10 : W H s m a l l (yellow), W H m e d i u m (orange), and W H l a r g e (red). For the small, medium, and large warehouses, we define the size of the product assortment to be 500, 1000, and 1500, respectively. Since each product requires a weight, we define three normal distributions and a probability of drawing the weight from each distribution: 25% for \(\mathcal {N}(2, 1.0^{2})\) , 50% for \(\mathcal {N}(5, 2.0^{2})\) , and 25% for \(\mathcal {N}(8, 1.0^{2})\) . Using these distributions and probabilities, we aim at a representative set of product weights where most products have a medium weight, some have low weights, and some have high weights. The products might also have correlations with up to three other products: With a probability of 30%, 40%, 20%, and 10%, a product has no, one, two, or three correlated products, respectively, with a randomly generated correlation confidence between 10% and 90%. For evaluating the order picking algorithm, we fill the storage up to 50% of the available storage space and randomly generate 100 customer orders based on the product assortment and the given correlations between products. Each customer order comprises 20 items to pick that are selected as follows: We split the product assortment into four equally sized groups based on the product rank. With a probability of 40%, 30%, 20%, and 10%, an order contains an item of the highest, second highest, third highest, and lowest rank class, respectively, which ensures that high-ranked products appear more often in customer orders.

Figure 10: The warehouses W H s m a l l , W H m e d i u m , and W H l a r g e use different floor layouts

7.2 Performance indicators for assessing Pareto fronts

Since we assess a multi-objective optimization problem, the algorithms compute a Pareto front. We use the following quality indicators for Pareto fronts introduced by [ 31 ]. The Coverage (Cov) quality indicator quantifies the extent to which a computed Pareto front covers the reference Pareto front. The quality indicators Generational Distance (GD) and Euclidean Distance (ED) measure the distance from a computed Pareto front to a reference Pareto front or a reference solution. The quality indicators Pareto Front Size (PFS) and Generated Spread (GS) measure the diversity of the solutions that exist in a computed Pareto front. Finally, the quality indicator Inverted Generational Distance (IGD) combines convergence and diversity aspects. Since most of these quality indicators require a reference Pareto front, we use the Pareto fronts returned by all algorithms as a basis: we select the non-dominated solutions from the union of all computed Pareto fronts and use this front as the reference Pareto front.

7.3 Alternative policies

We use the following alternative policies for the storage assignment problem. The random storage assignment policy allocates the incoming items to random racks on the floors in clusters of target quantity size [ 2 ]. In the closest open location storage assignment policy , the warehouse employees select the storage locations for storing an incoming product, which are usually the racks closest to the p / d-points [ 13 ]. The rank-based storage assignment policy assigns fast-moving products close to the p / d-points, while slow-moving products are assigned to racks further away [ 24 ].

For the order picking problem, we apply a modified S-Shape heuristic for comparison that constructs s-shaped pick routes based on the graph representation [ 22 ]. This heuristic iteratively uses each market as a starting point, as well as the reversed version of each route, to generate a Pareto front of possible solutions.

7.4 Algorithm parameter settings

Based on a preliminary parameter study, we parameterize our NSGA-II algorithm as follows: We set the mutation probability to 0.95 for all warehouse sizes so that the mutation operators are applied very frequently. Additionally, we set the crossover probability to 1.00 so that the crossover operator is applied for all generations. Further, we define parameters dependent on the warehouse size (small/medium/large): The parent population size is set to (50/60/70), and the maximum number of generations to (200/250/300). These values increase with the size of the warehouse since the number of possible solutions increases with the warehouse size and we provide the algorithm more exploration possibilities (population size) and more time (number of generations) for optimizing the solutions.

Further, we set the parameters of our ACO algorithm as follows: In line with the literature, we set the pheromone factor α to 1.0 and the heuristic factor β to 2.0. We set the evaporation factor ρ to 0.02, causing the pheromones to evaporate rather slowly, which enables the algorithm to achieve a higher degree of exploration, especially in the early stages. The min/max values for the pheromone matrices ( τ m i n / τ m a x ) are set to 1 and 25, respectively; we add a floor change penalty of 50 and set the allowed weight difference to 3 kg. As a stopping criterion, we set the maximum number of cataclysms to 3, and hence the algorithm terminates after it has become stuck for the third time. Using the results of a preliminary parameter study, we set the maximum number of consecutive iterations without improvements to 20 and the maximum number of iterations to 250, since these values yield the best results w.r.t. the scores.
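
For readability, the parameter values listed in this section can be collected as follows; the dictionary names are illustrative only.

```python
# Parameter values restated from the text above; only the structure is ours.
NSGA2_PARAMS = {
    "mutation_probability": 0.95,
    "crossover_probability": 1.00,
    "parent_population_size": {"small": 50, "medium": 60, "large": 70},
    "max_generations": {"small": 200, "medium": 250, "large": 300},
}

ACO_PARAMS = {
    "alpha": 1.0,                  # pheromone factor
    "beta": 2.0,                   # heuristic factor
    "rho": 0.02,                   # evaporation factor
    "tau_min": 1, "tau_max": 25,   # pheromone bounds
    "floor_change_penalty": 50,
    "allowed_weight_difference_kg": 3,
    "max_cataclysms": 3,
    "max_cons_iter_without_improvement": 20,
    "max_iterations": 250,
}
```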

7.5 Evaluation of the NSGA-II algorithm for storage assignment tasks

We evaluate our NSGA-II algorithm against the random, closest open location, and rank-based storage assignment policies. We apply all approaches to the three warehouse sizes (Settings 1.a, 1.b, 1.c) and to five randomly generated storage assignment tasks, i.e., we select a random product from the product assortment and set the quantity to be assigned to the quantity already existing in the warehouse. We repeat the execution of the NSGA-II algorithm ten times to reduce random effects and present mean and standard deviation values. All generated solutions of all algorithms are then used to calculate the reference Pareto front required for the quality indicators. Table  3 summarizes the mean values and Table  4 shows the standard deviation values for this evaluation. The coverage (Cov) results for the small warehouse show that the NSGA-II Pareto front covers about 90% of the reference Pareto front, while the other approaches cover only around 9% and 1%. Since the NSGA-II shows the lowest GD and ED values, its Pareto front is located closest to the reference front. The NSGA-II algorithm finds around 48 solutions per problem instance, with a maximum possible value of 50 solutions for the small warehouse size. The other policies only construct 22 to 28 Pareto-optimal solutions, while their maximum possible value is set to 500. Further, the NSGA-II achieves the lowest GS and IGD values, which indicates that the solutions converge well towards the reference Pareto front and offer diverse solutions.

In the medium warehouse, the results show similar behavior. The table shows that the Pareto front \(PF_{nsga_{2}(c_{m})}\) covers approximately 93% of the reference Pareto front P F r e f . Except for some outliers, the alternative policies struggle to cover the solutions in P F r e f . The observed GD and ED values are fairly similar to the values in Setting 1.a. However, the standard deviations of the ED metric increased noticeably, which may be related to the larger search space where the solutions tend to be more spread out. Nevertheless, the Pareto front \(PF_{nsga_{2}(c_{m})}\) still achieves the lowest GD and ED values, indicating that this Pareto front converges best towards P F r e f . Concerning the PFS metric, the NSGA-II algorithm finds about 53 solutions per problem instance, while the alternative policies find fewer than 20 solutions per problem instance. The GS values of \(PF_{nsga_{2}(c_{m})}\) increased remarkably, which may be due to the larger parent population size and the larger search space that make it difficult for the NSGA-II algorithm to fill the gaps in the Pareto front so that all solutions are evenly distributed. Lastly, the IGD values of \(PF_{nsga_{2}(c_{m})}\) are close to 0, indicating that \(PF_{nsga_{2}(c_{m})}\) represents the entire reference Pareto front P F r e f in most cases.

Similarly, the large warehouse shows comparable results. The Pareto front \(PF_{nsga_{2}(c_{l})}\) covers about 99% of the reference Pareto front P F r e f , while P F r a n k covers only 1%. Thus, almost all solutions found by the rank-based policy are dominated by the solutions found by the NSGA-II algorithm. The mean GD value of \(PF_{nsga_{2}(c_{l})}\) equals 0, indicating that the entire Pareto front \(PF_{nsga_{2}(c_{l})}\) is part of P F r e f in almost all cases. Other than that, the observations made in the previous Setting 1.b also occur in this setting. Thus, the standard deviations of the ED metric further increase, the NSGA-II algorithm finds the most solutions per problem instance, the alternative policies find fewer solutions per problem instance, the GS values of \(PF_{nsga_{2}(c_{l})}\) further increase, and the IGD values of \(PF_{nsga_{2}(c_{l})}\) are closer to 0.

In summary, the results show that the random and the closest open location policies struggle to cover even a single solution in the reference Pareto front. Additionally, the NSGA-II algorithm outperforms the alternative policies in smaller warehouses. Further, the NSGA-II finds on average the most solutions per problem instance, although its solutions are less evenly distributed.

In addition to the quality evaluation, we also measure the mean execution time of the approaches for solving 50 problem instances in each warehouse size (the hardware setup is described in Section 7.7). The alternative policies achieve low execution times of about 0.17/0.30/0.50 seconds for the small/medium/large warehouse, which is due to their comparably simple operation. For the small/medium/large warehouse, the NSGA-II algorithm achieves execution times of about 2/6/15 seconds, which is due to the increased population size and iteration count for larger warehouses. The execution times of the NSGA-II algorithm may be considered acceptable, as the algorithm requires only a few seconds to find storage allocations that are remarkably better than those found by the alternative policies.

7.6 Evaluation of the ACO algorithm for order picking tasks

We evaluate both versions of our ACO algorithm against the modified S-Shape heuristic. We apply all approaches to the three warehouse sizes (Settings 2.a, 2.b, 2.c) using five customer orders randomly selected from the set of generated customer orders as explained earlier, repeat the execution of the ACO algorithms ten times to reduce random effects, and report mean and standard deviation values. All solutions generated by all algorithms are then used to calculate the reference Pareto front required for the quality indicators. Table 5 summarizes the mean values and Table 6 shows the standard deviation values for this evaluation. For the small warehouse, the Pareto fronts of the \(ACO_3\) and \(ACO_4\) variants cover 74% of the reference Pareto front, while the S-Shape heuristic fails to cover even a single solution. Both ACO algorithms achieve nearly the same GD and ED values, and the close-to-zero GD values show that many of their solutions are part of the reference front. The ACO algorithms find around ten solutions per problem instance, while the S-Shape heuristic finds only three solutions per problem instance. The S-Shape heuristic achieves the lowest, and hence best, GS values, but it is not meaningful to compare these values to the ACO ones, as its front contains only three solutions that are considerably worse than the solutions of the ACO algorithms. The ACO algorithms achieve low IGD values, indicating that both Pareto fronts converge well towards the reference front and provide diverse solutions.

Figure: Proposed ACO algorithm.
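As background for readers unfamiliar with ant colony optimization, the sketch below shows a generic, single-objective route-construction step driven by pheromone and inverse-distance heuristic information. It omits pheromone evaporation and deposit, the multi-objective handling, and the ergonomic criteria, so it should not be read as the \(ACO_3\) or \(ACO_4\) variant evaluated here; all names are illustrative assumptions.

```python
import random

def construct_route(start, pick_locations, pheromone, distance, alpha=1.0, beta=2.0):
    """One ant builds a pick route over all pick locations (simplified, single-objective step)."""
    route, current = [start], start
    remaining = set(pick_locations)
    while remaining:
        candidates = list(remaining)
        # Transition weight ~ pheromone^alpha * (1/distance)^beta, as in classical ACO
        weights = [
            pheromone.get((current, j), 1.0) ** alpha * (1.0 / distance(current, j)) ** beta
            for j in candidates
        ]
        nxt = random.choices(candidates, weights=weights, k=1)[0]
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

# Hypothetical usage with three pick locations and a toy distance function
dist = lambda a, b: abs(a - b) + 1
print(construct_route(0, [3, 7, 5], pheromone={}, distance=dist))
```

In a full ACO run, many ants construct routes per iteration, good routes deposit pheromone on their edges, and the pheromone evaporates over time, which gradually biases the colony towards short (and, in our setting, also ergonomically favorable) routes.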

The metrics of the medium warehouse show similar behavior as in the previous setting. As before, the Pareto front \(PF_{sShape}\) fails to cover even a single solution in the reference Pareto front \(PF_{ref}\). The Pareto front \(PF_{aco_{3}}\) covers approximately 69% of \(PF_{ref}\), while \(PF_{aco_{4}}\) covers only 33%. Thus, the \(ACO_3\) variant tends to find better pick routes than the \(ACO_4\) variant. The Pareto front \(PF_{aco_{3}}\) achieves the lowest GD and ED values among all computed Pareto fronts. Thus, \(PF_{aco_{3}}\) converges best towards \(PF_{ref}\), which is not surprising, as \(PF_{aco_{3}}\) covers most of the solutions in \(PF_{ref}\). The GD and ED values of \(PF_{aco_{4}}\) are slightly larger than those of \(PF_{aco_{3}}\), indicating that \(PF_{aco_{4}}\) does not converge as well towards \(PF_{ref}\). Compared to the previous setting, the GD and ED values of \(PF_{sShape}\) increased, which may be due to the larger search space. Concerning the PFS metric, the \(ACO_3\) variant finds about 12 solutions per problem instance, followed closely by the \(ACO_4\) variant with around 11 solutions per problem instance, while the S-Shape heuristic finds only about 4 solutions per problem instance. Regarding the GS metric, the solutions in \(PF_{aco_{4}}\) are slightly better distributed than the solutions in \(PF_{aco_{3}}\). With respect to the IGD metric, \(PF_{aco_{3}}\) achieves the lowest values, indicating that it converges well towards \(PF_{ref}\) and offers a high diversity of solutions.

In the large warehouse, the S-Shape heuristic is again unable to cover a single solution in \(PF_{ref}\). The Pareto front \(PF_{aco_{3}}\) covers 84% of the reference Pareto front \(PF_{ref}\), while \(PF_{aco_{4}}\) covers only 16%. Thus, most of the solutions found by the \(ACO_4\) variant are dominated by solutions found by the \(ACO_3\) variant. Accordingly, the \(ACO_4\) variant struggles to compete with the \(ACO_3\) variant in larger warehouses. Compared to the previous setting, the GD and ED values of \(PF_{aco_{4}}\) further increased, indicating that the distances between the solutions in \(PF_{aco_{4}}\) and the solutions in \(PF_{ref}\) became larger. The Pareto front \(PF_{aco_{3}}\) converges best towards \(PF_{ref}\), as it achieves the lowest GD and ED values. Regarding the PFS metric, both ACO variants find about 10 solutions per problem instance. The GS metric indicates that the solutions in \(PF_{aco_{4}}\) are marginally better distributed than the solutions in \(PF_{aco_{3}}\). Finally, the Pareto front \(PF_{aco_{3}}\) achieves the lowest, and thus best, IGD values, signaling that it converges best towards \(PF_{ref}\) and offers diverse solutions.

In summary, the ACO algorithms outperform the S-Shape heuristic in all warehouse sizes, and \(ACO_3\) and \(ACO_4\) show similar performance in smaller warehouses. With increasing warehouse size, the solutions found by the \(ACO_3\) variant dominate more and more solutions of the \(ACO_4\) variant. Hence, the \(ACO_3\) variant starts to find better pick routes than the \(ACO_4\) variant, while the \(ACO_4\) variant produces slightly better distributed solutions.

In addition to the quality evaluation, we also measure the mean execution time of the approaches for solving 50 problem instances in each warehouse size. The S-Shape heuristic takes around 0.15 seconds to compute routes. The \(ACO_3\) and \(ACO_4\) variants achieve roughly the same execution times in all warehouse sizes, around 1/3/6 seconds for \(WH_{small}\)/\(WH_{medium}\)/\(WH_{large}\). As the warehouse size increases, the graph consists of more markets, causing more ants to be deployed in each iteration. Still, we consider the ACO execution times acceptable, as the algorithms require only a few seconds to find noticeably better pick routes.

7.7 Evaluation of computational costs

Besides evaluating the performance of the proposed algorithms in terms of solution quality, we also measured execution times and evaluated the computational costs. The experiments were conducted on a MacBook Pro with a 2.2GHz Intel Core i7 processor, 16GB of RAM, and macOS Sierra 10.12.6. Each time measurement was repeated 50 times. Tables  7 and  8 report the mean and standard deviation for the initialization and execution phases in seconds.
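The measurements follow the usual pattern of repeating each timing run and aggregating mean and standard deviation. A minimal sketch of such a harness is shown below; the `solve` callable is a placeholder for the respective algorithm invocation, not our actual benchmarking code.

```python
import statistics
import time

def measure_execution_time(solve, instances, repetitions=50):
    """Time the solving of all instances with wall-clock resolution; aggregate mean and std."""
    samples = []
    for _ in range(repetitions):
        start = time.perf_counter()
        for instance in instances:
            solve(instance)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical usage: `solve` would wrap the NSGA-II or ACO call for one problem instance
mean_s, std_s = measure_execution_time(lambda inst: sum(range(10_000)), instances=range(50))
print(f"mean: {mean_s:.3f}s, std: {std_s:.3f}s")
```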

Table 7 shows the initialization times for the storage assignment and order picking algorithms. The initialization time increases with the warehouse size, as more data must be requested from the database. However, the measured initialization times must be interpreted with caution, as the initialization time depends more on the size of the database tables than on the warehouse's size. Initialization times for the order picking algorithms are much higher due to the graph construction, which makes the initialization time proportional to the number of markets. This also indicates that tuning the SQL queries would only partly reduce the initialization time of the order picking, while it could have a more significant impact on the storage assignment. The graph construction could be parallelized further to reduce the initialization time for order picking, as no data dependencies exist between the markets.
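Such a parallelization could, for example, distribute the per-market edge computations over worker processes, since they are independent of each other. The sketch below is a hedged illustration only; the `layout` object with its `markets` list and `distance` method is an assumed interface, not our actual data model.

```python
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def edges_for_market(layout, market):
    """Compute outgoing edges (distances to all other markets) for a single market."""
    return market, {
        other: layout.distance(market, other)
        for other in layout.markets
        if other != market
    }

def build_graph_parallel(layout, workers=4):
    """Build the routing graph; each market is processed independently, so the work parallelizes."""
    # Note: `layout` must be picklable, and on macOS/Windows this should be called
    # from within an `if __name__ == "__main__":` guard so worker processes can spawn.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(partial(edges_for_market, layout), layout.markets)
        return dict(results)
```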

Table  8 shows the execution times for the proposed algorithms as well as the execution time of the baselines. For storage assignment, alternative policies achieve low execution times of up to 0.5 seconds, which is not surprising, as they require only a few calculation steps to assign the incoming items to racks on the floor. The NSGA-II algorithm needs more time to compute the storage allocations and requires up to 15 seconds for the large warehouse. However, the execution times of the NSGA-II algorithm may be considered acceptable, as the algorithm requires only a few seconds to find storage allocations that are remarkably better than the ones found by the alternative policies.

For order picking, the alternative S-Shape heuristic requires about 0.15 seconds regardless of warehouse size. The \(ACO_3\) and \(ACO_4\) variants achieve roughly the same execution times across the different warehouse sizes, with approximately six seconds for the largest. The execution time increases with the warehouse size since the graph consists of more markets, causing more ants to be deployed in each iteration. Considering the noticeably better routes compared to the S-Shape heuristic, an execution time of a few seconds to find a picking route may be considered acceptable.

7.8 Evaluation of the interaction between NSGA-II and ACO algorithm

In the previous sections, we have shown the applicability of our algorithms for storage assignment and order picking in dedicated analyses. The results indicate that both algorithms outperform state-of-the-art solutions for those tasks. In this section, we evaluate the interaction between our proposed algorithms by assessing them in three settings: Section 7.8.1 determines whether \(ACO_3\) performs better on the NSGA-II-planned warehouse compared to the randomly planned warehouse; Section 7.8.2 performs a similar assessment for \(ACO_4\); Section 7.8.3 evaluates whether \(ACO_3\) or \(ACO_4\) performs better on the NSGA-II-planned warehouse. Tables 9 and 10 summarize the results.

7.8.1 Comparison of NSGA-II and randomly planned warehouses for ACO3

In this setting, we apply the \(ACO_3\) algorithm to all warehouse sizes (Settings 3.a, 3.b, 3.c) twice: once for the warehouse that used the NSGA-II algorithm for storage assignment and once for the randomly assigned warehouse. Again, we select five random items from the product assortment and set the quantity to be assigned equal to the quantity already existing in the warehouse. In the small warehouse, the Pareto front of the NSGA-II-planned warehouse covers the entire reference front, while the randomly planned warehouse does not cover a single solution in the reference front; hence, the GD and IGD values of the NSGA-II-planned warehouse are 0 and the ED values are minimal. The high GD and ED values of the randomly planned warehouse indicate that its Pareto front does not converge well towards the reference front. Thus, the solutions found in the random warehouse are considerably worse than the solutions found in the NSGA-II warehouse. In the medium warehouse, the Pareto fronts of the randomly and NSGA-II-planned warehouses achieve roughly the same quality indicator values as in the previous setting. However, the GD and ED metrics indicate that the results for the random warehouse unexpectedly converge better towards the reference front than in Setting 3.a. This could be due to the limited number of executed problem instances and needs to be assessed further with a higher number of problem instances. Nevertheless, the Pareto front of the random warehouse is still far from converging towards the reference front. In the large warehouse, the same observations can be made as in the previous settings, underlining that the \(ACO_3\) variant finds better pick routes in the NSGA-II warehouse than in the random warehouse. In summary, the evaluation results show that the NSGA-II algorithm and the \(ACO_3\) variant interact well, and the \(ACO_3\) variant profits from the NSGA-II algorithm that enforces our four economic constraints.

7.8.2 Comparison of NSGA-II and randomly planned warehouses for ACO4

This setting repeats Settings 3.a to 3.c for the \(ACO_4\) algorithm. We now discuss the results for Settings 4.a to 4.c. In the small warehouse, the Pareto front of the NSGA-II warehouse covers the entire reference Pareto front, while the random warehouse fails to cover a single solution. Thus, all solutions found in the random warehouse are dominated by the solutions of the NSGA-II warehouse. The GD and ED values for the random warehouse are higher than those of the NSGA-II warehouse, which shows that the Pareto front of the random warehouse is further away from the reference front. Hence, the solutions of the random warehouse are noticeably worse than the solutions found in the NSGA-II warehouse. In the medium warehouse, the Pareto front of the NSGA-II warehouse does not always cover the entire reference front, while the random warehouse covers at least one solution in the reference front in 7 of 50 repetitions. Thus, the \(ACO_4\) variant occasionally finds a few solutions in the random warehouse that are comparable with the solutions found in the NSGA-II warehouse. Despite these few outliers, the results show a similar behavior as in the previous setting. The evaluation in the large warehouse shows comparable results: the NSGA-II warehouse covers all solutions in the reference front in 49 of 50 repetitions, and the random warehouse manages to cover at least one solution in the reference front. In summary, the results show that the \(ACO_4\) variant also finds better pick routes if the warehouse applies the NSGA-II storage strategy, and both algorithms interact well with each other.

7.8.3 Comparison of ACO3 and ACO4 on NSGA-II-planned warehouses

This section investigates which ACO variant performs better if the warehouse applies the NSGA-II storage strategy. Both variants are applied to five randomly selected customer orders on all warehouse sizes (Settings 5.a, 5.b, 5.c). In the small warehouse, the Pareto front of the \(ACO_3\) algorithm covers approximately 80% of the reference front, while \(ACO_4\) covers only 66%. Both ACO variants converge well towards the reference front, as indicated by the low GD and ED values, find approximately ten solutions per problem instance, and achieve almost identical IGD values. However, the GS values indicate that the solutions of the \(ACO_4\) variant are better distributed than those of \(ACO_3\). In the medium warehouse, the Pareto front of \(ACO_3\) covers approximately 69% of the reference front, while that of \(ACO_4\) covers only 41%; thus, the solutions found by \(ACO_4\) tend to be dominated by those of \(ACO_3\). The GD and ED metrics indicate that the \(ACO_3\) Pareto front converges better towards the reference front. Similar to the small warehouse, the GS values indicate that the solutions of the \(ACO_4\) Pareto front are better distributed. In the large warehouse, \(ACO_3\) dominates \(ACO_4\) even more with regard to the coverage metric. Furthermore, the GD and ED values of \(ACO_4\) increased, indicating that the distances between the solutions in the \(ACO_4\) Pareto front and the solutions in the reference front became larger. Again, the solutions in the \(ACO_4\) Pareto front are slightly better distributed than the solutions in the \(ACO_3\) Pareto front. However, this time \(ACO_3\) achieves better IGD values, as it covers large parts of the reference front. In summary, we can state that with increasing warehouse size, the \(ACO_3\) variant finds better pick routes than the \(ACO_4\) variant, while both variants find approximately the same number of solutions per problem instance. However, the solutions found by the \(ACO_4\) variant are slightly better distributed than those of the \(ACO_3\) variant.

7.9 Threats to validity

We identified the following threats to the validity of our evaluation. First, the NSGA-II and ACO algorithms are evaluated in three mezzanine warehouses of different sizes. However, real-world mezzanine warehouses may consist of more floors, blocks, pick aisles, and racks than specified in the warehouses used for evaluation. Nevertheless, we are convinced that our defined warehouses form a representative set of mezzanine warehouses and can easily be extended for further evaluation runs. Second, since the algorithms are evaluated in warehouses that apply either the random or the NSGA-II storage strategy, the evaluation results may not be transferable to warehouses that apply different storage strategies. Even though the product assortment, the product correlations, the customer orders, and the storage allocations are randomly generated, they reflect specific characteristics of real-world mezzanine warehouses, and the proposed algorithms are easily transferable to real application data. Third, we decided to compare the ACO algorithm to only one order picking policy. This decision was made in awareness of the limited expressiveness of our results but was necessary, as the majority of policies in the literature violate assumptions made in this work. Fourth, we only evaluate our NSGA-II and ACO algorithms against heuristic policies. Hence, they should also be evaluated against other optimization methods, such as other evolutionary optimization algorithms or graph-based optimization techniques; we leave this evaluation for future work. Finally, we performed fewer than 30 runs for each setting due to the complexity and runtime of the runs. Thirty or more runs might yield more stable and reliable results. However, we report the standard deviations, and as those are rather low, we assume that the results are already stable.

8 Conclusion

Due to the complexity of the storage assignment and the order picking problem, efficient optimization algorithms are required to find satisfactory solutions within reasonable times. This paper proposes an NSGA-II algorithm for optimizing the storage assignment problem and an ACO algorithm for optimizing the order picking problem in mezzanine warehouses. The algorithms incorporate knowledge about the interdependency between both problems to improve the overall warehouse performance. Besides optimizing economic constraints, the algorithms also optimize ergonomic criteria, as mezzanine warehouses represent labor-intensive working environments in which the employees account for a large part of the warehouse performance. We evaluate the NSGA-II algorithm against three storage assignment policies frequently applied in practice: the random, the closest open location, and the rank-based policy. The evaluation results show that the NSGA-II algorithm outperforms the alternatives already in smaller warehouses, and the larger the warehouse, the more clearly the NSGA-II algorithm prevails against the alternative policies. We evaluate the ACO algorithm against the S-Shape heuristic, which is frequently applied in practice. Our evaluation results show that the ACO algorithm outperforms the S-Shape heuristic in all tested warehouse sizes. Finally, we evaluate the interaction between the NSGA-II and the ACO algorithm. The evaluation results show that both ACO variants find better pick routes if the warehouse assigns its products by applying the NSGA-II algorithm instead of the random storage strategy; thus, the NSGA-II and the ACO algorithm interact well with each other.

In the future, we plan to integrate additional features to further increase the applicability of the storage assignment. First, we want to allow state changes of the mezzanine warehouse while the storage assignment is running, which would render some of the solutions in the Pareto front infeasible. Further, we plan to parallelize the storage assignment algorithm so that the sequential assignment of products is replaced and the execution times decrease. Regarding the order picking algorithm, we also aim at parallelizing the execution to reduce the required calculation times. Finally, we want to investigate integrating forecasts (e.g., based on our previous work [33]) of future assignment and order picking tasks to proactively relocate goods within the storage that will be ordered in the near future. Additionally, a promising option is to study further optimization algorithms and different parameter settings. On the one hand, one option is to analyze additional optimization techniques. For example, some multi-objective evolutionary algorithms with strong performance have been reported recently, such as multi-strategy co-evolutionary differential evolution for mixed-variable optimization, firefly algorithms with courtship learning, or multi-objective artificial bee colony algorithms. However, those algorithms were applied in different domains; still, it would be interesting future work to analyze them in our target domain. On the other hand, we showed in previous work that choosing the best optimization algorithm is situation-dependent [18] and, hence, should be modeled as a self-adaptive system [15]. Implementing such an approach could also help to identify the best-fitting algorithms for a new warehouse design.


Alaya I, Solnon C, Ghedira K (2007) Ant colony optimization for multi-objective optimization problems. In: 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), vol 1, pp 450–457

Bartholdi JJ III, Hackman ST (2019) Warehouse and distribution science. Supply chain and logistics institute, school of industrial and systems engineering Georgia institute of technology

Calzavara M, Glock CH, Grosse EH, Sgarbossa F (2019) An integrated storage assignment method for manual order picking warehouses considering cost, workload and posture. Int J Prod Res 57(8):2392–2408. https://doi.org/10.1080/00207543.2018.1518609


Chen F, Wang H, Qi C, Xie Y (2013) An ant colony optimization routing algorithm for two order pickers with congestion consideration. Comput Ind Eng 66(1):77–85

Daniels RL, Rummel JL, Schantz R (1998) A model for warehouse order picking. Eur J Oper Res 105(1):1–17


Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197

Ene S, Öztürk N (2012) Storage location assignment and order picking optimization in the automotive industry. The Int J Adv Manuf Technol 60(5-8):787–797

van Gils T, Ramaekers K, Caris A, de Koster RBM (2018) Designing efficient order picking systems by combining planning problems: State-of-the-art classification and review. Eur J Oper Res 267:1–15


Gu J, Goetschalckx M, McGinnis LF (2010) Research on warehouse design and performance evaluation: a comprehensive review. Eur J Oper Res 203(3):539–549

Izquierdo J, Campbell E, Montalvo I, Pérez-García R (2016) Injecting problem-dependent knowledge to improve evolutionary optimization search ability. J Comput Appl Math 291:281–292

Kofler M, Beham A, Wagner S, Affenzeller M, Reitinger C (2010) Reassigning storage locations in a warehouse to optimize the order picking process. In: Proceedings of the 22th european modeling and simulation symposium, pp 77–82

de Koster MBM (2007) Warehouse assessment in a single tour. In: Facility logistics, pp 53–74. Auerbach publications

de Koster R, Le-Duc T, Roodbergen KJ (2007) Design and control of warehouse order picking: a literature review. Eur J Oper Res 182(2):481–501

Kovács A (2011) Optimizing the storage assignment in a warehouse served by milkrun logistics. Int J Prod Econ 133(1):312–318

Krupitzer C, Roth FM, VanSyckel S, Schiele G, Becker C (2015) A survey on engineering approaches for Self-Adaptive systems. Pervasive and Mobile Computing Journal 17(Part B):184–206

Lee IG, Chung SH, Yoon SW (2020) Two-stage storage assignment to minimize travel time and congestion for warehouse order picking operations, vol 139

Leng J, Yan D, Liu Q, Zhang H, Zhao G, Wei L, Zhang D, Yu A, Chen X (2021) Digital twin-driven joint optimisation of packing and storage assignment in large-scale automated high-rise warehouse product-service system. Int J Comput Integr Manuf 34(7-8):783–800

Lesch V, Noack T, Hefter J, Kounev S, Krupitzer C (2021) Towards situation-aware meta-optimization of adaptation planning strategies. In: 2021 IEEE International conference on autonomic computing and self-organizing systems (ACSOS), pp 177–187. https://doi.org/10.1109/ACSOS52086.2021.00042

Li M, Chen X, Liu C (2008) Pareto and niche genetic algorithm for storage location assignment optimization problem. In: 3rd international conference on innovative computing information and control, pp 465–465. IEEE

Manzini R, Gamberi M, Persona A, Regattieri A (2007) Design of a class based storage picker to product order picking system. Int J Adv Manuf Technol 32:811–821

Mirzaei M, Zaerpour N, de Koster RB (2022) How to benefit from order data: correlated dispersed storage assignment in robotic warehouses. Int J Prod Res 60(2):549–568. https://doi.org/10.1080/00207543.2021.1971787

Petersen CG (1997) An evaluation of order picking routeing policies. Int J Oper Prod Managt 17(11):1098–1111

Petersen CG, Siu C, Heiser DR (2005) Improving order picking performance utilizing slotting and golden zone storage. Int J Oper Prod Manag 25(10):997–1012

Petersen CG II, Schmenner RW (1999) An Evaluation of Routing and Volume-based Storage Policies in an Order Picking Operation. Decis Sci 30(2):481–501

Ratliff HD, Rosenthal AS (1983) Order-picking in a rectangular warehouse: a solvable case of the traveling salesman problem. Oper Res 31(3):507–521

Roodbergen KJ, de Koster R (2001) Routing methods for warehouses with multiple cross aisles. Int J Prod Res 39(9):1865–1883

Shqair M, Altarazi S, Al-Shihabi S (2014) A statistical study employing agent-based modeling to estimate the effects of different warehouse parameters on the distance traveled in warehouses. Simul Model Pract Theory 49:122–135

Sooksaksun N, Kachitvichyanukul V, Gong DC (2012) A class-based storage warehouse design using a particle swarm optimisation algorithm. Int J Oper Res 13(2):219–237

Vaughan T (1999) The effect of warehouse cross aisles on order picking efficiency. Int J Prod Res 37(4):881–897

Wang M, Zhang RQ, Fan K (2020) Improving order-picking operation through efficient storage location assignment: a new approach. Comput Ind Eng 139:106186. https://doi.org/10.1016/j.cie.2019.106186

Wang S, Ali S, Yue T, Li Y, Liaaen M (2016) A practical guide to select quality indicators for assessing pareto-based search algorithms in search-based software engineering. In: Proceedings of the 38th international conference on software engineering, pp 631–642

Xing B, Gao WJ, Nelwamondo FV, Battle K, Marwala T (2010) Ant colony optimization for automated storage and retrieval system. In: IEEE Congress on evolutionary computation, pp 1–7. IEEE

Zuefle M, Bauer A, Lesch V, Krupitzer C, Herbst N, Kounev S, Curtef V (2019) Autonomic forecasting method selection: Examination and ways ahead. In: 2019 IEEE International conference on autonomic computing (ICAC), pp 167–176. https://doi.org/10.1109/ICAC.2019.00028


Open Access funding enabled and organized by Projekt DEAL. No funding was received to assist with the preparation of this manuscript.

Author information

Authors and Affiliations

University of Würzburg, Würzburg, Germany

Veronika Lesch, Marius Hadry & Samuel Kounev

University of Applied Sciences Würzburg-Schweinfurt, Würzburg, Germany

Patrick B.M. Müller

io-consultants GmbH & Co. KG, Heidelberg, Germany

Moritz Krämer

University of Hohenheim, Stuttgart, Germany

Christian Krupitzer


Corresponding author

Correspondence to Veronika Lesch.

Ethics declarations

Conflict of interest.

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Variable Summary

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Lesch, V., Müller, P.B., Krämer, M. et al. Optimizing storage assignment, order picking, and their interaction in mezzanine warehouses. Appl Intell 53 , 18605–18629 (2023). https://doi.org/10.1007/s10489-022-04443-x


Accepted : 28 December 2022

Published : 06 February 2023

Issue Date : August 2023

DOI : https://doi.org/10.1007/s10489-022-04443-x


Keywords

  • Storage assignment
  • Order picking
  • Interaction
  • Genetic algorithm
  • Ant colony optimization
  • Mezzanine warehouse