Optimizing the large-scale bin packing problem with a hybrid harmony search algorithm

The bin packing problem (BPP) is a combinatorial optimization problem with a wide range of applications in fields such as financial budgeting, load balancing, project management and supply chain management. The harmony search algorithm (HSA) is widely used for various real-world and engineering problems due to its simplicity and efficient problem-solving capability. The literature shows that the basic HSA needs improved search capability, as the performance of the algorithm degrades as problem complexity increases. This paper presents an HSA with improved exploration and exploitation capability, coupled with a local iterative search based on a random swap operator, for solving the BPP. The study uses the despotism-based approach presented by Yadav et al. (2012) [Yadav, P., Kumar, R., Panda, S. K., & Chang, C. S. (2012). An intelligent tuned harmony search algorithm for optimisation. Information Sciences, 196, 47-72.] to divide the harmony memory (HM) into two categories, which helps to maintain a balance between exploration and exploitation. Secondly, the local iterative search explores multiple neighborhoods by exponentially swapping components of solution vectors. A problem-specific HM representation, an HM re-initialization strategy and two adaptive PAR strategies are tested. The performance of the proposed HSA is evaluated on 180 benchmark instances consisting of 100, 200 and 500 objects. Evaluation metrics such as best, mean, success rate, acceleration rate and improvement measures are used to compare the HSA variations. The HSA with iterative local search outperforms the other two variations of the HSA.


Introduction
The Bin Packing Problem (BPP) is an NP-hard problem and is considered one of the most important optimization problems in computer science. The objective of the BPP is to allocate every item to a bin in such a way that the combined weight of the items in each bin does not exceed the bin's capacity and the minimum number of bins is used. There are numerous applications of bin packing in the real world, for example in industries involved in cutting wood, glass and paper (Hopper & Turton, 1999). Perboli et al. (2014) showed the application of the bin packing problem in packing and routing. Song et al. (2013) used the bin packing problem to solve resource provisioning in the cloud. Coffman et al. (1978) solved multiprocessor scheduling using bin packing. De Cauwer et al. (2016) applied bin packing for managing workloads in data centers. Leinberger et al. (1999) provided packing algorithms for multi-resource allocation and scheduling solutions. Paquay et al. (2016) solved the multiple bin size bin packing problem from air transportation. The BPP problem formulation is well known and presented in many papers (Fleszar & Hindi, 2002; Alvim et al., 2004). The mathematical formulation of the 1D BPP can be modelled in the following manner. Given a set of x items I = {i1, i2, ..., ix}, where item ij has weight wj, and a set of y bins B = {b1, b2, ..., by}, each with capacity C, the objective is to find an assignment that minimizes the number of bins used. For every bin bk the assignment should be such that the sum of the weights of the items in bk does not exceed the capacity C. In mathematical terms it can be written as

minimize z = sum_{k=1}^{y} uk    (1)

subject to

sum_{j=1}^{x} wj * ajk <= C * uk,  for k = 1 to y    (2)
sum_{k=1}^{y} ajk = 1,  for j = 1 to x
uk = 1 if bin bk is used, 0 otherwise
ajk = 1 if item ij is placed in bin bk, 0 otherwise

Various deterministic approaches have been proposed for solving the bin packing problem. Coffman et al. (1980) analyzed the Next-Fit Decreasing-Height (NFDH) and First-Fit Decreasing-Height (FFDH) algorithms. A Hybrid First Fit (HFF) algorithm was presented in (Chung et al., 1982). Simchi-Levi (1994) showed that the Best-Fit (BF) and First-Fit (FF) algorithms have a worst-case ratio of 7/4. The BPP is NP-hard, which means that no polynomial-time algorithms are known for it, and exact algorithms therefore often do not provide optimal solutions for large instances. The computational time of exact methods for such optimization problems increases exponentially, which leads to issues in practical applications (Stutzle, 1999; Gary & Johnson, 1979). In the last four decades, multiple papers have shown the benefits of approximate methods, which provide suboptimal solutions in finite time (Stutzle, 1999). In the literature, several meta-heuristic algorithms have been investigated for solving the bin packing problem. An evolutionary randomized heuristic algorithm was presented for solving 2D bin packing (Blum & Schmid, 2013). Wong & Lee (2009) proposed an improved Lowest Gap Fill heuristic placement routine for solving the 2D bin packing problem. Various hybrid heuristic algorithms for solving bin packing problems can be found in (Frenk & Galambos, 1987; Monaci & Toth, 2006; Baker et al., 1980). Hybrid tabu search approaches were presented for solving two-dimensional bin packing problems (Lodi et al., 1999a; Lodi et al., 1999b; Lodi et al., 1999c). Parreño et al. (2010) proposed a GRASP/VND algorithm for solving the container loading problem, a three-dimensional bin packing problem, and the results proved to be better than those of existing algorithms. The HSA is a population-based heuristic optimization algorithm in which the improvisation strategy is similar to that of music players. The HSA has been utilized in several research problems. Hasanipanah et al.
(2020) proposed a novel hybrid artificial neural network with an adaptive HSA for approximating blast-induced flyrock. Chen et al. (2012) presented a dynamic harmony search to minimize makespan for the identical parallel machines scheduling problem. Manjarres et al. (2013) showed several applications of the HSA. In order to solve the BPP, a variant of the harmony search algorithm is proposed which uses an adaptive PAR strategy with improved intensification-diversification and a random operator. A problem-specific HM representation and an HM re-initialization strategy are also proposed, and two different PAR strategies are tested. Finally, the performance of the proposed algorithm is evaluated on 180 benchmark instances and compared with two variants of the HSA. These benchmark instances consist of 100, 200 and 500 objects of varying sizes. Various evaluation metrics are used to measure the effectiveness of the proposed algorithm.
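To make the classical deterministic heuristics mentioned above concrete, the following Python sketch (the function name and sample data are ours, for illustration only) implements the First-Fit Decreasing rule: items are sorted by non-increasing weight and each item is placed in the first bin with enough remaining capacity, opening a new bin when none fits.

```python
def first_fit_decreasing(weights, capacity):
    """Pack items with the First-Fit Decreasing heuristic and
    return the number of bins used."""
    bins = []  # each entry is the remaining free capacity of one bin
    for w in sorted(weights, reverse=True):
        for i, free in enumerate(bins):
            if w <= free:          # first bin that can still hold the item
                bins[i] = free - w
                break
        else:
            bins.append(capacity - w)  # no bin fits: open a new one
    return len(bins)

# Example: 8 items with total weight 34 and bin capacity 10
print(first_fit_decreasing([7, 6, 5, 5, 4, 3, 2, 2], 10))  # -> 4
```

Here the heuristic reaches the lower bound of ceil(34/10) = 4 bins, but in general FF-type heuristics only guarantee the worst-case ratios cited above.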
The remainder of this paper is structured as follows: Section 2 introduces the basic HSA and related work. In Section 3, the proposed HSA is discussed. Section 4 provides details on the datasets used and compares the results obtained during the experiments using different evaluation metrics. Finally, the paper is concluded in Section 5.

Harmony search algorithm and related work
The HSA, first presented by Geem et al. (2001), is based on the improvisation strategy of music players. A musician's goal is to find a perfect harmony, which is analogous to finding the global solution determined by the objective function in an optimization problem. The HSA is an evolutionary stochastic global optimization method related to particle swarm optimization, genetic algorithms, etc. In the process of improvising music, the notes represent the components of a harmony vector, i.e. a solution vector S of size N representing all the notes. If all the notes contribute to a good harmony, that solution vector is kept in the memory, as shown in Fig. 1.

Fig. 1. Harmony vector representation with musical notes
Good harmonies are kept in the memory, whereas the worst harmonies are replaced by new solutions that are better than the previous ones. This is how improvisation occurs. These harmonies correspond to the fitness function of an optimization problem. Fig. 2 shows the optimization procedure of the basic harmony search algorithm. In total, 5 steps are involved.

Fig. 2. Basic harmony search algorithm procedure
Step 1: This is the primary step of the basic HSA, in which various harmony search parameters are initialized: the harmony memory consideration rate (HMCR), the pitch adjustment rate (PAR), the maximum number of iterations (i.e. the termination criterion) and the harmony memory size (HMS).
Step 2: In this step the harmony memory, i.e. the solution vectors (S1, S2, …, SN), is initialized randomly. Fig. 3 shows the general harmony memory matrix, where N is the HMS.

Fig. 3. Harmony memory matrix
Step 3: Once the harmony memory is filled with solution vectors, it is improvised in this step. Improvisation is based on the HMCR, the PAR and randomization. The HMCR helps to maintain the balance between intensification and diversification, which plays an important role in the HSA. The PAR and randomization maintain diversification in the HSA (Yang, 2009). Based on these three parameters, a new solution vector is generated. A solution from the HM is selected with probability HMCR, and then each member of that solution is adjusted with probability PAR, where HMCR ∈ [0, 1] and PAR ∈ [0, 1].
Step 4: If the solution vector newly generated in step 3 is better than the worst solution vector present in the HM, the HM is updated with the new solution.
Step 5: Repeat steps 3 and 4 until the maximum number of iterations is reached.
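The five steps above can be sketched as a minimal basic HSA for a continuous minimization problem. This is an illustrative Python sketch (function names, default parameter values and the sphere test function are ours), not the paper's C implementation:

```python
import random

def harmony_search(objective, dim, lower, upper,
                   hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   max_iter=2000, seed=1):
    """Basic HSA: minimize `objective` over [lower, upper]^dim."""
    rng = random.Random(seed)
    # Steps 1-2: set parameters (arguments above), fill the HM randomly
    hm = [[rng.uniform(lower, upper) for _ in range(dim)] for _ in range(hms)]
    fit = [objective(h) for h in hm]
    for _ in range(max_iter):
        # Step 3: improvise a new harmony via HMCR, PAR and randomization
        new = []
        for j in range(dim):
            if rng.random() < hmcr:              # memory consideration
                x = hm[rng.randrange(hms)][j]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                # random selection
                x = rng.uniform(lower, upper)
            new.append(min(max(x, lower), upper))
        # Step 4: replace the worst harmony if the new one is better
        worst = max(range(hms), key=fit.__getitem__)
        f_new = objective(new)
        if f_new < fit[worst]:
            hm[worst], fit[worst] = new, f_new
    # Step 5 is the loop bound above; return the best harmony found
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]

# Minimize the 2D sphere function as a toy example
sol, val = harmony_search(lambda v: sum(x * x for x in v),
                          dim=2, lower=-5.0, upper=5.0)
```

For this toy run the best harmony ends up close to the global minimum at the origin, illustrating how memory consideration and pitch adjustment gradually concentrate the HM around good regions.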
As stated earlier, the HSA has two important parameters: HMCR and PAR. These are probabilistic parameters, in which the HMCR controls the exploration and exploitation rate and the PAR is the probability with which the value selected from memory is adjusted. Fine-tuning the HSA with proper parameter values is a challenging task. This has led to variations of the HSA with adaptive parameter settings (Moh'd Alia & Mandava, 2011). In the literature, a large number of papers improved the performance of the HSA by fine-tuning its parameter settings and by hybridization with other heuristic algorithms (Wang et al., 2011; Wu et al., 2012; Pandi & Panigrahi, 2011). Several variants of the HSA have been proposed for solving optimization problems. An improved harmony search (IHS) was proposed by Mahdavi et al. (2007) which uses a linearly increasing adaptive PAR. Omran & Mahdavi (2008) addressed the drawbacks of IHS and proposed global-best harmony search (GHS), which improvises the harmony memory by replacing the bandwidth parameter. Wang & Huang (2010) proposed dynamic selection of parameters in which the PAR value is decreased linearly. Chakraborty et al. (2009) replaced the PAR operator with a mutation operator inspired by the Differential Evolution algorithm. Hasançebi et al. (2010) and Saka & Hasançebi (2009) proposed dynamic tuning of the HMCR and PAR parameters in the improvisation process: the HMCR and PAR values start at zero and then adapt dynamically during the HS process. Geem et al. (2005) and Al-Betar et al. (2010) presented multiple PAR strategies in HS. A simulated annealing strategy has been used to update the PAR values (Taherinejad, 2009), and a dispersed PSO component has also been used for this purpose (Coelho & Bernert, 2009). El-Abd (2013) presented a novel strategy for adaptively changing the PAR and BW values. A binary harmony search algorithm with adaptive parameter setting was applied to solve large-scale 0-1 knapsack problems (Kong et al., 2015).
ABHS tunes the HMCR and PAR based on feedback from the evolutionary search (Guo et al., 2018).

Hybrid adaptive harmony search algorithm
This section presents the harmony memory representation and the harmony memory initialization strategy. It also discusses the re-initialization strategy and the proposed HSA with improved intensification-diversification capability and random-operator-based iterative local search.

Harmony memory representation
In the 1D BPP we have k bins B1, B2, …, Bk with equal or varying capacities and O objects, each with a different weight. The objective is to allocate all objects to these bins in such a way that a minimum number of bins is used. A real-coded solution representation for solving the 1D BPP is shown in (Adamuthe & Nitave, 2020). Fig. 4 shows the solution representation with 4 objects and 2 bins, where Object1 and Object3 are assigned to Bin1 and the remaining two objects are assigned to Bin2. This forms one solution vector in which every object is assigned to exactly one bin without violating the constraints, and the objective is to minimize the number of bins used.
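Following the representation of Fig. 4, a harmony vector can be stored as an integer array indexed by object, where each entry is the bin assigned to that object. A small Python sketch (the weights, capacity and helper names are hypothetical, for illustration):

```python
# One harmony (solution) vector: position j holds the bin assigned to object j.
solution = [1, 2, 1, 2]   # Object1, Object3 -> Bin1; Object2, Object4 -> Bin2
weights = [4, 5, 3, 2]    # hypothetical object sizes
capacity = 8              # hypothetical bin capacity

def bins_used(solution):
    """Objective component: number of distinct bins appearing in the vector."""
    return len(set(solution))

def bin_load(solution, weights, b):
    """Total weight currently packed into bin b."""
    return sum(w for assigned, w in zip(solution, weights) if assigned == b)

# Both bins carry load 7 <= 8, so this vector is feasible and uses 2 bins.
```

Each harmony memory row is one such vector, so feasibility and the bin count can be checked independently per row.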

Harmony memory initialization
For initializing the harmony memory, a first-fit strategy with randomization is used. A bin B is assigned to an object O such that

bin_size_B + obj_size_O <= C    (3)

where bin_size_B is the current load of bin B, obj_size_O is the size of object O and C is the bin capacity. If the constraint in Eq. (3) fails, a bin is randomly assigned to the object. If the randomly assigned bin B cannot accommodate object O without satisfying the above constraint, a penalty equivalent to Eq. (4) is applied:

penalty = C − (bin_size_B + obj_size_O)    (4)
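A sketch of this initialization, assuming a fixed pool of candidate bins (the function name, parameter names and sample data are ours; the overflow penalty of Eq. (4) would be applied later, during fitness evaluation):

```python
import random

def init_solution(weights, capacity, num_bins, rng):
    """First-fit with randomization: place each object into the first bin
    whose load plus the object's size stays within capacity (Eq. 3);
    if no such bin exists, fall back to a random bin (may overflow)."""
    load = [0] * num_bins
    sol = []
    for w in weights:
        for b in range(num_bins):
            if load[b] + w <= capacity:
                break                       # first-fit succeeds
        else:
            b = rng.randrange(num_bins)     # random fallback, penalized later
        load[b] += w
        sol.append(b)
    return sol, load

sol, load = init_solution([4, 5, 3, 2], capacity=8, num_bins=3,
                          rng=random.Random(0))
```

In this small case first-fit always succeeds, producing the assignment [0, 1, 0, 1] with bin loads [7, 7, 0]; repeating the routine per HM row fills the harmony memory.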

Harmony search with increasing PAR
In this variant of the HSA the value of the pitch adjusting rate is increased with every generation. The value of PAR changes as per Eq. (5), given in (Mahdavi et al., 2007):

PAR(Gen) = PARmin + ((PARmax − PARmin) / NI) × Gen    (5)
where PAR is the pitch adjusting rate for each iteration/generation, NI is the maximum number of iterations, PARmin is the initial and minimum value of the pitch adjusting rate, PARmax is the maximum value of the pitch adjusting rate, and Gen is the iteration/generation number. In step 1, i.e. the initialization of parameters, the values of PARmin and PARmax are set to 0.45 and 1.0 respectively, whereas the initial value of PAR is set equal to PARmin. This variant of harmony search is referred to as version 1 in this paper.

Harmony search with decreasing PAR
In this version of adaptive harmony search the value of PAR is decreased as the iteration count increases, according to Eq. (6):

PAR(Gen) = PARmax − ((PARmax − PARmin) / NI) × Gen    (6)
For this variant of harmony search the values of PARmin and PARmax are set to 0.85 and 1.0 respectively. This variant of the HSA is referred to as version 2 in this paper. Fig. 6 shows the change in the value of PAR with the iteration count for version 1 and version 2 of the HSA. Compared to the traditional HSA, these two versions use a dynamic PAR, which enhances the performance of the HSA.
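The two schedules can be sketched as follows. The linear increasing form follows Eq. (5) of Mahdavi et al. (2007); the mirrored linear decreasing form for version 2 is our assumption, consistent with the description of Fig. 6. Function names and defaults are ours:

```python
def par_increasing(gen, ni, par_min=0.45, par_max=1.0):
    """Version 1: PAR grows linearly from par_min to par_max over NI iterations."""
    return par_min + (par_max - par_min) * gen / ni

def par_decreasing(gen, ni, par_min=0.85, par_max=1.0):
    """Version 2 (assumed form): PAR shrinks linearly from par_max to par_min."""
    return par_max - (par_max - par_min) * gen / ni
```

Evaluating either function at the current generation before step 3 of the HSA gives the PAR to use for that improvisation.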

Fig. 6. Change in PAR with iteration
The pitch adjustment rate parameter in the HSA helps to boost the diversification capability, as it controls the calibration of the components of the solution vectors in the HM. The calibration works by either adding or removing assigned bins from an existing component. Good harmonies are retained while the others are fine-tuned.

Re-initialization strategy
As the complexity of the problem increases it becomes difficult to find optimal solutions, because the algorithm stops converging once the components of the solution vectors become saturated and the scope for exploration decreases. So, in order to induce diversity in the solution vectors, a re-initialization strategy was proposed by Adamuthe & Nitave (2020). For re-initializing the harmony memory, n solution vectors are selected from the HM and re-initialized according to the HM initialization strategy discussed previously. Deciding the value of n is very important, because too high a value will destroy previously converged solution vectors and too low a value will not create diversity effectively. Setting the threshold parameter and updating the threshold while the algorithm is converging is also a crucial task, and this threshold can differ between datasets. Fig. 7 shows the diversity in the harmony memory after the first iteration and Fig. 8 shows the diversity in the harmony memory after 1000 iterations on a sample instance with 500 objects and a harmony memory size of 100. As can be seen in Fig. 8, the solution vectors have converged and look identical to one another, and thus fail to converge further. Algorithm 1 shows the pseudocode for the re-initialization strategy, where n is the number of solution vectors from the HM that should be selected for re-initialization. As described earlier, the value of n is important for the re-initialization strategy. There is one more parameter, namely the threshold, which decides how often the re-initialization strategy is executed. If the threshold is too high, the re-initialization module is not executed frequently, which results in delayed convergence; if it is too low, all the solutions in the HM will be destroyed before converging. The value of the threshold depends strongly on the size and complexity of the optimization problem; it is simply the iteration value at which the re-initialization module is executed.
In random consideration, the components of the new vector are generated at random, with the same level of efficiency as in other algorithms that use randomization; this property allows HS to explore new regions of the search space that may not have been visited. The re-initialization step of Algorithm 1 can be summarized as follows:

    if itr > threshold
        re-initialize the harmony memory to introduce randomization
        update threshold
    end

However, this variant of harmony search is not able to find optimal solutions as the dimension of the optimization problem increases. In order to solve the 1D BPP with 500 objects, another variant of the HSA is proposed and discussed in the next section.
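A minimal sketch of the re-initialization strategy, with hypothetical helper names (`init_fn` would build one fresh solution vector using the first-fit initialization routine):

```python
import random

def reinitialize(hm, n, init_fn, rng):
    """Algorithm 1 sketch: replace n randomly chosen solution vectors in the
    harmony memory with freshly initialized ones to restore diversity."""
    for idx in rng.sample(range(len(hm)), n):
        hm[idx] = init_fn()
    return hm

# In the main loop (names hypothetical):
#   if itr > threshold:
#       reinitialize(hm, n, init_fn, rng)
#       threshold += step    # update the threshold

hm = [[0, 0] for _ in range(5)]               # a saturated HM of identical rows
reinitialize(hm, 2, lambda: [9, 9], random.Random(1))
```

After the call, exactly n = 2 of the 5 rows have been rebuilt, while the remaining converged rows are preserved, which is the balance the choice of n controls.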

Harmony search with iterative local search
As the number of objects to be packed increases, the minimum number of required bins also increases. This makes the search space bigger, and therefore versions 1 and 2 of the HSA fail to converge. In order to solve the 1D bin packing problem with a large number of objects, a harmony search algorithm with iterative local search is proposed. It works on the concept of despotism, which means that decisions are driven by the dominant (best) solution vector, so we use the objective value represented by Eq. (7).

best = arg min_{j = 1, ..., HMS} f(Sj)    (7)

where best represents the index of the best solution vector in the harmony memory. This strategy divides the solution vectors into two categories, such that the solutions in the first category have objective values less than HMmean and the solutions in the second category have values greater than HMmean, where HMmean is the mean objective value of the entire harmony memory. The first category is responsible for intensification and diversification, whereas the latter is responsible for diversification only. In Eq. (8) and Eq. (9), yi_best and yi_worst are the i-th elements of the best and worst solution vectors in the harmony memory. Eq. (8) lets the algorithm search between the selected element yi and yi_best, and it is responsible for intensification. Eq. (9) is responsible for diversification: when the selected element yi is close to yi_worst, the term (yi_worst − yi) becomes smaller, so adding yi_best moves the element closer to yi_best; on the other hand, if the selected element yi is far from yi_worst, the value moves away from yi_best. For the solution vectors belonging to the second category, which are responsible only for diversification, the pitch adjustment strategy is given by Eq. (10), where m is used to store the index of a solution vector from the harmony memory (Yadav et al., 2012).
The intensification and diversification strategies proposed in Eqs. (8), (9) and (10) improve the convergence of the HSA and help in finding the region of the best solution, but fail to provide the best solution itself. To solve the problem further, an iterative local search based on a random swap operation is proposed. This hybridization is a relatively simple yet very effective strategy for solving the BPP. The iterative local search enhances performance by exploring multiple neighborhoods in the region of the best solution. In this strategy, N swaps are performed on the components of a selected solution vector. The value of N is adaptive and changes with the objective values of the solution vectors in the HM. Fig. 9 shows that as the proposed algorithm approaches the best solution, the number of swaps performed increases exponentially. This exponential increase in the number of swaps helps the algorithm to find the solution. Fig. 10 shows the number of swaps performed for six different datasets with 500 objects. Algorithm 3 shows the detailed pseudocode for the iterative local search strategy. The variable row_index represents the solution vector to be selected from the HM. After this, the value of N is calculated, two objects are selected randomly, and a swap is performed between the bins of the two selected objects. The newly generated solution replaces the previous solution in the HM if it is better. Exponential swaps are performed to find the optimal solutions. Algorithm 4 shows the detailed pseudocode for the proposed HSA.
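The random-swap step can be sketched as below. The objective function used here, bins used plus a heavy overflow penalty, is our illustrative assumption (the paper's exact objective and penalty are given by Eqs. (1)-(4)), and the acceptance rule mirrors the paper: a swap is kept only if the new solution is better:

```python
import random

def objective(sol, weights, capacity):
    """Illustrative objective (assumed): bins used plus penalized overflow."""
    loads = {}
    for b, w in zip(sol, weights):
        loads[b] = loads.get(b, 0) + w
    overflow = sum(max(0, l - capacity) for l in loads.values())
    return len(loads) + 10 * overflow

def local_search(sol, weights, capacity, n_swaps, rng):
    """Random-swap iterative local search: swap the bins of two randomly
    chosen objects N times, keeping a swap only when it improves the
    objective value."""
    best = list(sol)
    best_val = objective(best, weights, capacity)
    for _ in range(n_swaps):
        cand = list(best)
        i, j = rng.randrange(len(cand)), rng.randrange(len(cand))
        cand[i], cand[j] = cand[j], cand[i]     # swap the two bin assignments
        val = objective(cand, weights, capacity)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val

# Start from an overloaded assignment: bin 0 holds 6 + 6 = 12 > capacity 10
start = [0, 0, 1, 1]
weights = [6, 6, 2, 2]
best, val = local_search(start, weights, 10, 50, random.Random(0))
```

Because swapping exchanges existing bin labels, the move explores rearrangements of the current packing; with an adaptive N (as in Fig. 9) more swaps are invested as the algorithm nears the best solution.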

Results and performance evaluation
This section gives a detailed explanation of the datasets used for the experiments and the results obtained. The proposed algorithm is implemented in C and tested on a computer with the following specifications: Ubuntu 18.04 LTS, Intel Core i5-8265U CPU at 1.6 GHz with 8 GB RAM. Every experiment was executed 10 times. A set of benchmark datasets is used for evaluating the proposed algorithm. The datasets can be found at https://www2.wiwi.unijena.de/Entscheidung/binpp/index.htm. These datasets were prepared as per Martello and Toth (1990) and the best known solutions are already provided. The naming conventions of the datasets are as follows.

), z=4 (Wj from [30, 100]) i=A to T with 20 instances of each class
In all, there were 180 instances, with x = 3 and 4, y = 1, and z = 1, 2, 4. Every instance consists of the number of objects, the bin capacity and the size of each object, separated by new lines. Table 1 shows the structure of the input data. The optimal solutions for these datasets can be found in (Fleszar and Hindi, 2002; Schoenfield, 2002; Alvim et al., 2004; Schwerin and Wäscher, 1997). To evaluate the performance of all the variants of harmony search, this paper uses the mean, the standard deviation and the best, where best is the most converged solution in the harmony memory. Suganthan et al. (2005) presented metrics for evaluating the performance of algorithms, namely the convergence graph, the average number of function evaluations (NFEs), the success rate (SR), the acceleration rate (AR) and the improvement. These parameters are calculated using the following equations.
SR = (number of successful runs / total number of runs) × 100    (11)

The success rates of the three versions of the harmony search algorithm are shown in Table 4. For the experimentation, 180 instances are used. Versions V1, V2 and V3 work well for instances with 100 objects. The results show that the success rates of V1 and V2 decrease for instances with 500 objects. Version 3 has a success rate of 100% for more than 80% of the instances, whereas V2 has a success rate of approximately 56% and V1 has a success rate of approximately 6%. Figs. 11, 12 and 13 show the convergence of the algorithm for version 1, version 2 and version 3 respectively. It is evident that the proposed algorithm converges faster than the other two algorithms. From the results it can be concluded that the proposed version V3 requires fewer iterations than the other two variants to reach optimal values. On the other hand, there is not much difference between the performance of V1 and V2. Figs. 14, 15 and 16 show the box plots for the three harmony search algorithm variants on three instances with 500 objects. The box plots show that the objective values of the proposed variation V3 lie in the range of approximately 0 to -50, whereas the range for the other two variants is approximately 0 to -175. From this it can be concluded that the solution vectors in the harmony memory of the proposed algorithm are more converged.

Fig. 11. Convergence of Version 1
Fig. 12. Convergence of Version 2

Tables 5 and 6 show the function evaluations, acceleration rate and improvement of the proposed harmony search algorithm w.r.t. the other two versions. Sample results are presented on 16 instances, of which 6 instances have 500 objects and 10 instances have 200 objects. The results show that the acceleration rate of the proposed variation V3 is better than that of the other two variants for instances with 500 objects. The acceleration rates of V3 and V2 are approximately similar for instances with 200 objects.
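The success rate of Eq. (11) and an acceleration-rate measure can be computed as below. The AR form, as the ratio of function evaluations needed by a competing variant to those of the proposed variant, is our reading of Suganthan et al. (2005); function names are ours:

```python
def success_rate(successful_runs, total_runs):
    """SR (Eq. 11): percentage of runs that reached the best-known solution."""
    return successful_runs / total_runs * 100.0

def acceleration_rate(nfe_other, nfe_proposed):
    """AR (assumed form): how many times fewer function evaluations the
    proposed variant needs compared with a competing variant."""
    return nfe_other / nfe_proposed

# e.g. 8 successful runs out of 10 gives SR = 80%;
# 2000 vs. 500 function evaluations gives AR = 4x
```

An AR greater than 1 means the proposed variant reaches the target with fewer evaluations than the competitor.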
The effect of the harmony memory size on the performance of the HS variations is tested by changing the size from 30 to 100. Figs. 17, 18 and 19 show the results of the HS variations with changing harmony memory size. The results show that all three versions of harmony search with problem-specific representation, adaptive PAR and the re-initialization strategy perform well with a small harmony memory size.

Conclusions
The bin packing problem is an NP-hard combinatorial optimization problem and has several applications in the real world. Due to its large search space, deterministic algorithms fail to provide optimal solutions. This paper presents a variant of the HSA, a meta-heuristic algorithm, for solving the 1D bin packing problem. The HSA is a widely used algorithm for solving various optimization problems. It has two parameters, namely HMCR and PAR, which control the exploration and exploitation capability. In the basic HSA these parameters are static and therefore fail to give optimal solutions for problems of higher complexity. In this paper a variant of the HSA is proposed which uses an adaptive PAR strategy and an iterative local search that enhance the performance of the algorithm. The problem-specific harmony memory representation, initialization and improvisation improved the results for the BPP. The performance of the proposed HSA is evaluated on 180 benchmark instances consisting of 100, 200 and 500 objects to be packed. In order to compare the performance of the proposed HSA with the other variants, metrics such as best, mean, standard deviation, success rate and acceleration rate are used. Here, best is the most converged solution present in the harmony memory matrix. Version 3 is the proposed method, version 2 uses the decreasing PAR strategy and version 1 uses the increasing PAR strategy of Mahdavi et al. (2007). Version 3 (the proposed method) outperforms version 2 and version 1 in all aspects. Also, the box plots show that the values of the solution vectors in the harmony memory of version 3 are far more converged than those of the other two variants. An experiment was also conducted to show the effect of the harmony memory size on the performance of the algorithm.
In the future, there is scope to investigate different strategies for controlling the exploration and exploitation capabilities by tuning PAR and by using hybridization approaches to obtain optimal solutions.