Coordinate descent heuristics for the irregular strip packing problem of rasterized shapes

We consider the irregular strip packing problem of rasterized shapes, in which a given set of irregularly shaped pieces represented in pixels must be placed into a rectangular container without overlap. Rasterized shapes admit simple intersection-test procedures free of the exceptional cases that arise from geometric issues, but they often require considerable memory and computational effort at high resolutions. To reduce the complexity of rasterized shapes, we propose a paired-scanline representation, called the double scanline representation, that merges consecutive pixels in each row and each column into strips of unit width. Based on this, we develop coordinate descent heuristics for the raster model that repeat a line search in the horizontal and vertical directions alternately, and we introduce a corner detection technique from computer vision to reduce the search space. Computational results on test instances show that the proposed algorithm obtains sufficiently dense layouts of high-resolution rasterized shapes within reasonable computation time.


Introduction
The irregular strip packing problem (ISP), often called the nesting problem, is one of the representative cutting and packing problems, and it emerges in a wide variety of industrial applications, such as garment manufacturing, sheet metal cutting, furniture making and shoe manufacturing (Alvarez-Valdes et al., 2018; Scheithauer, 2018). This problem is categorized as the two-dimensional, irregular open dimensional problem in Dyckhoff (1990) and Wäscher et al. (2007). Given a set of irregularly shaped pieces and a rectangular container with a fixed width and a variable length, this problem asks for a feasible layout of the pieces in the container such that no pair of pieces overlaps and the container length is minimized. We note that rotations of pieces are usually restricted to a few angles (e.g., 0 or 180 degrees) in many industrial applications, because textiles have grain and may have a drawing pattern. Figure 1 shows an instance of ISP and a feasible solution.
The first issue encountered when handling ISP is how to represent the irregular shapes. In computer graphics, irregular shapes are often represented in two models, as shown in Figure 2: the vector model represents an irregular shape as a set of chained line and curve segments forming its outline, and the raster model (also known as the bitmap model) represents an irregular shape as a set of grid pixels forming its inside. The vector model requires complicated trigonometric computations with much exception handling for the intersection test of irregular shapes, while the raster model provides simple computations without any exception handling. On the other hand, the vector model often requires less memory and computation time for the intersection test than the raster model, because the number of line and curve segments in the vector model is often much smaller than the number of grid pixels in the raster model. For the vector model, in particular for polygons, recent developments in computational geometry such as the no-fit polygon (NFP) enable us to perform the intersection test efficiently (Bennell & Oliveira, 2008; Leao et al., 2020). Based on these efficient geometric computations, many efficient heuristic algorithms have been developed for ISP of polygons (called the polygon packing problem in this paper) (Bennell & Oliveira, 2009; Hu et al., 2018b). A standard approach for the polygon packing problem is to develop construction algorithms, e.g., the bottom-left (BL) algorithm and the bottom-left fill (BLF) algorithm, which place the pieces one by one into the container according to a given order (Albano & Sapuppo, 1980; Błażewicz et al., 1993; Oliveira et al., 2000; Dowsland et al., 2002; Gomes & Oliveira, 2002; Bennell & Song, 2010). Another approach is to resort to improvement algorithms that relocate pieces by solving the compaction problem and/or the separation problem (Li & Milenkovic, 1995; Bennell & Dowsland, 2001; Gomes & Oliveira, 2006).
The compaction problem relocates pieces from a given feasible placement so as to minimize the container length. The separation problem relocates pieces from a given infeasible placement so as to make it feasible while minimizing the total amount of their translation. The overlap minimization problem (OMP) is a variant of the separation problem that places pieces within a container of given width and length so as to minimize the overlap penalty over all pairs of pieces (Egeblad et al., 2007; Imamichi et al., 2009; Umetani et al., 2009; Leung et al., 2012; Elkeran, 2013). Mundim et al. (2017) and Sato et al. (2019) constrained the search space to place the pieces on a discrete set of positions (i.e., a grid), and developed a discretized NFP called the no-fit raster for the intersection test of irregular shapes. This constrained model of the search space on a grid is called the dotted-board model (Toledo et al., 2013), which is similar to the raster model but differs in that the given pieces are still represented as polygons (i.e., the vector model).
For the raster model, Oliveira & Ferreira (1993), Segenreich & Braga (1986) and Babu & Babu (2001) represented the pieces by matrices of different codes (Bennell & Oliveira, 2008). Raster representations encode irregular shapes simply and provide simple procedures for their intersection test, which makes it possible to develop practical software for a wide variety of free-form packing problems involving many curve segments and holes merely by converting the input data to pixels. However, their computational costs depend heavily on the resolution of the raster representation; i.e., increasing the resolution greatly increases the memory usage and computation time of these procedures. To improve the computational efficiency of the intersection test and of overlap minimization in the raster model, Okano (2002) and Hu et al. (2018a) proposed sophisticated data structures that merge the pixels of the irregular shapes into strips or rectangles. Using these representations, several heuristic algorithms have been developed for ISP in the raster model (Segenreich & Braga, 1986; Oliveira & Ferreira, 1993; Jain & Gea, 1998; Babu & Babu, 2001; Chen et al., 2004; Wong et al., 2009). Despite these heuristic algorithms for the raster model, their computational results were still restricted to low-resolution instances and were insufficient to demonstrate high performance on high-resolution instances.
In this paper, we develop an efficient line search algorithm for coordinate descent heuristics in the raster model, which makes high-performance improvement algorithms possible for high-resolution instances. The coordinate descent heuristics (CDH) are improvement algorithms that repeat a line search in the horizontal and vertical directions alternately, and they have achieved high performance for ISP in the vector model (Egeblad et al., 2007; Umetani et al., 2009). However, unlike the vector model, conventional raster representations have been too computationally expensive to support efficient line search algorithms, especially at high resolutions. To reduce the complexity of rasterized shapes, we propose a paired-scanline representation, called the double scanline representation, for NFPs of rasterized shapes, which reduces their complexity by merging consecutive pixels in each row and each column into strips of unit width. We also introduce a corner detection technique from computer vision (Rosten & Drummond, 2006) to reduce the search space of the line search. The coordinate descent heuristics are incorporated into a variant of the guided local search (GLS) for OMP based on Umetani et al. (2009). Using this as the main component, we then develop a heuristic algorithm for ISP, which we call the guided coordinate descent heuristics (GCDH).
This paper is organized as follows. We first formulate ISP and OMP in the raster model and illustrate the outline of the proposed algorithm GCDH for ISP in Section 2. We then introduce an efficient intersection test for rasterized shapes in Section 3. We explain the main component of the proposed algorithm CDH for OMP in Section 4 and the construction algorithm for an initial solution of ISP in Section 5. Finally, we report computational results in Section 6 and make concluding remarks in Section 7.

Irregular strip packing problem of rasterized shapes
We are given a list of n pieces P = {P_1, P_2, . . . , P_n} of rasterized shape together with a list of their possible orientations O = (O_1, O_2, . . . , O_n), where a piece P_i may be rotated by o degrees for each o ∈ O_i. We assume without loss of generality that zero degrees is always included in O_i. We are also given a rectangular container C = C(W, L) with a width W and a length L, where W is a non-negative constant and L is a non-negative variable. We assume that the container edges of width W and length L are parallel to the y-axis and the x-axis, respectively, as shown in Figure 1, and that the bottom-left corner of the container C is the origin (0, 0). We denote by P_i(o) a piece P_i rotated by o ∈ O_i degrees, which may be written as P_i for simplicity when its orientation is not specified or is clear from the context. We define the bounding box of a piece P_i(o) as the smallest rectangle that encloses P_i(o); its width and length are denoted by w_i(o) and l_i(o), respectively. We describe the position of a piece P_i(o) by the coordinate v_i = (x_i, y_i) of the center of its bounding box, called the reference point. For convenience, we regard a piece P_i as the set of grid pixels it occupies when its reference point is placed at the origin (0, 0); a rotated piece P_i(o) is sometimes reshaped to fit the grid. We then describe a piece P_i placed at v_i by the Minkowski sum

P_i ⊕ v_i = { p + v_i | p ∈ P_i }.

We describe a solution of ISP by the lists of positions v = (v_1, v_2, . . . , v_n) and orientations o = (o_1, o_2, . . . , o_n) of all pieces P_i (i = 1, . . . , n). We note that a solution (v, o) uniquely determines a layout of the pieces. The ISP is described as follows:

minimize  L
subject to  (P_i(o_i) ⊕ v_i) ∩ (P_j(o_j) ⊕ v_j) = ∅,  1 ≤ i < j ≤ n,
            P_i(o_i) ⊕ v_i ⊆ C(W, L),  o_i ∈ O_i,  v_i ∈ Z_+^2,  1 ≤ i ≤ n,

where Z_+ is the set of nonnegative integers. We note that the reference point v_i = (x_i, y_i) of a piece P_i(o_i) takes the coordinates (⌈l_i(o_i)/2⌉, ⌈w_i(o_i)/2⌉) and (L − ⌊l_i(o_i)/2⌋, W − ⌊w_i(o_i)/2⌋) when it is placed at the bottom-left and top-right corners of the container C, respectively. We also note that minimization of the length L is equivalent to maximization of the density defined by Σ_{1≤i≤n} area(P_i) / (WL).
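The pixel-set view of a piece and its placement by the Minkowski sum above can be sketched concretely as follows (a minimal illustration, not the paper's implementation; `place` and `density` are hypothetical names):

```python
# A rasterized piece as a set of pixel coordinates, placed by translating
# every pixel by the reference point v, i.e. P ⊕ v = {p + v : p ∈ P}.
def place(piece, v):
    """Translate the pixel set `piece` by v = (x, y)."""
    vx, vy = v
    return {(px + vx, py + vy) for (px, py) in piece}

def density(pieces, W, L):
    """Layout density: total piece area divided by the container area W*L."""
    return sum(len(p) for p in pieces) / (W * L)

# An L-shaped piece of 3 pixels placed at reference point (2, 1):
piece = {(0, 0), (1, 0), (0, 1)}
placed = place(piece, (2, 1))
```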

Overlap minimization problem
We consider OMP as a sub-problem of ISP to find a feasible layout (v, o) of given pieces for the container C with a given length L. A solution of OMP may have a number of overlapping pieces, and the total amount of overlap is penalized in such a way that a solution with no penalty gives a feasible layout for ISP. Let f ij (v i , v j , o i , o j ) be a function that measures the overlap amount for a pair of pieces P i (o i ) ⊕ v i and P j (o j ) ⊕ v j . The objective of OMP is to find a solution (v, o) that minimizes the total amount of the overlap penalty under the constraint that all pieces P i (1 ≤ i ≤ n) are placed within the container C(W, L).
We introduce the directional penetration depth to define the overlap penalty; it was originally defined as the minimum translational distance in a given direction needed to separate a pair of polygons (Dobkin et al., 1993). If the pieces do not overlap, their directional penetration depth is zero. In the raster model, we define the directional penetration depth in a direction d ∈ {(1, 0), (0, 1)} as

δ(P_i ⊕ v_i, P_j ⊕ v_j, d) = min{ |t| : (P_i ⊕ v_i) ∩ (P_j ⊕ (v_j + t d)) = ∅, t ∈ Z },

where d = (1, 0) and d = (0, 1) give the horizontal and vertical penetration depths, respectively. We then define the overlap penalty f_ij(v_i, v_j, o_i, o_j) for a pair of pieces P_i(o_i) ⊕ v_i and P_j(o_j) ⊕ v_j as the minimum of the horizontal and vertical penetration depths:

f_ij(v_i, v_j, o_i, o_j) = min{ δ(P_i(o_i) ⊕ v_i, P_j(o_j) ⊕ v_j, (1, 0)), δ(P_i(o_i) ⊕ v_i, P_j(o_j) ⊕ v_j, (0, 1)) }.  (5)
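The directional penetration depth admits a direct, brute-force sketch on pixel sets (for illustration only; the paper computes it far more efficiently via NFPs and scanline representations):

```python
# Brute-force directional penetration depth on pixel sets: the minimum |t|
# such that translating the second piece by t*d removes all overlap.
def penetration_depth(pi, pj, d):
    """pi, pj: placed pixel sets; d: direction, e.g. (1, 0) or (0, 1)."""
    if not (pi & pj):               # no overlap -> depth is zero
        return 0
    dx, dy = d
    t = 1
    while True:                      # try translations +-1, +-2, ...
        for s in (t, -t):
            moved = {(x + s * dx, y + s * dy) for (x, y) in pj}
            if not (pi & moved):
                return t
        t += 1
```

The overlap penalty f_ij is then the smaller of `penetration_depth(pi, pj, (1, 0))` and `penetration_depth(pi, pj, (0, 1))`.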

Entire algorithm for the irregular strip packing problem
We give an entire description of the proposed algorithm GCDH for ISP. GCDH first generates an initial solution (v, o) by a construction algorithm explained in Section 5, and sets L to the minimum container length containing all pieces P_i (1 ≤ i ≤ n). It then searches for the minimum feasible container length L* by shrinking or extending the right side of the container C until the time limit is reached, where the ratios of shrinking and extending the container length are controlled by the parameters r_dec and r_inc, respectively. If the current solution (v, o) is feasible, it shrinks the container length L to ⌊(1 − r_dec)L⌋ and relocates any protruding pieces P_i to random positions in the container C(W, L); otherwise, it extends the container length L to ⌊(1 + r_inc)L⌋. If there is at least one overlapping pair of pieces in the current solution, it tries to resolve the overlap by CDH for OMP, explained in Section 4. Figure 3 shows the outline of the proposed algorithm GCDH. The algorithm is formally described in Algorithm 1, where we omit the input data W, P, O common to all algorithms in this paper.
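The shrink/extend loop described above can be sketched schematically as follows (a simplified illustration: `solve_omp` is a placeholder standing in for CDH/GLS, and the relocation of protruding pieces is omitted):

```python
import math
import time

def gcdh(L0, time_limit, solve_omp, r_dec=0.02, r_inc=0.005):
    """Shrink the container while the layout is feasible, extend otherwise.
    solve_omp(L) stands in for overlap resolution and returns feasibility."""
    L, L_best = L0, L0
    deadline = time.time() + time_limit
    while time.time() < deadline:
        if solve_omp(L):
            L_best = min(L_best, L)            # record feasible length
            L = math.floor((1 - r_dec) * L)    # shrink and retry
        else:
            L = math.floor((1 + r_inc) * L)    # extend and retry
    return L_best
```

For example, with a toy oracle that is feasible exactly when L ≥ 50, the loop converges to the minimum feasible length 50.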

No-fit polygon for intersection test
The no-fit polygon (NFP) introduced by Art (1966) is a representative geometric technique used in many algorithms for the polygon packing problem. It is also used in other applications such as image processing and robot motion planning, where it is known as the Minkowski difference and the configuration-space obstacle, respectively. The no-fit polygon NFP(P_i, P_j) of an ordered pair of pieces P_i and P_j is defined by

NFP(P_i, P_j) = P_i ⊕ (−P_j) = { p − q | p ∈ P_i, q ∈ P_j }.

The NFP has the important property that P_j ⊕ v_j overlaps with P_i ⊕ v_i if and only if v_j − v_i ∈ NFP(P_i, P_j) holds. That is, the intersection test for two irregular shapes P_i ⊕ v_i and P_j ⊕ v_j reduces to testing whether the point v_j − v_i lies inside the irregular shape NFP(P_i, P_j). Figure 4 shows an example of NFP(P_i, P_j) for two irregular shapes P_i and P_j, where the solid arrow illustrates the minimum translation of the piece P_j in the horizontal and vertical directions that separates it from the piece P_i. The NFP enables us to compute the intersection test and the directional penetration depth efficiently, assuming that the NFPs of all pairs of pieces are computed in advance.
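For rasterized shapes, the NFP and the resulting constant-time intersection test can be sketched naively as follows (a brute-force illustration; practical implementations such as Integrate-NFP are far more efficient):

```python
from itertools import product

# NFP(P_i, P_j) = P_i ⊕ (−P_j) = {p − q : p ∈ P_i, q ∈ P_j}.
# P_j ⊕ v_j overlaps P_i ⊕ v_i iff v_j − v_i lies in this set, so the
# intersection test reduces to a single point-membership query.
def nfp(pi, pj):
    return {(px - qx, py - qy) for (px, py), (qx, qy) in product(pi, pj)}

def overlaps(nfp_ij, vi, vj):
    """O(1) intersection test given a precomputed NFP stored as a hash set."""
    return (vj[0] - vi[0], vj[1] - vi[1]) in nfp_ij
```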

Scanline representation for rasterized shapes
We consider a paired-scanline representation, called the double scanline representation, that reduces the complexity of rasterized shapes by merging consecutive pixels in each row and each column into strips of unit width. Figure 5 shows an example of the double scanline representation of a rasterized shape. Hu et al. (2018a) developed an efficient algorithm, called Integrate-NFP, to compute NFPs of a single (i.e., horizontal or vertical) scanline representation (see Hu et al. (2018a) for details of its implementation).
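Assuming a shape given as a set of pixel coordinates, the construction of the double scanline representation can be sketched as follows (an illustrative implementation, not the paper's code):

```python
from collections import defaultdict

def scanlines(pixels, key_axis):
    """Merge maximal runs of consecutive pixels into strips of unit width.
    key_axis=1 groups by row y (horizontal strips over x); key_axis=0
    groups by column x (vertical strips over y).  Each strip is returned
    as (fixed_coordinate, run_start, run_end)."""
    lines = defaultdict(list)
    for p in pixels:
        lines[p[key_axis]].append(p[1 - key_axis])
    strips = []
    for fixed, coords in lines.items():
        coords.sort()
        start = prev = coords[0]
        for c in coords[1:]:
            if c != prev + 1:            # gap -> close the current run
                strips.append((fixed, start, prev))
                start = c
            prev = c
        strips.append((fixed, start, prev))
    return sorted(strips)

shape = {(0, 0), (1, 0), (3, 0), (0, 1)}
h_strips = scanlines(shape, key_axis=1)  # per-row runs
v_strips = scanlines(shape, key_axis=0)  # per-column runs
```

The two strip lists together form the double scanline representation of the shape.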
Based on this, we compute the horizontal penetration depth δ(P_i ⊕ v_i, P_j ⊕ v_j, (1, 0)) efficiently. Figure 6 shows an example of computing the horizontal (and vertical) penetration depth from the NFP of the scanline representation. Let S_ij = {S_ij1, S_ij2, . . . , S_ijm_ij} be the set of horizontal strips representing NFP(P_i, P_j), where m_ij is the number of strips representing NFP(P_i, P_j). When the relative position v_j − v_i of the piece P_j lies in a strip S_ijk, the horizontal penetration depth δ(P_i ⊕ v_i, P_j ⊕ v_j, (1, 0)) is the minimum distance from v_j − v_i to the left and right sides of the strip S_ijk. We note that the horizontal penetration depth can be computed in O(1) time when NFP(P_i, P_j) is y-monotone, where a rasterized shape is y-monotone if it can be represented by a set of strips such that there is exactly one strip in each row and strips in adjacent rows are contiguous. We compute the vertical penetration depth δ(P_i ⊕ v_i, P_j ⊕ v_j, (0, 1)) efficiently in the same fashion, using the vertical scanline representation of NFP(P_i, P_j). In Figure 6, the solid arrows illustrate the minimum translations of the piece P_j that separate it from the piece P_i in the horizontal and vertical directions; their lengths are the horizontal and vertical penetration depths, respectively.
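The strip-based depth lookup can be sketched as follows (an illustration assuming the horizontal strips of the NFP are given as `(y, x_start, x_end)` triples, as in the previous sketch):

```python
def horizontal_depth(strips, u):
    """Horizontal penetration depth from the NFP's horizontal strips.
    strips: list of (y, x_start, x_end); u = v_j - v_i: relative position.
    Returns 0 when u lies in no strip (the pieces do not overlap),
    otherwise the smaller translation pushing u just outside its strip."""
    ux, uy = u
    for y, xs, xe in strips:
        if y == uy and xs <= ux <= xe:
            return min(ux - xs + 1, xe - ux + 1)  # exit left vs. right
    return 0
```

With a y-monotone NFP, the single strip of row `uy` can be found by direct indexing instead of a scan, giving the O(1) bound mentioned above.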

Outline of the coordinate descent heuristics
We develop an improvement algorithm for OMP called the coordinate descent heuristics (CDH), which start from an initial solution and repeatedly apply the line search that minimizes the objective function along a coordinate direction until no better solution is found in any coordinate directions.
We first explain the neighborhood of CDH for OMP. The neighborhood NB(v, o) of the current solution (v, o) is the set of solutions obtainable by choosing a piece P_k, setting an orientation o'_k ∈ O_k, and iteratively applying the line search to find a new position v'_k. The quality of a solution (v, o) is measured by the following weighted overlap penalty function:

F(v, o) = Σ_{1≤i<j≤n} α_ij f_ij(v_i, v_j, o_i, o_j),

where α_ij > 0 are the penalty weights and f_ij are the overlap penalties defined in (5) for a pair of pieces P_i(o_i) ⊕ v_i and P_j(o_j) ⊕ v_j. The penalty weights α_ij are adaptively controlled by GLS, to be explained in Section 4.3. For a piece P_k(o'_k), the neighborhood search finds a new position v'_k in the container C(W, L) that minimizes the weighted overlap penalty function

F_k(v_k, o'_k) = Σ_{1≤j≤n, j≠k} α_kj f_kj(v_k, v_j, o'_k, o_j).  (8)

To this end, the neighborhood search repeatedly moves the piece P_k(o'_k) in the horizontal and vertical directions alternately until no better position is found in either direction. Figure 7 shows how the neighborhood search proceeds. For each move of the piece P_k(o'_k) in a direction d ∈ {(1, 0), (0, 1)}, let N be the set of valid t such that the piece P_k(o'_k) placed on the line v_k + td (t ∈ Z) is contained in the container C(W, L), where Z is the set of integers. The line search (to be explained in Section 4.2) finds a new valid position v'_k = v_k + td (t ∈ N) that minimizes the weighted overlap penalty function F_k(v_k + td, o'_k). The neighborhood search is formally described in Algorithm 2.
We now describe the outline of CDH for OMP. Starting from an initial solution (v, o) with some overlapping pieces, CDH repeatedly replaces the current solution with a better solution in its neighborhood NB(v, o). When no better solution is found in the neighborhood, CDH returns the current solution as a locally optimal solution, together with the best solution (v*, o*) obtained so far measured by the original penalty function F, and halts.
We also incorporate the fast local search (FLS) strategy (Voudouris & Tsang, 1999) to improve the efficiency of CDH. The strategy decomposes the neighborhood into a number of subneighborhoods, which are labeled active or inactive depending on whether they are being searched or not; i.e., it skips evaluating all neighbor solutions in inactive subneighborhoods. We define the subneighborhood NB_k(v, o) (1 ≤ k ≤ n) of the current solution (v, o) as the set of solutions obtainable by setting an orientation o'_k ∈ O_k and applying the neighborhood search to the piece P_k; i.e., the neighborhood NB(v, o) is partitioned with respect to the pieces P_k (1 ≤ k ≤ n). CDH first sets all subneighborhoods NB_k(v, o) (1 ≤ k ≤ n) to be active, and it searches the active subneighborhoods NB_k(v, o) in random order. If no improvement has been made in an active subneighborhood NB_k(v, o), then it becomes inactive; if the piece P_k has been moved, then CDH activates all subneighborhoods NB_j(v, o) corresponding to the pieces P_j overlapping with the piece P_k before and after its move. CDH is formally described in Algorithm 3, where A denotes the set of indices k (1 ≤ k ≤ n) corresponding to the active subneighborhoods NB_k(v, o), and (v*, o*) and (ṽ, õ) denote the best solutions under the original overlap penalty function (with all weights equal to one) and the weighted overlap penalty function F, respectively.
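The active-subneighborhood bookkeeping of CDH with FLS can be sketched schematically (the `improve_piece` callback is a placeholder for the orientation and line-search steps of Algorithm 2; it returns the pieces whose subneighborhoods must be reactivated, or None when no improvement was found):

```python
import random

def cdh(n, improve_piece, rng=random):
    """Fast-local-search loop: keep a set of active subneighborhoods and
    search them in random order until every one has become inactive."""
    active = set(range(n))
    while active:
        k = rng.choice(sorted(active))
        touched = improve_piece(k)
        if touched is None:
            active.discard(k)          # P_k is locally optimal -> inactive
        else:
            active |= set(touched)     # reactivate pieces affected by the move
            active.add(k)
    # all subneighborhoods inactive: the solution is locally optimal
```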

Efficient implementation of the line search
We develop an efficient line search algorithm for the neighborhood search, which finds a new position when a piece P_k(o'_k) moves in the horizontal or vertical direction d ∈ {(1, 0), (0, 1)}. Recall that the line search finds the new valid position v'_k = v_k + td (t ∈ N) minimizing F_k(v_k + td, o'_k), where N is the set of valid t for which the piece P_k(o'_k) is contained in the container C(W, L). We consider below the case in which P_k(o'_k) moves in the horizontal direction (i.e., d = (1, 0)); the case of the vertical direction (i.e., d = (0, 1)) is almost identical and is omitted. Let

I_kj = { t ∈ N | (P_k(o'_k) ⊕ (v_k + td)) ∩ (P_j(o_j) ⊕ v_j) ≠ ∅ }

be the set of positions of the piece P_k(o'_k), in terms of t ∈ N, at which it overlaps with another piece P_j(o_j). If I_kj = ∅ holds, then the overlap penalty f_kj is zero for every t ∈ N. Figure 8 shows an example of detecting the overlapping pieces P_j(o_j) when the piece P_k(o'_k) moves in the horizontal direction. Let N⁺ = ∪_{1≤j≤n, j≠k} I_kj be the set of t (∈ N) inducing overlap with other pieces, and let N⁻ = N \ N⁺ be its complement. If N⁻ ≠ ∅ holds, then the line search algorithm takes the minimum t ∈ N⁻ (i.e., the leftmost feasible position); otherwise, it finds the position t ∈ N⁺ that minimizes the overlap penalty function F_k(v_k + td, o'_k). We now consider how to compute the overlap penalty function F_k(v_k + td, o'_k), which by definition (8) decomposes into horizontal and vertical penetration depths. We denote the horizontal and vertical penetration depths for a given t ∈ N by δ(P_j(o_j) ⊕ v_j, P_k(o'_k) ⊕ (v_k + td), d) for d ∈ {(1, 0), (0, 1)}, respectively. Figure 9 shows an example of computing them from the scanline representations of NFP(P_j(o_j), P_k(o'_k)): when the relative position v'_k − v_j lies in a strip, the horizontal (resp., vertical) penetration depth takes the minimum distance |s| from v'_k − v_j to the left and right (resp., bottom and top) sides of the strip.
The line search algorithm becomes very time-consuming as |N⁺| grows with the resolution of the raster representation, as long as it computes the overlap penalty F_k(v_k + td, o'_k) for all t ∈ N⁺. As shown in Figure 9, the horizontal penetration depth forms a regular stepwise function with steady changes, and it can attain its minimum only at the left and right ends of the strips on the horizontal line v_k + td (t ∈ N). The vertical penetration depth, on the other hand, forms an irregular stepwise function that sometimes changes rapidly; however, such rapid changes can occur only at the "corners" of NFP(P_j(o_j), P_k(o'_k)). We accordingly introduce a corner detection technique to restrict the search space of the line search to positions where the vertical penetration depth may change rapidly. We first detect the contour of NFP(P_j(o_j), P_k(o'_k)) and then apply a fast corner detection algorithm called FAST (Rosten & Drummond, 2006) to the contour, as shown in Figure 10. Let C_kj denote the set of corners in NFP(P_j(o_j), P_k(o'_k)) detected by FAST. We consider the set of positions Ī_kj (⊆ I_kj) of the piece P_k(o'_k), in terms of t ∈ N, that contains (i) the left and right ends of the horizontal strips on the horizontal line v_k + td (t ∈ N⁺) and (ii) the projections of the corners in C_kj onto the horizontal line v_k + td (t ∈ N⁺). Let N̄⁺ = ∪_{1≤j≤n, j≠k} Ī_kj (⊆ N⁺) be the reduced search space of the line search; i.e., the line search algorithm computes the weighted overlap penalty F_k(v_k + td, o'_k) only for t ∈ N̄⁺ (instead of N⁺) when N⁻ = ∅ holds. In Figure 10 (right), the grey nodes show the corners in NFP(P_j(o_j), P_k(o'_k)) detected by FAST, and the white nodes show the positions in Ī_kj to be evaluated by the line search algorithm.
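The effect of restricting the line search to corner-induced positions can be illustrated in one dimension (a much-simplified stand-in: FAST operates on 2-D contours, whereas this sketch merely keeps the breakpoints of a sampled stepwise depth function, which is enough to preserve its minimum):

```python
def candidate_positions(depths):
    """depths: depth values sampled at t = 0, 1, 2, ...  Keep only the
    endpoints and both sides of each jump of the stepwise function; the
    values at all skipped t equal those of their neighbours, so the
    minimum over the candidates equals the minimum over all t."""
    cands = {0, len(depths) - 1}
    for t in range(1, len(depths)):
        if depths[t] != depths[t - 1]:
            cands.update((t - 1, t))   # both sides of the jump
    return sorted(cands)
```

On long constant plateaus this evaluates far fewer positions than an exhaustive scan, which mirrors the reduction from N⁺ to N̄⁺ above.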

Guided local search
It is often the case that CDH alone does not attain a sufficiently good solution. To improve its performance, we incorporate it into one of the representative metaheuristics, the guided local search (GLS) (Voudouris & Tsang, 1999). GLS repeats CDH while adaptively updating the penalty weights of the objective function to resume the search from the previous locally optimal solution. GLS starts from an initial solution (v, o) with some overlapping pieces, where the penalty weights α_ij are initialized to 1.0. Whenever CDH stops at a locally optimal solution (v, o), GLS updates the penalty weights α_ij by the following formula:

α_ij ← α_ij + f_ij(v_i, v_j, o_i, o_j) / max_{1≤i'<j'≤n} f_{i'j'}(v_{i'}, v_{j'}, o_{i'}, o_{j'}).  (14)

By repeatedly updating the penalty weights α_ij (1 ≤ i < j ≤ n), the current solution (v, o) eventually ceases to be locally optimal under the updated weighted overlap penalty function F, and GLS resumes the search from the current solution. GLS stops if it fails to improve the best solution (v*, o*) under the original penalty function F after a specified number k_max of consecutive calls to CDH. GLS is formally described in Algorithm 4.
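The weight update can be sketched as follows (the normalized form shown here is an assumption in the spirit of Umetani et al. (2009): each overlapping pair's weight grows in proportion to its current overlap, so persistent overlaps become increasingly expensive to keep):

```python
def update_weights(alpha, overlap):
    """alpha, overlap: dicts keyed by pairs (i, j) with i < j.
    Adds each pair's overlap, normalized by the current maximum,
    to its penalty weight; returns the updated weights."""
    f_max = max(overlap.values())
    if f_max == 0:
        return dict(alpha)             # feasible layout: nothing to update
    return {ij: a + overlap[ij] / f_max for ij, a in alpha.items()}
```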

Construction algorithm for the initial layout
We generate an initial solution (v, o) for ISP using the next-fit decreasing height (NFDH) algorithm. NFDH is a variant of the level algorithms for the rectangle packing problem (Hu et al., 2018b) that places rectangular pieces from bottom to top in columns forming levels, as shown in Figure 11. The first level is the left side of the container, and each subsequent level lies along the vertical line through the right edge of the longest piece placed in the previous level. NFDH first sorts the pieces P_i (1 ≤ i ≤ n) in descending order of the lengths l_i of their bounding boxes, where the pieces are not rotated (i.e., l_i = l_i(0)). Let the container length L be sufficiently long, e.g., L > Σ_{1≤i≤n} l_i.
NFDH places the pieces P_i (1 ≤ i ≤ n) one by one into the container C in the above order. If possible, each piece P_i (more precisely, its bounding box) is placed at the bottom-most feasible position in the current level; otherwise, a new level is created for the piece P_i. We then apply a compaction algorithm (called Compact) based on the bottom-left strategy, which makes successive sliding moves to the left and the bottom alternately as long as possible. The compaction algorithm can be implemented on top of the neighborhood search (Algorithm 2) by replacing the improvement condition on F_k with the condition t < 0 (i.e., the piece P_k slides to the left or the bottom without overlap). They are formally described in Algorithm 5.
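The level construction of NFDH can be sketched on bounding boxes alone (a simplified illustration that ignores the actual pixel shapes and the subsequent compaction step):

```python
def nfdh(boxes, W):
    """boxes: list of bounding boxes (w_i, l_i); W: container width.
    Pieces are sorted by decreasing length l_i and stacked bottom to top
    in vertical levels; a new level opens at the right edge of the longest
    piece of the previous level.  Returns bottom-left positions (x, y)
    in the original input order."""
    order = sorted(range(len(boxes)), key=lambda i: -boxes[i][1])
    pos = [None] * len(boxes)
    level_x, next_x, y = 0, 0, 0
    for i in order:
        w, l = boxes[i]
        if y + w > W:                  # piece does not fit: open a new level
            level_x, y = next_x, 0
        pos[i] = (level_x, y)
        next_x = max(next_x, level_x + l)
        y += w
    return pos
```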

Computational results
The guided coordinate descent heuristics (GCDH) proposed in this paper was implemented in the C programming language and run on a single thread on a Mac with a 3.2 GHz Intel Core i7 processor and 64 GB of memory. We set the input parameters r_dec = 0.02 and r_inc = 0.005 for GCDH (Algorithm 1) and k_max = 200 for GLS (Algorithm 4), following Umetani et al. (2009). The performance of GCDH was tested on two sets of ISP instances represented in the vector model. The first set includes 15 instances of the standard vector model without circular arcs or holes (i.e., the simple polygon packing problem), which are available online at the web site of the working group on cutting and packing (ESICUP, 2003). The second set includes 10 instances of an extended vector model, several of which incorporate circular arcs and holes into polygons (Burke et al., 2010). Table 1 summarizes the data of the instances. The second column "#shapes" shows the number of different shapes, the third column "#pieces" shows the total number of pieces, the fourth column "avg.#lines" shows the average number of line segments in an irregular shape, the fifth column "avg.#arcs" shows the average number of circular arcs in an irregular shape, the sixth column "avg.#holes" shows the average number of holes in an irregular shape, and the seventh column "degrees" shows the permitted orientations, which are common to all pieces in each instance.
We converted these instances into the raster model at five different resolutions; i.e., we set the width W of the container C to 128, 256, 512, 1024 and 2048 pixels. Tables 2 and 3 show the instance sizes of the raster model (i.e., the number of pixels representing all the irregular shapes) and the memory usage when running GCDH, respectively. Figure 12 shows the best solutions for a test instance at different resolutions. We note that the rasterized shapes (especially at low resolution, e.g., W = 128px) differ considerably from the original shapes in the vector model, which often leads to different optimal solutions. For example, diagonal straight lines in the vector model are often converted into jagged ones in the low-resolution raster model, which prevents the pieces from fitting together precisely. In other cases, complicated shapes in the vector model are converted into simple ones in the low-resolution raster model, which enables GCDH to perform efficiently and attain better solutions. We accordingly use the middle-resolution instances (W = 512px) in the comparison with the other algorithms for the vector model. Tables 4 and 5 show the preprocessing time in seconds for generating NFPs for all pairs of different shapes and for detecting corners in all generated NFPs. Figures 13 and 14 show the evolution of density (%) over 10 runs of GCDH on the Swim instance with a time limit of 18000 seconds. GCDH starts from an initial solution of density 42.47% obtained by the construction algorithm (Algorithm 5) and then rapidly improves it within several seconds to a density over 70%. We observed that GCDH behaves similarly on the other instances. We first evaluated the effect of the corner detection technique on GCDH.
We tested GCDH 10 times for each instance with a time limit of 1200 seconds per run, which is the same for all resolutions and does not include the preprocessing time for generating NFPs and detecting their corners (Tables 4 and 5). Tables 6 and 7 show the number of calls to CDH (Algorithm 3) with and without the corner detection, respectively. Table 8 shows the improvement in the computational efficiency of GCDH due to the corner detection with respect to the number of calls to CDH, where that of GCDH without it is normalized to one. GCDH with the corner detection was 14.08 times faster on average than GCDH without it for the high-resolution instances (W = 2048px). Figures 15 and 16 also show the average number of calls to CDH for all instances. The computational efficiency is much improved by the corner detection technique; i.e., the number of calls to CDH decreases roughly in proportion to the cube root of W with the corner detection, while it decreases roughly in proportion to W without it. Tables 9 and 10 show the computational results obtained by GCDH with and without the corner detection, respectively, evaluated in terms of the best and average density (%). Table 11 shows the improvement in the average density (%) of GCDH due to the corner detection. The improvement in computational efficiency also translated into better solution quality; e.g., GCDH with the corner detection attained an average density 2.02 points higher than GCDH without it for the high-resolution instances (W = 2048px).
We next compared the computational results of GCDH (W = 512px) with those reported by Umetani et al. (2009) (denoted "FITS"), Elkeran (2013) (denoted "GCS"), Sato et al. (2019) (denoted "ROMA") and Burke et al. (2010) (denoted "BLF"). FITS, GCS and ROMA are improvement algorithms for the polygon packing problem, where ROMA constrained the search space to place the pieces on a grid in the same way as Mundim et al. (2017) and computed the penetration depth efficiently via a variant of discretized Voronoi diagrams called the raster penetration map. Burke et al. (2010) considered an extended vector model incorporating circular arcs into polygons and proposed a robust orbital sliding method to compute NFPs. They developed a BLF algorithm that restricted the search space to vertical lines with sufficiently small gaps between them, and incorporated it into local search and tabu search (TS) algorithms to find a good order of the given pieces. Table 12 shows their computational results, evaluated in terms of the best and average density (%). We note that it is not appropriate to compare the densities of these algorithms directly, because GCDH was tested on the raster-model instances while the other algorithms were tested on the vector-model instances. Table 13 shows the computational environment and the computation time (in seconds) of the algorithms. Burke et al. (2010) tested four variants of their algorithm, each run 10 times; their results in Table 12 are the best of 40 runs. They did not use a time limit but stopped their algorithm by other criteria, and their computation time in Table 13 is the time spent to find the best solution reported in Table 12 in the run that found it; i.e., only the time of that single run is reported. Umetani et al. (2009), Elkeran (2013) and Sato et al. (2019) tested their algorithms 10, 20 and 30 times per instance, respectively, with the time limits for each run shown in Table 13.
GCDH obtained results comparable to the other algorithms for the first set of instances, even though it was tested on the high-resolution raster model; we note that the test instances of the vector model are represented by a small number of line segments. It also obtained better results than BLF for the second set of instances. Figures 17 and 18 show the best layouts obtained by GCDH for these instances (W = 512px). These computational results illustrate that GCDH attains good performance on a wide variety of ISPs, including instances with circular arcs and holes.
We last evaluated the performance of GCDH (W = 512px) and FITS on large-scale instances generated by copying the irregular shapes of the instances Fu and Mao. We tested GCDH and FITS 10 times for each instance with a time limit of 3600 seconds per run. Table 14 shows the computational results of GCDH and FITS on the large-scale instances, evaluated in terms of the average density (%) and the number of calls to CDH. The number after the name of each instance shows the number of copies; e.g., "Fu2" contains two copies of every piece in "Fu", so the number of pieces is doubled. Figures 19 and 20 also show the number of calls to CDH. GCDH and FITS show a similar tendency in computational efficiency as the number of pieces increases, with their numbers of calls to CDH decreasing roughly in proportion to the square of the number of pieces.

Conclusion
We develop coordinate descent heuristics for the irregular strip packing problem (ISP) of rasterized shapes that repeat a line search in the horizontal and vertical directions alternately. Rasterized shapes admit simple intersection-test procedures free of the exceptional cases that arise from geometric issues, but they often require considerable memory and computational effort at high resolutions. To reduce the complexity of rasterized shapes, we first propose the double scanline representation, which merges consecutive pixels in each row and each column into strips of unit width. Based on this, we develop an efficient line search algorithm that incorporates a corner detection technique from computer vision to reduce the search space. Computational results on test instances show that the proposed algorithm obtains sufficiently dense layouts of high-resolution rasterized shapes within reasonable computation time.