APPLICATION OF FIXED POINT THEOREM FOR DIGITAL IMAGES

Abstract


Preliminaries
Let X be a subset of Z^n for a positive integer n, where Z^n is the set of lattice points in n-dimensional Euclidean space, and let β represent an adjacency relation for the members of X. A digital image consists of the pair (X, β).

Definition 2.1: Let β, n be positive integers with 1 ≤ β ≤ n, and let p = (p_1, p_2, ..., p_n) and q = (q_1, q_2, ..., q_n) be two distinct points of Z^n. Then p and q are β-adjacent if there are at most β indices i such that |p_i − q_i| = 1 and, for all other indices j, p_j = q_j.

The following statement can be obtained from Definition 2.1: for a given p ∈ Z^n, the number of points q ∈ Z^n that are β-adjacent to p is denoted by k = k(β, n). It may be noted that k(β, n) is independent of p [6].

In general, to study an n-D digital image with 1 ≤ β ≤ n, k = k(β, n) is given by the formula k(β, n) = Σ_{i=1}^{β} 2^i C(n, i), where C(n, i) is the binomial coefficient. Suppose X is a non-empty subset of Z^n, 1 ≤ β ≤ n, and k = k(β, n). Then (X, β) is called a digital image with β-adjacency [13]; we also say that (X, β) is an n-D digital image [8].

Definition 2.2: Let X ⊂ Z^n and let d be the Euclidean metric on Z^n, so that (X, d) is a metric space. If (X, β) is a digital image with β-adjacency, then (X, d, β) is called a digital metric space [14].

Definition 2.3: A sequence {x_n} of points of the digital metric space (X, d, β) is a Cauchy sequence if there is M ∈ N such that d(x_n, x_m) < 1 for all n, m > M.
Theorem 2.1: For a digital metric space (X, d, β), if a sequence {x_m} ⊂ X ⊂ Z^n is a Cauchy sequence, then there is M ∈ N such that for all n, m > M we have x_n = x_m.
Definition 2.4: A sequence {x_n} of points of a digital metric space (X, d, β) converges to a limit L ∈ X if, for all ε > 0, there is M ∈ N such that d(x_n, L) < ε for all n > M.

Proposition 2.1: A sequence {x_n} of points of a digital metric space (X, d, β) converges to a limit L ∈ X if there is M ∈ N such that x_n = L for all n > M (i.e. x_n = x_{n+1} = x_{n+2} = ... = L).
Now we prove a fixed point theorem for θ-contractions.
Theorem 3.1: Suppose (X, d, β) is a digital metric space, T: X ⟶ X, and θ ∈ Θ. If d(Tx, Ty) ≤ θ(d(x, y)) for all x, y ∈ X, then T is called a digital θ-contraction, and T has a unique fixed point.
Therefore {d(x_{n+1}, x_n)} is a strictly decreasing sequence, so x_n = x_{n+1} for large n by Theorem 2.1, and hence x_n is a fixed point of T for large n. For the uniqueness of the fixed point of T, suppose u and v are fixed points of T.
Hence T has a unique fixed point.
Corollary 3.1 (Banach Contraction Principle in Digital Metric Spaces): Let (X, d, β) be a digital metric space and let T: X ⟶ X be such that d(Tx, Ty) ≤ λ d(x, y) for all x, y ∈ X and for some λ ∈ [0, 1). Then T has a unique fixed point.

Proof: Take θ(t) = λt in Theorem 3.1; the result follows.

Theorem 3.2: Let (X, d, β) be a digital metric space and T: X ⟶ X be such that d(Tx, Ty) < m(x, y) for all x, y ∈ X with x ≠ y, where m(x, y) = max{ }. Then T has a unique fixed point.
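Since Corollary 3.1 reduces to the classical Banach contraction principle, the fixed point can be located by simple Picard iteration. The following is a minimal numerical sketch in the ordinary metric space (R, |·|) rather than a digital metric space; the map T(x) = x/2 + 1, the tolerance, and the function name are illustrative choices.

```python
def iterate_to_fixed_point(T, x0, tol=1e-12, max_iter=1000):
    # Picard iteration x_{n+1} = T(x_n); converges for any λ-contraction, λ < 1
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# T(x) = x/2 + 1 is a contraction with λ = 1/2 and unique fixed point x = 2
print(iterate_to_fixed_point(lambda x: x / 2 + 1, 100.0))
```

Any starting value x0 works, since the contraction property forces every orbit toward the same fixed point.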

Uniqueness of fixed point of T
Let u and v be two fixed points of T, so that Tu = u and Tv = v. Hence there exists a unique fixed point. This completes the proof.

An Application of Fixed Point Theorems to Digital Images
Many mathematicians are interested in fractal geometry. Mandelbrot was the first to use the term "fractal." A fractal is a geometric shape in which each part is a reduced copy of the whole. The following are some fractal examples. Fractals are used to approximate many real-world objects, such as coastlines, mountains, trees, and clouds. Mandelbrot's book, The Fractal Geometry of Nature [16], sparked widespread interest.
Fractal image compression techniques are varied, but they represent only a small portion of all compression methods available. A compression technique is considered fractal if its basic concept is to utilise the self-similarities that naturally occur in many images. A part of an image can frequently be found that, if altered in some way, would fit into another part of the same image.
Fractals are often described as self-similar objects; that is where the "fractal" part comes from. Benoit B. Mandelbrot derived the term "fractal" from the Latin word fractus, which means "broken" or "uneven." Instead of attempting to define a fractal precisely, one may take an alternative approach to the problem: we might build a list of features that characterise fractals, as Falconer does in Fractal Geometry: Mathematical Foundations and Applications [20]. A fractal set F may not have all of the following features, but at least some of them:
1. F is self-similar on some scales;
2. F is detailed on all scales;
3. F's fractal dimension (specified in some way) does not have to be an integer in general;
4. F is usually described in terms of a simple algorithm.
A typical example of a fractal is the Sierpinski triangle. It is made by starting with an equilateral triangle (T_0) and locating the midpoints of each of its sides. Drawing lines between these midpoints creates a new triangle, which is then removed. Inside the original triangle we now have three smaller equilateral triangles (see Figure 3); this stage is denoted T_1. The next iteration is formed by repeating this process for each of the new triangles, and so on. The Sierpinski triangle is constructed by repeating this an unlimited number of times; the set is similar in spirit to the Cantor set. In general, the area of T_n is a_n = (3/4)^n a_0, where a_0 is the area of T_0. Hence the area of the Sierpinski triangle is lim_{n→∞} a_n = 0. The circumference of T_n, where we also count the circumference of each hole, can be found as follows:
1. n = 0: each side of T_0 has length 1, which yields the circumference l_0 = 3.
2. n = 1: T_1 consists of three equilateral triangles, each with 1/2 the side length of T_0. This gives the circumference l_1 = (3/2) l_0 = 9/2.
3. n = 2: since it is a repeating pattern, T_2 consists of three times as many triangles as T_1, each with half the side length.
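The triangle counts and areas above can be checked numerically; a small sketch, assuming for illustration that the initial triangle T_0 has area 1 (the function name is ours):

```python
# Triangle count and remaining area of the n-th Sierpinski iterate T_n,
# starting from a triangle of area 1 (an illustrative normalisation).
def sierpinski_stats(n):
    triangles = 3 ** n       # each step triples the number of triangles
    area = (3 / 4) ** n      # each step removes a quarter of the remaining area
    return triangles, area

print(sierpinski_stats(0))   # (1, 1.0)
print(sierpinski_stats(3))
```

As n grows, the triangle count explodes while the area tends to 0, matching lim a_n = 0 in the text.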
Here T_n consists of 3^n triangles with side length (1/2)^n, so the circumference l_n = 3 (3/2)^n tends to +∞ as n → +∞. Therefore, the Sierpinski triangle has area 0 but infinite circumference. This seemingly strange phenomenon is explained by the third property of fractals mentioned above. A common illustrative notion of fractal dimension is the so-called box dimension of a set: a scaling relationship in which the number of boxes required to cover a set scales with the side length of the boxes. Figure 4 shows that the Sierpinski triangle is covered by 4 boxes with side length 1/2. In many cases, the box dimension is equal to many other notions of dimension, so d is referred to as the set's fractal dimension. With this in mind, it appears more likely that the Sierpinski triangle has no area but is more than just a curve, as its dimension is a number between 1 and 2. There are other definitions of fractal dimension besides the box dimension, such as the Hausdorff dimension, which is more mathematically convenient. The fractal dimension is an intriguing topic, but it is not the subject of this work; more information can be found in [20] for those who are interested. To begin, we allow X to represent any set. By defining X abstractly, we can discuss a wide range of different sets. We will work with sets of sets and, to a smaller extent, sets of images later on. However, for readers who are unfamiliar with the concepts described below, it may be helpful to think of X as R or R^2.

Definition 4.1.
A metric space (X, d) is a set X together with a real-valued function d: X × X → R such that, for any x, y, z ∈ X, the following hold: (i) d(x, y) ≥ 0; (ii) d(x, y) = 0 if and only if x = y; (iii) d(x, y) = d(y, x); (iv) d(x, z) ≤ d(x, y) + d(y, z). Such a function d is called a metric.
It is important to note that a space does not have to have a single metric. For example, we may measure the distance between two points x = (x_1, x_2) and y = (y_1, y_2) in the space R^2 by d_1(x, y) = ((x_1 − y_1)^2 + (x_2 − y_2)^2)^{1/2}. This real-valued function clearly meets the requirements of Definition 4.1 (i)-(iv), indicating that it is a metric. On the other hand, we also have the so-called "taxicab metric" d_2(x, y) = |x_1 − y_1| + |x_2 − y_2|.
It satisfies the properties as well. As a result, both d_1 and d_2 are metrics on R^2. In its most general form, d_1 becomes d_e(x, y) = (Σ_{i=1}^{n} (x_i − y_i)^2)^{1/2}; this metric d_e will be referred to as the Euclidean metric.

Definition 4.2. A sequence {x_n} in a metric space (X, d) is said to converge to a point x ∈ X if, given any ε > 0, there exists an N ∈ N such that d(x_n, x) < ε whenever n > N.

In other words, as one proceeds further down a Cauchy sequence, the points become closer and closer together. They do not, however, need to converge to a point of the space X. Consider the metric space (Q, d), where Q denotes the rational numbers and d denotes the Euclidean metric. The number e is not in Q, yet the sequence a_n = (1 + 1/n)^n of rationals converges to e. As a result, the following definition is appropriate:

Definition 4.4. A metric space (X, d) is complete if every Cauchy sequence in X converges in X.

The concept of compact subsets is another crucial topic in the development of fractal theory. To grasp the concept of compact subsets, one must first recall what a subsequence is. Given a sequence {x_n}, a subsequence {x_{n_k}} can be created from {x_n} by deleting some of the elements while keeping the order of the remaining elements the same. For example, the sequence 1/2, 1/4, 1/6, ... is the even-denominator subsequence of the sequence {1/n}_{n=1}^∞ = 1, 1/2, 1/3, ....

Definition 4.5. Let E ⊂ X be a subset of the metric space (X, d). If every infinite sequence in E possesses a subsequence that converges to an element of E, then E is said to be compact.

The concept of compact subsets might be a bit odd to a reader who has not encountered it before. Therefore we will state some other definitions regarding subsets of metric spaces, together with a theorem, that will help with the intuition of compact subsets.
Definition 4.6. A subset E of a metric space (X, d) is open if, for each point p ∈ E, there is some r > 0 such that {q ∈ X : d(p, q) < r} is contained in E.

Definition 4.7. A subset E of a metric space (X, d) is closed if the complement of E, denoted E^c, is open. Remark: a closed set contains all its limit points.

The corresponding IFS is given by {R^2 : w_1, w_2, w_3}, where the contractive transformations w_1, w_2 and w_3 are contractions of ratio 1/2 mapping the initial triangle onto its three corner copies. The attractor T of this IFS is the Sierpinski triangle and is given by T = lim_{n→∞} W^∘n(T_0).
Theorem 4.2 states that the attractor of an IFS is unique: given an IFS, it does not matter what the initial set is; in the end, iterates of W will tend to its attractor. For the inverse question, how one should go about finding an IFS for a given attractor, the Collage Theorem proves useful.
Theorem 4.3 (The Collage Theorem, Barnsley). Let (X, d) be a complete metric space, and let L ∈ H(X) and ε ≥ 0 be given. Choose an IFS {X : w_1, w_2, ..., w_n} with contractivity factor 0 ≤ c < 1 such that h(L, ∪_{i=1}^{n} w_i(L)) ≤ ε, where h is the Hausdorff metric. Then h(L, A) ≤ ε/(1 − c), where A is the attractor of the IFS.
This result follows from the contraction mapping theorem: if f is a contraction mapping with contractivity factor c and unique fixed point x_f, and x ∈ X is such that d(x, f(x)) ≤ ε for a given ε ≥ 0, then d(x, x_f) ≤ ε/(1 − c). In Example 4.1 above we identified the IFS by analysing which transformations are needed for constructing the Sierpinski triangle.
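The deterministic IFS iteration W^∘n(S) can be sketched in a few lines. The three maps below are contractions of ratio 1/2 with illustrative vertex offsets (one concrete choice producing a Sierpinski-type attractor); by Theorem 4.2, any starting set tends to the same attractor.

```python
# One step of the deterministic IFS: W(S) = w1(S) ∪ w2(S) ∪ w3(S).
# The vertex offsets are an illustrative choice of the three contractions.
def ifs_step(points):
    offsets = [(0.0, 0.0), (0.5, 0.0), (0.25, 0.5)]
    return {(x / 2 + ox, y / 2 + oy)
            for (x, y) in points for (ox, oy) in offsets}

S = {(0.0, 0.0)}     # any starting set works; iterates tend to the attractor
for _ in range(5):
    S = ifs_step(S)
print(len(S))        # 3^5 = 243 distinct points approximating the attractor
```

Plotting S for larger iteration counts reproduces the familiar Sierpinski picture.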

Fractal Image Compression
In the physical world, a photo is simply a piece of paper with ink on it, but in the digital world, a photo is a collection of small logical units called pixels. The number of pixels in a digital image is referred to as its resolution: the resolution of an image is n × m if it is n pixels wide and m pixels high. A digital image with a resolution of n × m can be thought of as an n × m matrix, with each entry representing a pixel. The colour of a pixel is determined by the value of that entry, the pixel value. The number of distinct colours that a pixel can represent is determined by the number of bits per pixel used; for example, 8-bit colour allows for the display of 2^8 = 256 different colours. A bit (short for "binary digit") has a single binary value of 0 or 1, and can only answer "yes" or "no" questions. Each pixel in a grayscale digital image is frequently 8 bits wide, which means that a grayscale digital image with a resolution of 1024 × 1024 requires 1024 × 1024 × 8 ≈ 8.4 × 10^6 bits to store. The storage size required to save an image will be referred to as the image's memory size. Memory is typically measured in bytes, with one byte equalling eight bits.

Despite the fact that high-speed Internet access is spreading around the world and connection speeds are increasing, bandwidth is still limited. The time it takes to send a data file is determined not only by the connection speed but also by the file's memory size. As a result, sending a high-resolution image or a collection of images may still take some time. Compressing the images reduces the amount of data that must be transferred, as well as the time it takes. But how can images be compressed? The human eye's insensitivity to a variety of information losses is an important characteristic: an image can be altered in ways that the human eye is incapable of detecting, and if there is a lot of redundant data that does not affect the "big picture," the data can be greatly compressed. Lossy compression methods are those that lose some information during the compression process, whereas lossless compression methods do not lose any original data.
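The memory-size arithmetic above is easy to script; a minimal sketch (the helper name is ours):

```python
# Memory size of an uncompressed image: width × height × bits-per-pixel.
def image_memory_bits(width, height, bits_per_pixel=8):
    return width * height * bits_per_pixel

bits = image_memory_bits(1024, 1024)   # the 1024×1024 grayscale example
print(bits, "bits =", bits // 8, "bytes")
```

This reproduces the ≈ 8.4 × 10^6-bit figure quoted in the text.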
The basic idea behind fractal image compression is to store (also known as encode) images as a set of transformations. To be useful, there must be a method to decompress the image, i.e. to reconstruct it from the stored information. The decompression (or decoding) process entails repeatedly applying the transformations to an arbitrary starting image, resulting in an image that is either the original or, in most cases, very similar to it. Instead of pixel values, each image in Figure 7 can be saved as a collection of affine transformations. If the numbers in the transformations are of the commonly used data type "float," then each number has a memory size of 32 bits. Storing the tree as a collection of transformations, for example, only requires 4 transformations × 6 numbers × 32 bits per number = 768 bits. Storing it as a collection of pixels, on the other hand, requires 512 × 512 × 1 = 262,144 bits at a resolution of 512 × 512 (since the image is only black and white, we need just 1 bit per pixel). With this in mind, one might wonder whether it is possible to find a small number of affine transformations that represent any given image. The answer is simply no, because a natural image is not exactly self-similar; but it is also not completely devoid of self-similarity. As previously stated, when looking at an image, one may notice a portion of it that, when scaled and rotated, fits into another portion of the same image. Self-similarities of this type can be found in most images of faces, cars, mountains, and so on. To make use of these similarities, we must partition the image in some way and compare the bits and pieces.
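The storage comparison for the tree example can be reproduced directly; the constants below are the ones quoted in the text:

```python
# Storage comparison from the text: affine transforms vs raw pixels.
transform_bits = 4 * 6 * 32    # 4 transformations × 6 floats × 32 bits each
pixel_bits = 512 * 512 * 1     # 512×512 black-and-white image, 1 bit/pixel
ratio = pixel_bits / transform_bits
print(transform_bits, pixel_bits, ratio)
```

The roughly 340-fold difference is what makes the transformation representation attractive for exactly self-similar images.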

Metric Spaces of Images
To use the main results from the previous sections, we need a complete metric space. A mathematical model of a grayscale image is a function f: S → G, where the domain S represents points on the paper and the range G is the colour of the points. For the sake of simplicity, we will assume that S is a closed rectangular region in R^2 and that f(x, y) ∈ G exists for every (x, y) ∈ S, where G represents a closed interval of grayscale values ranging from black to white. Using the function f(x, y), we can generate a 3D graph where the height represents the grey level at each point (x, y) on the paper. In order to be able to say anything about the difference, or distance, between two images f and g, we define the metric
d*(f, g) = (∫∫_S |f(x, y) − g(x, y)|^2 dx dy)^{1/2}.
If we define F to be the space of real-valued square-integrable functions f: S → G, then F together with the metric d* forms a complete metric space [6]. Recall that W: F → F is a contraction if d*(W(f), W(g)) ≤ c · d*(f, g) for all f, g ∈ F, where c < 1 is called the contractivity factor of W. Then, by the contraction mapping theorem, there exists a unique fixed point f_W ∈ F satisfying W(f_W) = f_W.
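A discrete analogue of d* for images sampled on a grid replaces the integral by a sum over pixels; a sketch for small images given as lists of rows (the function name is ours):

```python
import math

# Discrete L2 distance between two equal-sized grayscale "images",
# each given as a list of rows of pixel values (a sketch of d*).
def d_star(f, g):
    return math.sqrt(sum((a - b) ** 2
                         for row_f, row_g in zip(f, g)
                         for a, b in zip(row_f, row_g)))

print(d_star([[0, 0], [0, 0]], [[3, 0], [0, 4]]))   # sqrt(9 + 16) = 5.0
```

For images on a uniform grid this differs from the integral form only by a constant pixel-area factor, which does not affect contractivity.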
We can state the Collage Theorem for grayscale images in the following way. Let f be a grayscale image and assume that W: F → F is a contraction with contractivity factor c such that d*(f, W(f)) ≤ ε. Then d*(f, f_W) ≤ ε/(1 − c), where f_W is the fixed point of W, and W^∘n(f_0) → f_W ≈ f for any initial image f_0.

The Fractal Block Coding Algorithm
The Collage Theorem is the underlying principle of fractal image compression: it guarantees that we can find a fractal representation of an image if we can find a suitable contraction mapping on F. In light of this, A. Jacquin presented the basic fractal block coding algorithm in 1992 [4]. The algorithm begins by segmenting the image into m non-overlapping range blocks R_i (1 ≤ i ≤ m). These range blocks can be thought of as functions R_i: R_i → G from the "spatial part" R_i ⊂ S of the range block to G. Then another partition of the image is made, this time into n non-overlapping domain blocks D_j (1 ≤ j ≤ n); in the same way, D_j: D_j → G are functions from the spatial part D_j of D_j to G. Each domain block has, in general, twice the side length of the range blocks. An illustrative example can be seen in Figure 8.
Given these two partitions, the fractal block coding algorithm searches, for each range block R_i, for the best match amongst the domain blocks. Since it is unlikely that all range blocks have a good match amongst the domain blocks, we are allowed to modify the domain blocks. This can be done by shifting the grayscale value of the entire domain block by a constant β and scaling each grayscale value by a constant α. In the end we have, for each range block R_i, a matching domain block D_{j(i)} together with the α_i and β_i values for the match. The list of triples ("index of domain block", α, β) forms the encoding of the image.
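Finding the best α and β for a given (range, domain) pair is an ordinary least-squares fit of r ≈ α·d + β over the pixel values; a sketch on flattened pixel lists (the helper name is ours):

```python
# Least-squares grayscale match: α, β minimizing Σ (α·d_k + β − r_k)²
# for flattened domain-block pixels d and range-block pixels r.
def best_alpha_beta(d, r):
    n = len(d)
    md, mr = sum(d) / n, sum(r) / n
    var = sum((x - md) ** 2 for x in d)
    if var == 0:
        return 0.0, mr          # flat domain block: only the shift matters
    alpha = sum((x - md) * (y - mr) for x, y in zip(d, r)) / var
    return alpha, mr - alpha * md

alpha, beta = best_alpha_beta([0, 1, 2, 3], [10, 12, 14, 16])
print(alpha, beta)   # exact fit here: r = 2·d + 10
```

In a full encoder this fit is run for every candidate domain block, and the pair with the smallest residual error is kept.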
Then W_i is a contraction mapping with respect to d* if |α_i| < 2. Proof: the contractivity factor of W_i is |α_i|/2, so if |α_i|/2 < 1, then W_i is a contraction. Since the R_i's (1 ≤ i ≤ m) form a partition of S, we can define W: F → F piecewise from the W_i. If we choose the α_i such that W_i is a contraction for all i ∈ {1, 2, ..., m}, then by the contraction mapping theorem, iteratively applying W_i to any starting image f recovers the fixed point f_{W_i}. If we now define f_W as the sum of all the f_{W_i}, we have the following theorem.
Then W^∘n(f) → f_W for any f ∈ F, and there exists a constant γ, depending on f, such that d*(W^∘n(f), f_W) ≤ γ c^n for all n.

Fractal Compression of Grayscale Digital Images
A grayscale digital image can be represented by a function f̃: {1, 2, ..., n} × {1, 2, ..., m} → {0, 1, ..., 255}, and, as mentioned earlier, when working with digital images (which are of fixed size, say n × m) we can think of them as matrices. We let [f̃_{i,j}], for i = 1, ..., n and j = 1, ..., m, be a matrix where each entry f̃_{i,j} = f̃(i, j). This gives us a way to compute the difference of two digital images with the so-called rms (root mean square) metric
rms(f̃, g̃) = ((1/(nm)) Σ_{i,j} |f̃_{i,j} − g̃_{i,j}|^2)^{1/2}.
For the sake of simplicity, we will consider square images or matrices, such that m = n for some n = 2^p. Each range and domain block now represents a submatrix, but the domain blocks are still twice as large as the range blocks. A domain block and a range block must be the same size in order to be compared; this is accomplished by averaging the domain block's pixel values, reducing its size to that of a range block. The fractal block coding algorithm for grayscale digital images can now be stated as follows:

Algorithm 1 Fractal Block Coding
1: for R_i (1 ≤ i ≤ m) do
2:   for D_j (1 ≤ j ≤ n) do
3:     Downscale D_j to match the size of R_i and call it D̂_j.
4:     Find the best α and β for the pair (R_i, D̂_j) using rms.
5:     Compute the error using rms; if the error is smaller than for any other domain block, remember the pair along with α and β.
6:   end for
7: end for

This is a basic fractal image compression algorithm. The method has many variations and improvements, but the basic idea remains the same. A minor change to improve the match between the range and domain blocks is to rotate and flip the domain block; we call this method the enhanced fractal block coding algorithm.
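The downscaling-by-averaging step (line 3 of Algorithm 1) can be sketched as follows, with each 2×2 group of domain-block pixels averaged into one range-sized pixel (the function name is ours):

```python
# Downscale a 2B×2B domain block to B×B by averaging each 2×2 pixel group.
def downscale(block):
    n = len(block)
    return [[(block[2 * i][2 * j] + block[2 * i][2 * j + 1] +
              block[2 * i + 1][2 * j] + block[2 * i + 1][2 * j + 1]) / 4
             for j in range(n // 2)]
            for i in range(n // 2)]

D = [[0, 0, 8, 8],
     [0, 0, 8, 8],
     [4, 4, 4, 4],
     [4, 4, 4, 4]]
print(downscale(D))   # [[0.0, 8.0], [4.0, 4.0]]
```

After this step the shrunken block D̂_j has the same dimensions as a range block and the two can be compared with rms directly.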
The image is partitioned in two ways, just like in the original algorithm. The first partition is made up of non-overlapping domain blocks, while the second is made up of non-overlapping range blocks (see Figure 6).
Then for each range block R_i we find the domain block, together with a transformation, that is closest to R_i. The transformations tested for each domain block include:
• Flipping;
• Rotating;
• Changing contrast and brightness.
Flipping is simply a reflection of the scaled-down domain block, and the rotations rotate the block by 0°, 90°, 180° or 270°; combined, this gives eight variants D̂_{j,k} of each downscaled domain block. The enhanced algorithm proceeds like Algorithm 1, except that for each variant we find the best α and β for the pair (R_i, D̂_{j,k}) using rms, compute the error using rms, and, if the error is smaller than for any other D̂_{j,k}, remember the pair (R_i, D_j) along with the rotation, flipping, α and β. The decoding for both algorithms is nearly identical, the only difference being the saved parameters. We begin by generating an arbitrary image of the same size as the original image, and then apply the transformations corresponding to the saved parameters iteratively, a fixed number of times. With the restriction |α| < 2, each transformation is a contraction, and then by Theorem 5.2 the decoded image will be close to the original.
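The eight variants of a domain block (four rotations, each optionally flipped) can be generated as follows; a sketch with helper names of our own:

```python
# Rotate a square block (list of rows) 90° clockwise.
def rotate90(b):
    return [list(row) for row in zip(*b[::-1])]

# All eight variants: 4 rotations × {unflipped, horizontally flipped}.
def variants(block):
    out, b = [], block
    for _ in range(4):
        out.append(b)
        out.append([row[::-1] for row in b])   # horizontal flip
        b = rotate90(b)
    return out

vs = variants([[1, 2], [3, 4]])
print(len(vs))   # 8 variants, all distinct for this block
```

The enhanced encoder runs the α/β fit against each of these eight candidates and keeps the one with the smallest rms error.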
When working with lossy image compression (such as fractal image compression), it is useful to be able to measure the quality of the decompressed image. One common method is to compute the peak signal-to-noise ratio (PSNR). To do so, we must first calculate the mean square error (MSE). For the original m × n image f and the lossy compressed image f*, the MSE is defined as
MSE = (1/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} |f(i, j) − f*(i, j)|^2.
Since we are working with 8-bit grayscale images, the largest pixel value is 255, and the PSNR is defined as
PSNR = 10 · log_10(255^2 / MSE).

It is important to note that the PSNR only measures the overall difference between two images' pixel values. In other words, it says nothing about how the human eye will perceive image quality. A high PSNR is considered better, as we aim to have a small mean square error between the original and the approximated image.
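The MSE and PSNR definitions translate directly to code; a sketch for small 8-bit images given as lists of rows (the function name is ours):

```python
import math

# PSNR = 10·log10(255² / MSE) for two equal-sized 8-bit grayscale images.
def psnr(f, g):
    n = len(f) * len(f[0])
    mse = sum((a - b) ** 2
              for rf, rg in zip(f, g)
              for a, b in zip(rf, rg)) / n
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

print(psnr([[255, 0], [255, 0]], [[250, 0], [255, 0]]))   # MSE = 6.25 → ≈ 40.2 dB
```

Identical images give an MSE of 0, for which PSNR is conventionally reported as infinite.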
Implementation and Results
The two fractal compression algorithms presented above were implemented in Python. To test and compare the algorithms, experiments on two different types of images were conducted. The image is a QR code with the message "Fractal image compression"; the original images can be seen in Figure 7. By Theorem 5.1 we need to restrict |α| < 2 to ensure a contraction. In practice, however, it is useful to restrict |α| further to reduce the number of iterations needed before reaching the fixed point. This may affect the quality of the image a little, but it guarantees that the sequence of images converges faster, so only a few decoding steps are necessary. The memory size of an image compressed by either of the presented algorithms does not depend on the original image per se; what determines the memory size is the number of range and domain blocks we choose, together with the choices of α and β. For the enhanced method we also need to take the rotation and flipping into account. The test images in Figure 9 both have resolution 512 × 512, so the memory size after compression will be the same when using the same range block size. Two different range block sizes were tested. In the first case they were 16 × 16, with a corresponding domain block size of 32 × 32; since the resolution of the test images is 512 × 512, we have a total of 256 domain blocks of size 32 × 32, so the index of a domain block requires 8 bits. In the second case we have range blocks of size 8 × 8 and domain blocks of size 16 × 16, which results in a total of 1024 domain blocks, so here 10 bits are required for the index. The α's were chosen from the 16 values determined by binary digits c_i ∈ {0, 1} for i = 1, 2, 3, 4; thus 4 bits were used to represent them. The β's, on the other hand, were chosen to take any integer value between −255 and 255, a total of 2^9 − 1 different values; therefore 9 bits were needed to represent the β's.
The image created by the standard fractal block coding algorithm is stored as 1024(8 + 4 + 9) = 21504 bits = 2688 bytes for the larger block size and 4096(10 + 4 + 9) = 94208 bits = 11776 bytes for the smaller block size. As mentioned, the enhanced method also needs to store information about the rotation and flipping of the domain block: 1 bit is used for the flipping and 2 bits for the rotation. This results in a total memory size of 1024(8 + 1 + 2 + 4 + 9) = 24576 bits = 3072 bytes for the larger block size and 4096(10 + 1 + 2 + 4 + 9) = 106496 bits = 13312 bytes for the smaller block size. By dividing the memory size of the original images (512 × 512 × 8 = 2097152 bits = 262144 bytes) by the memory size of the compressed images, we get the compression ratio; for example, the compression ratio of the standard fractal block coding algorithm with range blocks of size 16 × 16 is 2097152/21504 ≈ 97.5.

The original image is segmented into parts such that each part is nearly the same as a reduced copy of the original image. The union of all the segments is then close enough to the original image; thus images with global self-similarity are encoded with extreme efficiency. Unfortunately, a general image is not always globally self-similar. In such images, self-similarity exists only locally, amongst different small parts of the image. See the following image.
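The bit counts and compression ratios above can be reproduced with a few lines of bookkeeping (the helper name is ours; the per-block bit widths are the ones quoted in the text):

```python
# Bits per range block = domain-index bits + 4 (α) + 9 (β),
# plus 1 (flip) + 2 (rotation) for the enhanced method.
def compressed_bits(num_range_blocks, index_bits, enhanced=False):
    per_block = index_bits + 4 + 9 + (3 if enhanced else 0)
    return num_range_blocks * per_block

print(compressed_bits(1024, 8))            # 21504 bits (standard, 16×16 blocks)
print(compressed_bits(4096, 10, True))     # 106496 bits (enhanced, 8×8 blocks)
print((512 * 512 * 8) / compressed_bits(1024, 8))   # compression ratio ≈ 97.5
```

The same function covers all four configurations reported in the text by varying the block count, index width, and the enhanced flag.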
It has been observed that images in nature contain a considerable amount of affine redundancy: large segments of the image look like small segments of the same image. Large segments are known as domain blocks, small segments as range blocks. We can find an affine transformation (a combination of rotation, reflection, scaling and shifting) that transforms a domain block to a suitable range block. The parameters of the transformation constitute a fractal code: a range block is approximated by applying an affine transformation to a suitably chosen domain block. Since the mapping reduces the size of the domain block, it is a contractive mapping. Fractal image compression works as follows:
1. The image is partitioned into non-overlapping range blocks. In general, the partition of an image may have any shape (squares, rectangles, triangles, quadrilaterals or any polygon).
2. The same image is partitioned into overlapping domain blocks. Domain blocks are larger than the range blocks in order to maintain the contractive condition.
3. Finally, the image is encoded by using a suitable affine transformation which maps a domain block to a best-fitted range block.
4. To achieve decompression, exactly the opposite is done: the inverse affine transform is applied to recover the image. Usually 8 to 9 inverse iterations are applied to the encoded image to decode the image. The iteration starts with any arbitrary image; successive application of the affine map gives a sequence of images that ultimately converges to a fixed image (by the Banach fixed point theorem).

Conclusion:-
Since a QR code consists mainly of very dark or very bright grayscale colours, each "error" is quite noticeable, especially for the larger block sizes. Nevertheless, all four fractal images of the QR code in Figure 8 are successfully readable by a QR code scanner. Our purpose has been to give a digital version of the Banach fixed point theorem by introducing θ-contractive type mappings. These results are applications of fixed point theory in digital metric spaces and will be useful for digital topology and fixed point theory. In the future, we will also use fixed point theory to solve further problems in digital images. Finding the best match for each range block is the most computationally intensive part of the algorithms. Because the enhanced method includes 8 variants of each domain block, the encoding run-time is approximately 8 times longer with the current implementation. This is significant because encoding is already a lengthy procedure, especially for smaller block sizes. In theory, the enhanced method should produce better (or at least comparable) results than the standard fractal block algorithm; the PSNR results back this up, but the memory trade-off may not be worth it in most cases.
Fractal image compression, as previously stated, is a lossy compression method. JPEG (Joint Photographic Experts Group) is a more well-known lossy compression method. Even though the fractal block algorithm at times has a high compression ratio, the long encoding time is a significant disadvantage. Other fractal image compression methods attempt to address this shortcoming in order to make fractal compression a more competitive option. However, existing fractal methods are still regarded as time-consuming compression methods when compared to, say, JPEG.
Thus contractive mappings and the fixed point theorem are at the core of fractal image compression. An important aspect of fractal image decoding is resolution independence: we may compress a 128 × 128 image and decompress it to any size, say 64 × 64 or 256 × 256. Fractal image compression produces better reconstructed images than the JPEG (Joint Photographic Expert Group) technique.

1 2 4 (log 2 ,
in (a) and by 12 boxes with side length 1 b) In general if N boxes with side length ∈ is needed to cover the Sierpinski triangle then it takes 3N boxes with side length ∈ 2 to cover it.Since the number of boxes increase by 2 d , where d = log 3 we say that this number d is the box dimension of the Sierpinski triangle.
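The box-dimension computation can be checked numerically from the box counts quoted above (the variable names are ours):

```python
import math

# Box-counting slope: N(ε/2) = 3·N(ε) gives dimension d = log 3 / log 2.
counts = {0.5: 4, 0.25: 12}   # box counts at side lengths 1/2 and 1/4
d = math.log(counts[0.25] / counts[0.5]) / math.log(2)
print(d)   # log 3 / log 2 ≈ 1.585
```

The value lies strictly between 1 and 2, matching the claim that the Sierpinski triangle is more than a curve but has no area.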

Figure 4 :
Figure 4:-Illustration of the boxes it takes to cover the Sierpinski triangle.

Definition 4.3.
A sequence {x_n}_{n=1}^∞ in a metric space (X, d) is called a Cauchy sequence if, for any ε > 0, there is an N ∈ N such that d(x_n, x_m) < ε for all n, m > N.

Figure 6:-
Figure 6:-Example of 64 range blocks of size B × B and 16 domain blocks of size 2B × 2B. Let α_i and β_i denote the best grayscale scaling and shift, respectively, and let v_i: D_{j(i)} → R_i denote the unique affine map of the form v_i(x, y) = (1/2)(x, y) + (a, b), mapping D_{j(i)} onto R_i for 1 ≤ i ≤ m. Then we have the following theorem: Theorem 5.1. Let W_i: F → F, for 1 ≤ i ≤ m, be defined as (W_i f)(x, y) = α_i f(v_i^{-1}(x, y)) + β_i for (x, y) ∈ R_i.

Theorem 5.2.
Suppose c := max_i |α_i/2| < 1. Let f_{W_i} denote the unique fixed point of the contraction mapping W_i (i = 1, ..., m), and let f_W = Σ_{i=1}^{m} f_{W_i}.

Figure 7 :
Figure 7:-Original image of the QR-code.

Fig 8 :-
Fig 8:-Results of both fractal compression methods for the QR code with two different range block sizes. The PSNR, memory size and compression ratio are presented for each compressed image.
The rotations are by 0°, 90°, 180° or 270°; since we can flip or not flip the domain block and then rotate it in four different ways, we have eight variants of each domain block to compare each range block with. Then for each range block R_i we find the version of the downscaled domain block D̂_{j(i)}, together with a contrast scaling constant α and a brightness controlling constant β, which has the lowest root mean square error, i.e. we find the function and domain block w_i(D_{j(i)}) = α × rotate(flip(D̂_{j(i)})) + β that minimizes the rms with f̃ = R_i and g̃ = w_i(D_{j(i)}).