Research on variable rate fertilisation machine based on big data analysis

In order to improve the effect of intelligent fertilisation, this paper combines big data technology with actual fertilisation needs to perform data analysis and designs a variable-rate fertilisation mechanical structure that supports mixed fertilisation for multiple crops. From the perspective of precise fertilisation, the paper constructs a control system around the actual needs of variable fertilisation and adopts an adaptive federated filter structure to counteract the influence of equipment errors on the system's position estimate. The work addresses the problems of excessive local application and low fertiliser utilisation in traditional fertilisation, together with the resulting waste, environmental pollution, and poor quality of agricultural products. A dual-variable application test device was designed and manufactured, and fertiliser discharge tests were conducted on it. Finally, big data technology was used to analyze the experimental data of the variable fertilisation operation machinery. The experimental research shows that the fertilisation operation machinery designed in this paper meets actual needs.


Introduction
Variable fertilisation seeding technology mainly involves two levels: the fertilisation/seeding strategy and the variable fertilisation seeding execution system. Fertilisation/seeding strategies mainly include three methods: strategies based on prescription maps, strategies based on real-time sensors, and strategies based on expert decision-making systems.
A prescription map contains the target rate information for the ecological zones within an area. Through the operator panel, these data are used to dynamically adjust the target rate while working in each management zone; on the map, a rate of '0' in the prescription is shown in black. Real-time sensor strategies rely on sensor networks for intelligent automation and edge-based analysis: a real-time monitoring programme perceives field conditions and transfers the information to a data centre, which must analyze and respond to the acquired information in real time. An expert decision-making system is an intelligent computer programme that replicates the judgement and actions of a person or organisation with expert knowledge and experience in a particular subject.
The variable fertilisation seeding execution system mainly involves speed measurement, positioning, the structure of the fertilising seeder, and control methods. At present, precision agriculture touches many aspects of agricultural production, including variable fertilisation, precision seeding, crop pest control, weed control, and water management. Precision seeding and precision (variable) fertilisation technology are important parts of precision agriculture (Gondchawar and Kawitkar 2016). Precision seeding places seed at a specified spacing and depth: the machine opens the soil, plants the seed in rows, and covers it again through a sequence of operations; precision seeders are also available for sowing bundles of seeds for indoor seed starting. Precision seeding technology can improve the quality of seeding, reduce the labour intensity of farmers, and at the same time improve the efficiency of agricultural production and increase agricultural output and income (Suma et al. 2017). Likewise, precise fertilisation can save fertiliser, reduce environmental pollution, and improve crop quality and yield. Therefore, studying precision seeding and precision fertilisation technology can reduce agricultural production costs and improve farmers' production conditions, which has great social and ecological benefits and is of strategic significance in modern agriculture.
The precision agriculture process consists of three parts: the collection of various kinds of information in the farmland, decision-making and management based on the collected information, and variable operations. The specific implementation is generally as follows. First, the agricultural machinery is equipped with a GPS system and sensors that measure basic information about the farmland. While the machinery is working, the positioning information and other farmland information are stored on a data storage card; the values are then read on a field computer and processed to form a prescription map of farmland information (Ray 2017). Variable fertilisation is an important part of precision agriculture: it is a precision farming practice in which the amount of fertiliser applied depends on the actual position that requires fertilising, or on the characteristics of that position. The soil is sampled at satellite-positioned points in the field and tested, and the content of nitrogen, phosphorus, potassium, and other trace elements in the soil nutrients is determined from the soil test data. With the other environmental conditions of the farmland as a reference, a farmland fertilisation management prescription map is generated, and the agricultural machinery performs variable operations according to this map to realise the precise input of chemical fertilisers and the precise management of agricultural production. Variable operation is therefore a key step in realising precision agriculture.
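As a minimal illustration of the prescription-map workflow described above, the sketch below maps a GPS-derived field position onto a grid cell and looks up that cell's target rate. All names, the grid layout, and the cell size are hypothetical placeholders, not details of the machinery in this paper; a rate of 0 corresponds to a 'no application' (black) cell on the map.

```python
# Hypothetical sketch: prescription-map cell lookup for a field position.
def grid_cell(x_m, y_m, cell_size_m):
    """Return the (row, col) grid index for a position in field coordinates."""
    return int(y_m // cell_size_m), int(x_m // cell_size_m)

def target_rate(prescription, x_m, y_m, cell_size_m=10.0):
    """Look up the fertiliser rate (kg/ha) for the cell containing (x, y).

    A rate of 0 corresponds to a black 'no application' cell on the map.
    """
    row, col = grid_cell(x_m, y_m, cell_size_m)
    return prescription[row][col]

# Toy 2x2 prescription map with 10 m cells; rates in kg/ha.
rx_map = [[120.0, 0.0],
          [80.0, 100.0]]
rate = target_rate(rx_map, x_m=14.0, y_m=3.0)  # lands in cell (0, 1)
```

In a real system the (x, y) position would come from the GPS receiver and the map from the data storage card; here both are stubbed with literals.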
Research on a mixed variable fertilisation machine can not only ensure accurate variable fertilisation but also accurately proportion the three main nutrients. In such research, the three main nutrients are first proportioned as needed according to the soil characteristics of the farmland and the differences between crops, then mixed evenly and applied to the field. In this way, neither waste of chemical fertiliser nor nutrient imbalance in the field is caused, while efficiency is greatly increased and the requirements of precision agriculture are met.

Related work
The sensor-based fertilisation strategy is mainly based on real-time detection of crop growth information and soil nutrient information to determine the nutritional status of crops and the fertility of the land, and thereby adjust the amount of fertilisation (Roopaei et al. 2017). At present, this method is still being explored and improved. It requires sensors that are low in cost and high in density, precision, and real-time performance. The main technology currently used is spectroscopy, which responds significantly to specific elements of soil and vegetation in specific bands and can therefore be used to evaluate soil nutrient conditions and crop growth status, and to guide fertilisation (Steenwerth et al. 2014). Rameshaiah et al. (2015) used 1300-2500 nm near-infrared spectroscopy to determine total nitrogen, organic matter, and total potassium content in the soil. Newell and Taylor (2018) used 400-2400 nm near-infrared spectroscopy to determine total nitrogen and total phosphorus content in the soil. Channe et al. (2015) used 300-1000 nm near-infrared light to determine the chlorophyll and nitrogen content of crops. Fertilisation strategies based on spectral sensors take different forms; there are two main types: real-time on-board sensor fertilisation and fertilisation based on remote sensing (RS) technology. Scherer and Verburg (2017) studied real-time variable fertilisation based on crop growth; grounded in practice and fully accounting for the current state of intelligent agriculture, they developed a real-time automatic variable fertilisation control system with an integrated data controller and optical sensor. Liu et al. (2018) studied a real-time automatic variable fertilisation system for winter wheat based on PLC control and remote sensing technology (high-altitude remote sensing). The system uses a normalised difference vegetation index instrument to monitor the nutritional status of crops in real time, and the spectral feedback is combined with the travel speed of the fertiliser applicator to adjust the amount of fertiliser, achieving real-time variable fertilisation. The two fertilisation strategies based on spectral information each have advantages and disadvantages. Remote sensing is usually affected by weather conditions, which hinders the promotion of this technology (Galiveeti et al. 2021). Vehicle-mounted sensors (also known as near-Earth remote sensing) can accurately detect the nutritional status of vegetation in small areas, compensating for high-altitude remote sensing's sensitivity to weather, but data collection over large areas is more time-consuming (Saravanan et al. 2015; Kimaro et al. 2016).
GPS speed measurement is divided into high-precision GPS and ordinary-precision GPS. High-precision GPS is accurate but expensive; an ordinary-precision GPS module is cheap, but its speed measurement error is large and its anti-interference ability is weak (Zougmoré et al. 2018). Terdoo and Adekola (2014) showed that there are differences between different fertiliser spreaders using the same disc: in the horizontal spreading mode, the coefficient of variation of the spreading uniformity is about 20%. Thakur and Uphoff (2017) developed a variable fertiliser applicator suitable for paddy field operations. It uses a DC stepper motor to drive the fertiliser discharging device and implements variable fertilisation according to the on-board computer and the operation prescription map (Slagter 2013; Manogaran et al. 2021). The experimental results show that the accuracy rate for nitrogen and potash fertiliser in this system is 4.6%, and for phosphate fertiliser about 2.3%. In research on real-time variable fertilisation based on light sensing, Chae and Cho (2016) used a light-sensing real-time variable fertiliser applicator called GreenSeeker to detect and calculate the Normalised Difference Vegetation Index (NDVI) using light sources in the 660 nm and 780 nm bands. The entire operating system uses a CAN bus to connect the various signal acquisition sensors and operation controllers. Aryal et al. (2020) showed that the CAN bus can be used in a distributed weed control system. Aliev et al. (2018) developed a modular, multi-node field information acquisition system based on a CAN bus communication network, in which different nodes collect GPS location information, ground speed information, and simulated spraying data. Chandra et al. (2016) developed a general, open, and configurable prototype platform for the automatic control of agricultural machinery based on the CAN bus. Faling et al. (2018) developed an agricultural machinery operation control system based on the CAN bus, which includes an operation navigation control node, a tractor three-point suspension control node, and an operation monitoring computer; the test results show that each control node connected to the CAN bus can perform its control tasks simultaneously. Existing graph sampling algorithms, by contrast, still face problems of accuracy, speed, and inconvenient system control. By combining GPS positioning with the NS algorithm used here, the locations of nodes and the shortest paths between them can be found easily.

Basic properties of the graph model of variable fertilisation operation machinery based on big data
The research object of this paper is graphs. Understanding and mastering the basic properties of graphs helps us better understand the causes of specific phenomena and find appropriate solutions to the problems in the algorithms. In view of this, this section gives the basic properties of the graphs relevant to this research.
The average path length of a graph is the mean number of edges along the shortest paths over all available pairs of vertices; it is a metric used to evaluate the efficiency of information or transport flow on a network. Letting $d(v_i, v_j)$ denote the length of the shortest path between vertices $v_i$ and $v_j$, the average path length of a graph with $n$ vertices is

$$L = \frac{1}{n(n-1)} \sum_{i \neq j} d(v_i, v_j)$$

that is, we sum the shortest-path lengths between all pairs of vertices and divide by the number of available pairs among the $n$ vertices. Recall that the logarithm is the inverse of the exponential function with the same base (Manogaran et al. 2018).
For small-world graphs, the average path length grows proportionally to the logarithm of the total number of nodes $N$ (Alipio et al. 2019):

$$L \propto \log N$$

The clustering coefficient of a node $v_i$ with degree $k_i$ is defined as

$$C(v_i) = \frac{2E_i}{k_i(k_i - 1)}$$

where $E_i$ is the number of edges that actually exist among the neighbours of $v_i$. The average clustering coefficient is then

$$\bar{C} = \frac{1}{N} \sum_{i=1}^{N} C(v_i)$$

In graph theory, a clustering coefficient measures the degree to which the nodes of a graph tend to cluster together: the global version gives an aggregate indication of clustering in the network, whilst the local version indicates the embeddedness of individual nodes. The average clustering coefficient is a parameter describing the degree of clustering between nodes in the graph. It is a number between 0 and 1; the closer it is to 1, the more the neighbours of a node tend to 'group together', and it thus describes the tightness of the connections between the nodes in the graph. Quantitative analysis has shown that most complex graphs have scale-free characteristics, which is one of the important research results in the field of graph modelling.
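The two quantities defined above can be computed directly on a small adjacency-list graph. The following is a generic textbook-style sketch (not the paper's code), using breadth-first search for shortest paths:

```python
from collections import deque

def shortest_paths(adj, src):
    """BFS distances from src in an unweighted, undirected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def average_path_length(adj):
    """Mean of d(v_i, v_j) over all ordered pairs of distinct, reachable vertices."""
    total, pairs = 0, 0
    for u in adj:
        for v, dv in shortest_paths(adj, u).items():
            if v != u:
                total += dv
                pairs += 1
    return total / pairs

def clustering_coefficient(adj, v):
    """C(v) = 2 * (edges among neighbours of v) / (k_v * (k_v - 1))."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:] if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def average_clustering(adj):
    return sum(clustering_coefficient(adj, v) for v in adj) / len(adj)

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3 attached to 2.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

On this toy graph the triangle's corners have clustering coefficient 1, the hub 2 has 1/3, and the pendant vertex has 0.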
The study of scale-free characteristics helps explain why different people occupy different positions in social networks. Everything from real protein networks to the Internet eventually converges to a graph model that obeys a power-law distribution. A network is said to be scale-free if its properties are independent of its scale, that is, of the number of nodes: even as the network grows, its basic structure stays unchanged. A power law is a functional relationship between two quantities in which a relative change in one leads to a proportional relative change in the other, regardless of their initial sizes: one quantity varies as a power of the other. It describes the phenomenon in which a small set of items clusters at the top (or bottom) of a range and accounts for the bulk of the total; in other words, small occurrences are frequent, but large instances are rare.
Research points out that the scale-free property means that the degrees of the nodes in the graph obey a power-law distribution; that is, the probability that a randomly selected node has degree $d$ equal to a natural number $k$ is

$$P(d = k) \propto k^{-\gamma}$$

where $\gamma$ is the power-law exponent. Because the node degrees obey a power-law distribution, there are large differences between nodes. The research in this paper finds that this difference is the root cause of biased sampling in most existing graph sampling algorithms: sampling bias occurs when the data points gathered to estimate the distribution of a random variable are chosen improperly and, for non-random reasons, do not reflect the true distribution. In the follow-up work of this article, it is precisely the full recognition of this difference between nodes that motivates the design and implementation of the hierarchical mechanism based on the approximate degree distribution of the nodes.
Node centrality is one of the important topics in the study of complex graphs: it reveals the position of a node in the graph. For example, in social networks, node centrality can help locate the most influential people. Degree centrality is one of the core, and simplest, metrics of node centrality: a node's degree centrality is simply the number of connections it has, and a node with higher degree occupies a more important position in the graph. Because nodes with high degree often also score highly on other metrics, this is a useful first measure. Formally, it is defined as (Verschuuren 2018):

$$C_D(v) = \frac{k}{N - 1}$$

where $N$ is the number of nodes in the graph and $k$ is the degree of node $v$.
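The normalised degree centrality above is straightforward to compute; the sketch below (a generic illustration on a toy star graph, not code from the paper) returns it for every node:

```python
def degree_centrality(adj):
    """Normalised degree centrality: C_D(v) = k_v / (N - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

# Star graph: hub 0 connected to 1, 2, 3.
g = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
cd = degree_centrality(g)
```

The hub of the star attains the maximum value 1, while each leaf scores 1/3, reflecting the large node-to-node differences the text discusses.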
Both node centrality and the scale-free property reveal large differences between the nodes of a graph. Recall that a node's degree is simply the number of connections (i.e. edges) it possesses; higher scores on a centrality measure indicate that the node is more central. As stated above, each centrality metric reflects a distinct kind of significance, and the degree centrality of a person indicates how many relationships he or she has.
In the field of graph data analysis, graph sampling algorithms have long attracted the attention of many scholars. Data mining practice has proved that sampling is an efficient data reduction method: it can reduce a huge data collection to a much smaller representative dataset. Other data reduction methods include clustering and principal component analysis. Upon investigation, existing graph sampling algorithms can be roughly divided into three categories:

Among the three types of graph sampling algorithms, the most concise and intuitive idea is the graph sampling algorithm based on a point selection strategy, of which the most representative is NS (Node Sampling). The NS algorithm has roughly three stages. First, it selects a point uniformly at random and puts it into the sample point set V_s. It then iterates this sampling step until the number of points in the sample reaches the sampling size. Finally, it derives the induced subgraph: all edges whose source and target points both lie in V_s are extracted into the sample edge set E_s, giving the sampling result S = (V_s, E_s).
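The three NS stages can be sketched in a few lines. This is a generic implementation of node sampling with subgraph induction (not the paper's code); picking all vertices at once without replacement is equivalent to iterating the uniform selection step:

```python
import random

def node_sampling(adj, sample_size, seed=None):
    """NS: uniformly sample vertices, then induce the subgraph on them.

    Returns (V_s, E_s), where E_s contains every original edge whose
    endpoints both fall inside V_s (each undirected edge stored once).
    """
    rng = random.Random(seed)
    v_s = set(rng.sample(list(adj), sample_size))
    e_s = {(u, v) for u in v_s for v in adj[u] if v in v_s and u < v}
    return v_s, e_s

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
vs, es = node_sampling(g, 3, seed=42)
```

Every sampled edge is guaranteed to have both endpoints in the sample point set, which is exactly the induced-subgraph property.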
The idea of the NS algorithm is simple and intuitive, but when sampling it cannot retain many of the topological properties of the graph; in particular, it cannot retain the scale-free characteristic of the original graph well. The main reason is that, in a sampling pass, the NS algorithm tends to extract more low-degree nodes, which makes the algorithm's error larger.
The ES (Edge Sampling) and ES-i algorithms are typical representatives of random graph sampling algorithms based on an edge selection strategy. Below we introduce these two algorithms separately and point out their problems.
The core processing flow of the ES algorithm is as follows. First, ES selects an edge uniformly at random, adds the edge to the sample edge set, and then adds the two end points of the selected edge to the sample point set; this process is iterated until the number of points reaches the sampling size, at which point the algorithm terminates and the sampling result is obtained. This algorithm has been shown to over-extract high-degree nodes in a sampling pass, because the number of edges incident to a high-degree node is relatively large. This leads to poor sampling performance for the ES algorithm.
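The ES flow above can be sketched as follows (a generic illustration, not the paper's code). Note that because a whole edge is kept at each step, the sample point set can overshoot the target size by one:

```python
import random

def edge_sampling(edges, sample_size, seed=None):
    """ES: repeatedly pick an edge uniformly at random and keep it together
    with both of its endpoints, until the sample point set reaches
    sample_size (possibly overshooting by one vertex)."""
    rng = random.Random(seed)
    v_s, e_s = set(), set()
    while len(v_s) < sample_size:
        u, v = rng.choice(edges)
        e_s.add((u, v))
        v_s.update((u, v))
    return v_s, e_s

edges = [(0, 1), (0, 2), (1, 2), (2, 3)]
vs, es = edge_sampling(edges, 3, seed=1)
```

Because each pick is edge-uniform, a vertex of degree k is k times as likely to enter the sample per step as a degree-1 vertex, which is exactly the high-degree bias the text describes.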
The ES-i algorithm adds a subgraph-induction step on top of the ES algorithm. A large number of experiments have shown that its sampling performance is far superior to that of the ES algorithm, which also illustrates the importance of induced subgraphs in sampling algorithms; however, the ES-i algorithm still over-samples high-degree nodes.
Analyzing the ES and ES-i algorithms, we see that ES-i performs better than ES mainly because it uses induced subgraphs, which improve the connectivity of the sampled network. Sampling algorithms based on point selection and edge selection strategies are simple, which makes them easy to extend from static graphs to streaming graphs.
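The extra step that distinguishes ES-i from plain ES is just graph induction on the sampled point set; a minimal sketch (generic code, not from the paper):

```python
def induce_subgraph(adj, v_s):
    """Graph induction: keep every original edge whose endpoints are both
    in V_s (each undirected edge stored once, smaller endpoint first).
    This is the step ES-i adds on top of plain ES."""
    return {(u, v) for u in v_s for v in adj[u] if v in v_s and u < v}

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
e_s = induce_subgraph(g, {0, 1, 2})
```

Induction recovers edges between sampled vertices that the random edge picks happened to miss, which is why the induced sample is better connected.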
Sampling algorithms based on graph topology are the most complicated and time-complex of the three categories; most are based on random walks on the graph, and the FFS (Forest Fire Sampling) algorithm is a typical representative. First, FFS selects a seed point uniformly at random (the seed may also be taken sequentially from a queue), and the graph's breadth-first traversal is then used to expand from the seed: the breadth-first search (BFS) method explores the network outward in breadth-wise order, using a queue to remember the vertices from which to continue the search.
Next, for each visited point, a random number obeying a geometric distribution determines how many of its neighbours (and the corresponding edges) are added to the sample; these selected points then serve as new seeds, and the above steps are repeated until the number of points meets the sampling scale. Finally, the sampling result set is obtained through the subgraph-induction step. A large number of experiments have shown that, in the FFS algorithm, the optimal empirical value for the mean number of neighbour points selected is 2.33 edges per point. Although FFS is currently considered one of the best sampling algorithms, it still over-samples high-degree nodes.
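The burning process just described can be sketched as follows. This is a generic, simplified Forest Fire Sampling illustration (not the paper's code); with burn probability p = 0.7 the geometric draw has mean p/(1-p) ≈ 2.33, matching the empirical optimum quoted above.

```python
import random
from collections import deque

def forest_fire_sampling(adj, sample_size, p=0.7, seed=None):
    """Simplified FFS: burn outward from a random seed; at each burnt node,
    draw a geometrically distributed number of unvisited neighbours to burn
    next, restarting from a fresh seed if the fire dies out."""
    rng = random.Random(seed)
    v_s = set()
    nodes = list(adj)
    while len(v_s) < sample_size:
        seed_node = rng.choice(nodes)
        v_s.add(seed_node)
        q = deque([seed_node])
        while q and len(v_s) < sample_size:
            u = q.popleft()
            # Geometric draw: number of successes before the first failure.
            n_burn = 0
            while rng.random() < p:
                n_burn += 1
            fresh = [v for v in adj[u] if v not in v_s]
            for v in rng.sample(fresh, min(n_burn, len(fresh))):
                v_s.add(v)
                q.append(v)
    # Final step: induce the subgraph on the sampled vertices.
    e_s = {(u, v) for u in v_s for v in adj[u] if v in v_s and u < v}
    return v_s, e_s

g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
vs, es = forest_fire_sampling(g, 3, seed=7)
```

High-degree nodes offer more unburnt neighbours at every step, which is the mechanism behind FFS's over-sampling of high-degree nodes.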
Analyzing the FFS algorithm, its time complexity is much higher than that of the random sampling algorithms based on point and edge selection strategies, and algorithms based on graph topology are difficult to extend to streaming graphs. Such topology-based methods are nevertheless commonly employed to obtain good approximate solutions to problems that are hard to address by other approaches: finding an exact answer may be too computationally demanding, whereas a near-optimal sample often suffices.
The goal of sampling is to obtain a representative subgraph that retains the topological properties of the original graph. Therefore, when evaluating sampling results, the similarity between the topological attributes of the sampled subgraph and those of the original graph must be examined; that is, the sample should satisfy

$$u(S) \approx u(G)$$

where $u(S)$ is the topological attribute value of the sampling result graph and $u(G)$ that of the original graph. Common topological attributes and their calculation methods are as follows. The average degree is

$$\bar{k} = \frac{1}{N} \sum_{v \in G} k(v)$$

where $k(v)$ is the degree of node $v$. The density is

$$\text{density} = \frac{\text{link}_{real}}{\text{link}_{possible}}$$

where $\text{link}_{real}$ is the number of edges that actually exist in the graph and $\text{link}_{possible}$ is the maximum number of edges that could exist. For directed graphs, $\text{link}_{possible} = N(N-1)$; for undirected graphs, $\text{link}_{possible} = N(N-1)/2$. From these definitions it follows that, when the number of nodes in a graph is fixed, the density is proportional to $\text{link}_{real}$.
The diameter of the graph is the maximum, over all pairs of vertices, of the shortest-path length between them:

$$\text{diameter} = \max_{i,j} d(v_i, v_j)$$
The average clustering coefficient: letting $k_{v_i}$ denote the degree of node $v_i$ and $E_i$ the number of edges among its neighbours, the clustering coefficient of $v_i$ is

$$C(v_i) = \frac{2E_i}{k_{v_i}(k_{v_i} - 1)}$$

and the average clustering coefficient is

$$\bar{C} = \frac{1}{N} \sum_{i=1}^{N} C(v_i)$$

The degree distribution of nodes is defined as the proportion of nodes with degree $k$:

$$P(k) = \frac{n_k}{N}$$

where $n_k$ is the number of nodes of degree $k$. The clustering coefficient distribution is defined analogously over the node clustering coefficients; the clustering coefficient of a point expresses the number of triangles centred on that node $v$. Since nodes in graphs tend to cluster, the clustering coefficient is a very important metric.
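The attribute comparison described above reduces to computing each metric on both graphs and measuring the deviation. A minimal sketch for the average degree and density of an undirected graph, plus a relative-error score (generic code, not the paper's evaluation harness):

```python
def average_degree(adj):
    """Mean node degree over the graph."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def density_undirected(adj):
    """density = link_real / link_possible, with link_possible = N(N-1)/2."""
    n = len(adj)
    link_real = sum(len(nbrs) for nbrs in adj.values()) // 2
    return link_real / (n * (n - 1) / 2)

def attribute_error(u_sample, u_original):
    """Relative deviation |u(S) - u(G)| / u(G) used to score a sample."""
    return abs(u_sample - u_original) / u_original

# Triangle plus a pendant vertex: degrees 2, 2, 3, 1.
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

A good sampling algorithm keeps `attribute_error` small simultaneously across all the attributes listed in this section.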

Mechanical and system design of variable rate fertiliser applicator
Variable rate fertilisation (VRF) has the potential to enhance fertiliser use efficiency while lowering environmental impact. The goal of VRF is to apply specific, precise amounts of fertiliser at different sites in order to meet site-specific management needs; it is the application of one or more nutrients across a field at rates calculated to influence a particular outcome. Increasing yield is the most appealing outcome, since farm revenue correlates directly with production. As shown in the structural design in Figure 1, the fertiliser discharging device is mainly composed of a fertiliser box, a driving turntable, a fertiliser guide trough, and a control system. The design uses a horizontal scraper structure for fertiliser metering and the plough-bottom method for fertiliser placement. The working principle is as follows: motor 5 provides output torque to reducer 4, which after deceleration is connected to drive turntable 7 and drives it to rotate. The fertiliser above the turntable rotates under this torque and, when it encounters scraper 8, accumulates on the scraper; once it passes the outer edge of the turntable, the fertiliser falls into fertiliser guide opening 3 and then into the fertiliser furrow through fertiliser guide tube 2 and fertiliser discharge opening 1.
A suitable balance between the reliability of the mechanical motion and the responsiveness of the variable rate adjustment must be considered when building the VRF system; the responsiveness is measured by the reaction time between a change in the target rate and the change in the actual fertiliser application rate. This system loads a prescription map to realise fertilisation control, and the working principle of the control system is shown in Figure 2. After the fertiliser applicator enters the working plot, the main program first selects the plot based on the prescription map information stored on the CF card and the location information fed back by the GPS receiver. After the plot is selected, the system enters the fertilisation state. Having read the speed sensor signal and the GPS position, the system judges the current position and speed of the applicator and, with reference to the corrected working distance, determines the operating-unit grid cell in which the applicator currently sits. (A GPS receiver determines its location by precisely timing signals transmitted by GPS satellites orbiting the Earth; every satellite continually broadcasts the time each signal was sent together with the satellite's position at the moment of transmission.) The system then queries the fertilisation amount for that grid cell from the prescription map. Finally, the system calculates the required stepper motor revolutions through the fertilisation formula, converts the revolutions into variable-frequency pulses to drive the stepper motor, and controls the fertilising shaft to achieve variable fertilisation.
Figure 2. Block diagram of the control system working principle.
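The final step of the control loop, converting a prescription-map rate into a stepper pulse frequency, can be sketched as below. All machine constants here (working width, discharge per revolution, steps per revolution) are hypothetical placeholders, not values from this paper; the paper's actual fertilisation formula may differ.

```python
# Hedged sketch of the rate-to-pulses calculation; constants are assumed.
def motor_pulse_frequency(rate_kg_ha, speed_m_s,
                          working_width_m=2.0,   # hypothetical
                          kg_per_rev=0.05,       # hypothetical discharge/rev
                          steps_per_rev=200):    # hypothetical motor steps
    """Convert a prescription rate into a stepper pulse frequency (Hz).

    Mass flow (kg/s) = rate (kg/m^2) * width (m) * speed (m/s); dividing by
    the discharge per revolution gives rev/s, and steps/rev gives pulses/s.
    """
    rate_kg_m2 = rate_kg_ha / 10_000.0
    mass_flow = rate_kg_m2 * working_width_m * speed_m_s  # kg/s
    revs_per_s = mass_flow / kg_per_rev
    return revs_per_s * steps_per_rev

f = motor_pulse_frequency(rate_kg_ha=100.0, speed_m_s=1.0)
```

Because the pulse frequency scales linearly with both the target rate and the travel speed, the controller automatically compensates when the applicator speeds up or slows down.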

The differential positioning system (shown in Figure 3) is used to eliminate various uncertain factors that cause errors. Differential GPS (DGPS) is a more sophisticated form of GPS navigation that gives more precise positions than ordinary GPS: it removes the error components common to nearby receivers' satellite range measurements, allowing very precise position determination. A receiver placed at a known position compares the position computed from the received satellite signals with its true position to obtain differential correction data. The base station sends these data to nearby receivers at unknown locations, which use them to obtain corrected positions. Differential positioning can cover a range of more than ten kilometres through ground base stations, or more than 1,000 kilometres through advanced networks of multiple base stations combined with communication satellites. The accuracy of a differential positioning system is generally a few metres in the horizontal plane, and the closer the receiver is to the base station, the higher the accuracy.
Dead reckoning is one of the relative positioning technologies: it judges the location of the vehicle by accumulating the distances travelled along the vehicle's direction of movement when the initial conditions are known. For an object moving in two-dimensional space, starting from a known position, the position at the next moment can be calculated by accumulating onto the initial position the displacement vectors given by the heading, speed, and travel time at all previous moments. In dead reckoning, data, generally from sensing devices in the case of pedestrian dead reckoning (PDR), are used to update an entity's position and orientation given a starting position, speed, and direction. The term is an old nautical one: the navigator estimates the vehicle's present position from its direction of motion, its speed, and how long that motion has been sustained, correcting for drift estimated from measurements.
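A single dead-reckoning update is just vector accumulation, as the sketch below shows (a generic illustration of the principle, not this system's implementation):

```python
import math

def dead_reckon(x, y, heading_rad, speed_m_s, dt_s):
    """One dead-reckoning step: advance the position by the displacement
    accumulated over dt seconds along the current heading."""
    return (x + speed_m_s * dt_s * math.cos(heading_rad),
            y + speed_m_s * dt_s * math.sin(heading_rad))

# Travel along heading 0 at 2 m/s for 3 s, starting from the origin.
pos = dead_reckon(0.0, 0.0, 0.0, 2.0, 3.0)
```

Repeating this step with fresh heading and speed measurements at each instant reconstructs the whole track; since each step adds its own sensor error, the position drift grows over time, which is why the adaptive federated filter described next is needed.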

R E T R A C T E D
The dead reckoning system is in effect a simplified inertial navigation system, relying on inertial sensors to pick up the vehicle's displacement and heading information. It provides high-accuracy positioning by computing the vehicle state from multiple sensor inputs (gyroscope, accelerometre, speed pulse, etc.), even when GPS/GNSS-only positioning is degraded or unavailable. The dead reckoning approach is frequently used in vehicle infotainment devices. The principle is shown in Figure 4.
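The vector-accumulation principle of dead reckoning described above can be sketched as follows. This is a minimal illustrative model, not the paper's system: it assumes piecewise-constant heading and speed samples in a planar frame.

```python
import math

# Minimal dead-reckoning update (illustrative): accumulate displacement
# vectors from heading and speed samples, starting from a known position.

def dead_reckon(start, samples):
    """start: (x, y) in metres; samples: list of (heading_rad, speed_mps, dt_s).

    Heading is measured from the +x axis, counter-clockwise.
    """
    x, y = start
    for heading, speed, dt in samples:
        # Each step's displacement vector is added to the running position.
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return (x, y)

# Drive east at 2 m/s for 10 s, then north at 1 m/s for 5 s.
pos = dead_reckon((0.0, 0.0), [(0.0, 2.0, 10.0), (math.pi / 2, 1.0, 5.0)])
print(pos)  # ≈ (20.0, 5.0)
```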
An adaptive federated filter structure is adopted to reduce the influence of equipment errors on the system's position estimate and thereby improve the overall filtering accuracy of the system. The filter adopted is shown in Figure 5.
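The fusion step at the heart of a federated filter can be sketched in the scalar case. This is a simplified illustration of information-weighted fusion, not the paper's filter design: the sub-filter sources and the adaptive rule for the sharing coefficients are assumptions.

```python
# Sketch of the information-fusion step of a federated filter (scalar case,
# illustrative). Each local filter (e.g. a GPS-based and a dead-reckoning-
# based sub-filter) produces an estimate x_i with error variance P_i; the
# master filter fuses them by information weighting. In an adaptive federated
# filter, the information-sharing coefficients beta_i are adjusted from the
# local variances so that less reliable sub-filters contribute less.

def federated_fuse(estimates):
    """estimates: list of (x_i, P_i) pairs from the local filters."""
    info = [1.0 / P for _, P in estimates]          # information = 1 / variance
    P_g = 1.0 / sum(info)                           # fused (global) variance
    x_g = P_g * sum(x / P for x, P in estimates)    # information-weighted mean
    # Adaptive sharing coefficients (sum to 1): proportional to each
    # sub-filter's share of the total information.
    betas = [i * P_g for i in info]
    return x_g, P_g, betas

x_g, P_g, betas = federated_fuse([(10.2, 4.0), (9.8, 1.0)])
print(round(x_g, 2), round(P_g, 2))  # the low-variance estimate dominates
```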
The mixed variable fertiliser applicator consists mainly of a mechanical part and an electrical part. The main body comprises the mechanical structure, the mixed variable fertilisation control system, the outer groove wheel fertiliser discharging device, and the mixing system. The mechanical structure supports the applicator, and the nitrogen, phosphate, and potash fertiliser boxes are designed to hold the fertiliser. The function of the mixing mechanism is to mix the three fertilisers evenly. The outer groove wheel fertiliser discharging device is the variable-control actuator of the system. The fertiliser output of the outer groove wheel depends on the working opening of

the outer groove wheel and the speed of the fertiliser shaft. Therefore, the mixed variable fertiliser applicator achieves precise fertilisation mainly by adjusting the opening and the rotating speed of the outer groove wheel. The overall block diagram of the mixed variable fertiliser applicator is shown in Figure 6.
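The control relationship stated above (output determined by working opening and shaft speed) can be sketched with a hedged first-order model. For a fluted-roller metering device, discharge per revolution grows roughly in proportion to the working opening; the calibration constant below is invented for illustration and is not a value from the paper.

```python
# Hedged sketch: approximate per-minute fertiliser output of an outer
# groove-wheel metering device as q_rev(opening) * rpm, with discharge per
# revolution assumed proportional to the working opening.

K_G_PER_MM_REV = 1.2  # assumed calibration constant: grams per mm of opening per revolution

def discharge_g_per_min(opening_mm, rpm, k=K_G_PER_MM_REV):
    """Approximate fertiliser discharge (g/min) from opening and shaft speed."""
    return k * opening_mm * rpm

# Example: 22 mm opening at 120 r/min.
print(discharge_g_per_min(22, 120))  # ≈ 3168 g/min with the assumed k
```

A controller pursuing a prescribed rate would invert this relation, fixing one of the two variables and solving for the other.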

Experimental analysis of mechanical operation of variable fertiliser applicator based on big data analysis
In the simulated fertiliser experiment, different combinations of speed and opening were tested, and the fertiliser discharge per minute was calculated as the simulation test result, recorded in Table 1 and Figure 7. This article conducts a fertiliser discharge test to find the relationship between the rotation speed of the fertiliser discharge shaft and the amount of fertiliser discharged. From the above research, it can be seen that the variable fertilisation machine based on big data analysis proposed in this paper is effective.

Conclusion
In view of the problems in traditional farmland fertilisation, namely the large amounts of chemical fertiliser used, excessive application, low fertiliser utilisation, a low input-output ratio, and the resulting decline in agricultural product quality, groundwater pollution, and water eutrophication, this paper weighs the advantages and disadvantages of the fertiliser discharge mechanisms, power and transmission schemes, and control systems of fertiliser applicators at home and abroad, and, comprehensively considering the characteristics and planting modes of the Xinjiang region, designs a mixed variable fertiliser applicator. This applicator solves the problems of excessive partial application and low fertiliser utilisation in traditional fertilisation, as well as the resulting waste, environmental pollution, and poor quality of agricultural products. Moreover, this paper designs and manufactures a bivariate test device and conducts a fertiliser discharge test on it. In addition, this paper uses big data technology to analyze the experimental data of the variable fertilisation machinery. The experimental research shows that the fertilisation machinery designed in this paper meets actual needs.

Disclosure statement
No potential conflict of interest was reported by the author(s).

Notes on contributors
Fang Gao is a Master and instructor at the College of Vehicle Engineering, Changzhou Vocational Institute of Mechatronic Technology, China. His research direction is modern agricultural machinery design and theoretical research; he has published more than 10 papers and 1 book.
Hongchang Li is a PhD and associate professor at the College of Vehicle Engineering, Changzhou Vocational Institute of Mechatronic Technology, China. His research direction is modern agricultural machinery design and theoretical research; he has published more than 20 papers and 1 book.

Figure 6. Overall block diagram of mixed variable fertiliser applicator.

Figure 8. Statistical diagram of the fertiliser discharge per revolution (opening 12 mm).

Figure. Statistical diagram of the amount of the fertiliser discharge per revolution (opening 32 mm).

Figure 9. Statistical diagram of the amount of the fertiliser discharge per revolution (opening 22 mm).

Figure 11. Statistical diagram of the amount of the fertiliser discharge per revolution (opening 42 mm).
The opening of the fertiliser discharge device is fixed at four values of 12, 22, 32, and 42 mm, respectively. At each opening, a fertiliser discharge test was carried out over a speed range of 40-240 r/min, with the speed increased in steps of 20 r/min. The test data are recorded in Tables 2-5.
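The test grid described above, and the kind of speed-discharge relationship the tests are designed to reveal, can be sketched as follows. The discharge values used here are fabricated placeholders, not the paper's measured data; only the openings and the 40-240 r/min, 20 r/min-step speed grid come from the text.

```python
# Sketch of the test grid from the text and an ordinary least-squares fit of
# discharge against shaft speed (pure Python; placeholder data).

OPENINGS_MM = [12, 22, 32, 42]
SPEEDS_RPM = list(range(40, 241, 20))  # 40, 60, ..., 240 r/min

def fit_line(xs, ys):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Placeholder measurements: pretend discharge grows linearly with speed.
measured = [5.0 * s + 30.0 for s in SPEEDS_RPM]
slope, intercept = fit_line(SPEEDS_RPM, measured)
print(round(slope, 3), round(intercept, 3))  # recovers the generating line
```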
random graph sampling algorithms based on point selection strategies, random graph sampling algorithms based on edge selection strategies, and random graph sampling algorithms based on graph topology. Several sampling methods have been suggested for graph sampling. Two well-known techniques, Breadth-First Sampling (BFS) and Random Walk (RW) sampling, have been utilised in a variety of applications. The three families of algorithms differ in how nodes are selected. Here, the node sampling (NS) algorithm is used, which selects nodes uniformly at random.
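Uniform node sampling (NS), the point-selection strategy named above, can be sketched as selecting k nodes uniformly at random and keeping the subgraph they induce. The toy graph below is illustrative only.

```python
import random

# Sketch of uniform node sampling (NS): pick k nodes uniformly at random and
# keep only the edges whose endpoints were both selected (induced subgraph).

def node_sample(graph, k, rng=random):
    """graph: dict {node: set(neighbours)}; returns an induced subgraph on k nodes."""
    chosen = set(rng.sample(sorted(graph), k))  # uniform choice of k nodes
    # Restrict each selected node's neighbour set to the chosen nodes.
    return {u: graph[u] & chosen for u in chosen}

graph = {
    "a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"},
    "d": {"c", "e"}, "e": {"d"},
}
sub = node_sample(graph, 3, rng=random.Random(0))
print(sorted(sub))  # three uniformly chosen nodes, with their induced edges
```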

Table 1. Statistical table of the average fertiliser discharge test.

Table 2. Statistical table of the amount of the fertiliser discharge per revolution (opening 12 mm).

Table 3. Statistical table of the amount of the fertiliser discharge per revolution (opening 22 mm).

Table 4. Statistical table of the amount of the fertiliser discharge per revolution (opening 32 mm).

Table 5. Statistical table of the amount of the fertiliser discharge per revolution (opening 42 mm).