Artificial intelligence in farming: Challenges and opportunities for building trust

Artificial intelligence (AI) refers to technologies with human-like cognitive abilities to learn, perform, and make decisions. AI in precision agriculture (PA) enables farmers and farm managers to deploy highly targeted and precise farming practices based on site-specific agroclimatic field measurements. The foundational and applied development of AI has matured considerably over the last 30 years. The time is now right to engage seriously with the ethics and responsible practice of AI for the well-being of farmers and farm managers. In this paper, we identify and discuss both challenges and opportunities for improving farmers' trust in those providing AI solutions for PA. We highlight that farmers' trust can be moderated by how the benefits and risks of AI are perceived, shared, and distributed. We propose four recommendations for improving farmers' trust. First, AI developers should improve model transparency and explainability. Second, clear responsibility and accountability should be assigned to AI decisions. Third, concerns about the fairness of AI need to be overcome to improve human-machine partnerships in agriculture. Finally, regulation of, or voluntary compliance with, standards for data ownership, privacy, and security is needed if AI systems are to be trusted and used by farmers.

form of robotic structures (such as autonomous cars, robotic hands, humanoid robots, etc.). In agriculture, the use of AI is envisioned as "delivering real value" to farmers (Smith, 2018) and steering society toward the "fourth industrial revolution" or agriculture 4.0 (Lele & Goswami, 2017) via smart farming methods and decision-making tools. Precision agriculture (PA) is one such "data-driven strategy" to improve soil and resource management plans, and manage crops and livestock (Botta et al., 2022, p. 831). PA integrates information technology into farm machinery and farm management using innovations such as satellites, drones, sensors, and AI systems to help farmers make site-specific and timely decisions (Rossel & Bouma, 2016; Klerkx & Rose, 2020). Broadly, PA performs four major functions for farmers and their crop fields: independent navigation within the field, sensing field-based changes, mapping and reporting of collected data, and making suggestions on management zones within the farm (Botta et al., 2022).
ML applications such as computer vision and time-series analysis are being used to enable farmers to forecast and predict the optimal time to seed, harvest, and market their crops (Ryan, 2022; Tantalaki et al., 2019). AI-based systems can increase farm profitability and reduce negative impacts on the environment (Banerjee et al., 2013; Rossel & Bouma, 2016; Smith, 2018). In contrast to these merits, however, several disruptive effects of AI-enabled PA have been noted (Carolan, 2022; Ogunyiola & Gardezi, 2022). Many farmers are concerned about who will own and be able to share farm data, how information about their farming practices will be stored and secured, and how the benefits of PA will be shared between farmers and agricultural technology (agritech) corporations (Dara et al., 2022; Jakku et al., 2019). Such challenges are inextricably related to the broader ethical challenges of AI, ranging from poor explainability to lack of model transparency (Dara et al., 2022), exacerbating existing ethical concerns about data ownership and generating distrust in future AI solutions. Farmers need to be able to trust AI providers that these technologies will achieve their envisioned goals of environmental and economic sustainability in PA (Gardezi & Stock, 2021; Gardezi & Bronson, 2019). This forum paper identifies and discusses both challenges and opportunities for improving farmers' trust in AI providers toward the aims of PA. The discussion includes recommendations for the responsible innovation of AI in agriculture to harness the potential benefits of AI and minimize the associated social, economic, and ethical risks to farmers and the natural environment.

Core Ideas
• Model transparency and explainability can help foster trust between farmers and those providing artificial intelligence (AI) solutions.
• Assigning clear responsibility and accountability to AI decisions can improve farmers' acceptance and use of these technologies.
• Development of fair and equitable AI can improve human-machine partnerships in agriculture.
• Regulation or voluntary compliance with data ownership, privacy, and security is needed if AI systems are to be used by farmers.

2.1 Artificial intelligence models can be adaptive and flexible, yet also opaque and complex

ML is a subtype of AI, and DL is a subtype of ML. As described in Figure 1, ML algorithms aim to improve accuracy by changing the weights of model variables without following explicit instructions (i.e., they learn from example data) to perform classification, regression, clustering, and ranking tasks; examples include decision trees (DTs), random forests (RFs), and support vector machines (SVMs). In general, machine learning models are tasked with finding a function that is as compact as possible and minimizes the prediction error. The original idea behind deep learning was presented in 1943 (McCulloch & Pitts, 1943) as a model of a single biological neuron, together with the possibility of linking individual neurons to form an artificial neural network (ANN). The neural model described by McCulloch and Pitts (1943) has a set of inputs (analogous to dendrites in a biological neuron) that receive signals from other neurons. These signals travel to the cell body (soma) and are aggregated in some way (e.g., a simple weighted summation). In turn, this sum is passed through some function, usually nonlinear (e.g., binary, sigmoidal), to produce an output signal. Early ANNs (e.g., the perceptron) had only two layers (input and output). This later evolved to three layers (one input layer, one hidden layer, and one output layer) to circumvent the challenges posed by Minsky and Papert (1969). Eventually, the three-layer feedforward backpropagation ANN came into being and dominated the field for nearly two decades.
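The neuron model described above can be sketched in a few lines of Python; the weights, bias, and input values below are invented purely for illustration, not drawn from any real agricultural model.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron in the spirit of McCulloch and Pitts:
    input signals (dendrites) are aggregated as a weighted sum (the soma)
    and passed through a nonlinear activation to produce an output signal."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias  # aggregation
    return 1.0 / (1.0 + math.exp(-z))                       # sigmoidal activation

# Illustrative only: made-up inputs and weights, no training involved.
output = neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.3], bias=0.1)
print(round(output, 3))  # 0.611
```

Stacking such neurons into layers, with each layer's outputs feeding the next layer's inputs, yields the feedforward ANN discussed above.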
The main components of an ANN are the input layer, the output layer, and one or more hidden layers; each neuron/node performs a computational rule, with the output of each node in a layer passing to the nodes in the next layer via weighted sums. In recent years, ANNs have been superseded by deep neural networks (DNNs), comprising a large number of layers, which in conjunction with large data sets may use algorithms such as backpropagation to optimize the parameters of each layer based on the previous layer, separating signal from noise in the data (LeCun et al., 2015). Once the input and output structure of a DNN has been set, the number of hyperparameters (e.g., learning rates, training epochs, image processing parameters, batch sizes, number of layers, convolutional filters and kernels, etc.) that need to be tuned is unwieldy, leading to the invention of optimization methods for navigating these trial-and-error parameter selections. Although deep learning algorithms have made impressive progress in image, video, and speech processing, object recognition, and deep data interpretation, the large number of layers in these models, and the many model parameters that need to be tuned during the training and testing phases, create a hidden and ambiguous "black box".
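The unwieldiness of this tuning problem is easy to see with a quick count: even a small, hypothetical grid of candidate hyperparameter values multiplies into hundreds of configurations, which is why exhaustive trial-and-error gives way to dedicated optimization methods. The grid below is illustrative, not a recommendation.

```python
from itertools import product

# Hypothetical hyperparameter grid for a deep network; every value
# here is invented for illustration.
grid = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [16, 32, 64, 128],
    "num_layers": [3, 5, 10],
    "epochs": [10, 50, 100],
    "kernel_size": [3, 5],
}

# Each configuration is one combination of candidate values.
configs = list(product(*grid.values()))
print(len(configs))  # 3 * 4 * 3 * 3 * 2 = 216 configurations
```

Adding just one more hyperparameter with a handful of candidate values multiplies this count again, so the search space grows combinatorially.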
Within ML, techniques vary in how transparent the models can be. Transparency is an attribute of AI whereby it is "clear to an external observer how the system's outcome was produced, and the decisions/predictions/classifications are traceable to the properties involved" (Varona & Suarez, 2022, p. 10). ANNs have been the most widely used models in remote sensing applications in agriculture. They are more adaptive and flexible than standard linear regression models. Nonetheless, they are opaque and computationally expensive, so much so that the computational cost may increase exponentially with higher levels of complexity. ANNs require time-consuming parameter tuning approaches and are extremely data intensive. The higher complexity and computational cost of certain ANNs have led to the development of alternative solutions, including easier-to-train algorithms such as support vector machines (SVMs), decision trees (DTs), and random forests (RFs). Such models have the potential to improve the efficiency of PA applications (Chlingaryan et al., 2018). Other advanced machine learning approaches, like adaptive neuro-fuzzy inference systems (ANFIS) and extreme learning machines (ELMs), have better generalizability; however, they resemble black boxes that need more validation to ensure credibility (Tantalaki et al., 2019).
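The transparency gap between these model families can be made concrete with a toy decision tree: unlike a tuned ANN, every prediction is traceable to an explicit, human-readable rule. The task, thresholds, and labels below are hypothetical and hand-written for illustration only.

```python
def irrigation_advice(soil_moisture_pct, rain_forecast_mm):
    """A hand-written decision tree with hypothetical thresholds.
    An external observer can trace exactly which rule produced each
    outcome -- the transparency property discussed in the text."""
    if soil_moisture_pct < 20:            # dry soil
        if rain_forecast_mm < 5:          # no meaningful rain expected
            return "irrigate now"
        return "delay: rain expected"
    return "no irrigation needed"         # moist soil

print(irrigation_advice(15, 2))   # "irrigate now"
print(irrigation_advice(35, 0))   # "no irrigation needed"
```

A real DT would be learned from field data rather than written by hand, but the traceability of its decision paths is the same; an ANN offers no comparable rule-by-rule account of its output.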

Model transparency and explainability can foster farmers' trust
According to Doran et al. (2017, p. 1), "To achieve complete trustworthiness and an evaluation of the ethical and moral standards of a machine, detailed 'explanations' of AI decisions seem necessary. Such explanations should provide insights into the rationale the AI uses to draw a conclusion". Explainability or interpretability refers to the capacity of humans to understand the results of AI algorithms (Slack et al., 2019). Explainability is when "the decisions/predictions/classifications produced by the [AI] systems can be justified with an explanation that is easy to be understood by humans while being also meaningful to the end-user" (Varona & Suarez, 2022, p. 11). Explainability can be used to improve the credibility of models and give agency to farmers. For example, if farmers think that an assessment made by an AI tool of carbon dioxide or other greenhouse gas (GHG) emissions on their farm is unfair or inaccurate, then they should be able to contest that assessment with the AI provider, if the models are explainable (Dara et al., 2022).
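One simple, model-agnostic way to produce such explanations is to perturb each input of an opaque model and observe how much the prediction moves, indicating which field variable drove a recommendation. The "model" below is a hypothetical stand-in, and its feature names and weights are invented so the sketch is runnable.

```python
def black_box_yield_model(features):
    """Stand-in for an opaque AI model; this linear form is hypothetical
    and exists only to make the example self-contained."""
    weights = {"nitrogen": 0.8, "soil_moisture": 0.5, "weed_density": -1.2}
    return sum(weights[k] * v for k, v in features.items())

def explain_by_perturbation(model, features, delta=1.0):
    """Nudge each input by `delta` and report how the prediction changes:
    a minimal sensitivity-style explanation that treats the model as a
    black box."""
    base = model(features)
    effects = {}
    for name in features:
        perturbed = dict(features, **{name: features[name] + delta})
        effects[name] = model(perturbed) - base
    return effects

field = {"nitrogen": 40.0, "soil_moisture": 25.0, "weed_density": 3.0}
effects = explain_by_perturbation(black_box_yield_model, field)
print(effects)
```

A farmer (or advisor) reading these per-feature effects can see, for instance, that weed density pushed the predicted yield down, which gives a concrete basis for contesting or accepting the recommendation.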
As a field in computer science, explainable AI (XAI) "aims to make AI systems results more understandable to humans" (Adadi & Berrada, 2018, p. 52139). The Defense Advanced Research Projects Agency (DARPA) defines XAI as explainable models that can maintain "a high level of learning performance (prediction accuracy), and enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners" (Gunning, n.d.). There are at least four levels of explainable AI: opaque systems, which provide no insight into their algorithmic architecture; interpretable systems, which offer opportunities for mathematically analyzing their algorithmic mechanisms; comprehensible systems, which rely on symbols (text and visualizations) to allow users to form explanations of how the algorithm reaches its conclusions; and explainable systems, which require minimal human post-processing and use automated reasoning to craft clear explanations of algorithmic mechanisms and outcomes (Doran et al., 2017). In situations where explainability is very important, explainable AI systems can help farmers understand how the AI arrived at the decision it made (Jobin et al., 2019). For example, AI-powered decision support systems can create opportunities for farmers to make better management decisions (refer to Table 1 for the advantages of AI in agriculture). It is therefore important to create AI solutions that are understandable and interpretable to the farming user (Kök et al., 2022, pp. 10-11). However, even XAI with minimal human post-processing will require some form of training for farmers to help ensure that explanations of algorithmic mechanisms can be understood. Thus, while XAI can open opportunities for improving the explainability of models, it is equally important for AI developers to think about farmers' training needs as they begin to engage with XAI.

TABLE 1 Artificial intelligence (AI)-based opportunities in agriculture (adapted from Ryan et al., 2023).

AI-based opportunities | Description
Agronomic decisions | Implementation of AI in farming decisions such as soil management, pest and weed management, disease management, crop management, and water-use optimization.

Assigning clear responsibility and accountability to artificial intelligence decisions can improve users' trust
In the use of autonomous farm equipment and AI-based decision support systems, there are situations where it can be unclear who is responsible for potential accidents and for unwanted or unreliable AI/ML predictions. For example, Dara et al. (2022) describe an AI system designed to maximize yield and reduce costs for farmers by optimizing the quantity of fungicide applied to fresh produce. The timing of fungicide application is critical to ensure that no residue remains on the fresh produce at the time of harvesting. However, while the AI system can identify the specific time and quantity of fungicide use, whether traces of fungicide remain on crops depends largely on farmers being able to follow the recommendations at the right time. If farmers were to suffer a financial or reputational loss, they may be concerned that there is no single organization or individual from whom to seek help or reparation, despite having followed the AI-based recommendations.
While similar issues may arise from advice given by a crop consultant or a company providing the service, there is a concern that identifying responsibility may become more problematic with AI. Farmers can have direct contact and interaction with a crop consultant or company providing recommendations, but if the recommendations come directly from an AI system (through a smartphone or tablet), farmers may misinterpret the information and have less (or no) opportunity to ask questions or seek advice about the recommendations. There is also a growing concern in the industry that farmers overestimate the scientific validity of AI recommendations, regardless of the advice being given (Ryan, 2020).
There are several challenges associated with assigning responsibility to an AI system. AI systems are designed by humans to serve human ends or make decisions on their behalf, and they are not advanced enough to reason about what is morally right or wrong. As a result, there will likely be a "Moral Proxy Problem" (Thoma, 2022, p. 52). The moral proxy problem arises because an AI system acts as a proxy for human moral judgment without itself being a moral agent, raising the question of whose values it should follow. In such cases, technology developers and regulators are debating whether humans and organizations ought to be held accountable for decisions made by AI (Thoma, 2022).
It is also difficult to assign responsibility to AI systems because the processes of innovation are distributed among numerous individuals or organizations, each of which produces only a small component of the larger functioning technology (Jirotka et al., 2017). This is the "problem of many hands" (Johnson, 2012), which can quickly devolve into a problem of "organized irresponsibility", insofar as there can be uncertainty about who is responsible for addressing the unintended consequences of AI. At present, AI and big data are being implemented within agricultural systems for capturing, storing, transferring, transforming, analyzing, and marketing data (Wolfert et al., 2017, p. 75). In situations where farmers do not know who will be responsible for AI's harmful actions, including issues of "transparency, privacy, fairness, and accountability" (Koliba & Spett, 2023, p. 2), their trust in AI providers, and their use of AI solutions, may weaken. The existing legal agreements between farmers and agricultural technology corporations do not appropriately establish the responsibilities and rights of the people and organizations involved in the development, maintenance, and use of AI. As such, it becomes impossible "to identify who will be accountable for errors, financial or reputational losses" (Dara et al., 2022, p. 4).
One significant aspect that differentiates responsibility allocation when harm is caused by AI, as opposed to consultants or agribusinesses, is the complexity of finding out who is responsible for the error or incorrect recommendation. In traditional business models, there is a specific, identifiable agent or organization that provides a solution or recommendation to the farmer, and it is clear who is responsible when a fault or issue on the farm results from that advice. Recommendations from AI systems, however, often involve many different companies, processes, and technologies that are integrated to provide the recommendation (Ryan et al., 2021). Because of the black-box nature of many AI solutions, it is also very difficult to pinpoint which processes and aspects of the AI caused the harm. For example, was it the natural language processing, computer vision, robotics, or some other aspect that caused the issue? If all of these individual components are outsourced, should liability lie with the companies that provided these services, the company that built the technology containing the AI (e.g., a traditional tractor company with modified AI aspects), or the company that sold or leased the technology to the farmer? While the use of AI does not absolve responsibility and liability allocation, it certainly complicates it.
Several propositions have been made to hold AI developers and AI systems accountable and responsible for their outcomes. The European Union (EU) differentiates between high- and low-risk AI when recommending its regulation. Risk is defined as an unwanted event or experience that may or may not be realized; it is an "unwanted hypothetical future event" (Schmidt & Voeneky, 2022, p. 127). High risks are those events that "have the potential to cause major damages for protected individual values and rights (as life and bodily integrity) or common goods (as the environment or the financial stability of a State)" (Schmidt & Voeneky, 2022, p. 128). The EU Commission's proposal for regulating high-risk AI systems was presented as the "Draft EU Artificial Intelligence Act (AIA)" in 2021. According to this proposal, AI systems are considered high-risk if they implicate human rights, such as AI systems designed for biometric information and identification of people, for evaluating the creditworthiness of people, or for use by the criminal justice system in sentencing and parole. The possibility of AI causing major damage to democratic values is at the center of defining AI as high-risk. The definition therefore includes technologies such as autonomous cars, planes, drones, certain AI-driven medical products (such as brain-computer interfaces), and AI-based stock trading systems (Bostrom, 2012). Unfortunately, there was very little mention of the risks of AI being used in the agricultural sector within the EU AI proposal, as many of the harms associated with these forms of AI involve non-human harm (e.g., to the environment, livestock, or wildlife around the farm) or economic harm (harmful AI actions on the farm will mostly affect crop yield and the farmer's economic benefits), while the physical and rights-based harms to humans are seen as low risk (there is a much lower chance of human harm from autonomous farming vehicles than from self-driving commercial vehicles used on the road, for example).
There have also been proposals to assign greater responsibility to AI developers for mitigating potential future risks. Some scholars have argued that, as with other risks, such as financial risks to people from the banking system, AI risks can be diminished if companies are liable to pay a "proportionate amount of money into a fund as a guarantee after developing the product or service but before market entry" (Schmidt & Voeneky, 2022, p. 124). This would be important to cover potential harms and liabilities to customers that are plausible but not entirely predictable. The payment would be proportional to whether the AI products are classified as high- or low-risk. While this insurance proposal could potentially protect farmers from future risks, its implementation can be challenging from both administrative and strategic perspectives. The question of liability is akin to that of other technologies in agricultural systems (e.g., farmers can purchase insurance for weather events and for planters/sprayers). The difference, however, lies in the absence of a liability regime for AI, which contributes to high uncertainty about its future implications. Such insurance programs may eventually increase the cost of AI systems to users and decrease AI's return on investment (ROI) for farmers.
Beyond the EU, there is no international treaty regulating AI systems and services. Presently, each country has a substantial interest in protecting its own business interests, and AI regulation has not kept up with the speed of AI innovation. This differs from other areas of international regulation, such as biotechnology, which is regulated internationally by the Cartagena Protocol (https://bch.cbd.int/protocol/), ratified by 170 countries. For AI, international treaties have been supplanted by international soft law. For example, there are guidelines for ethical AI, including the "Montréal Declaration for a Responsible Development of Artificial Intelligence" of 2018 (Fjeld et al., 2019; Université de Montréal, 2018), which holds that trustworthy AI can be achieved by (1) encouraging secure, reliable, and robust AI systems in which users have control over their data and (2) enhancing human abilities through responsible workforce augmentation (Floridi, 2019). Other international soft laws, such as the Organization for Economic Co-operation and Development (OECD) AI Recommendations, focus on value-based principles of AI, including inclusive growth, fairness, transparency and explainability, robustness and safety, and accountability. The recommendations are suggestive (e.g., the wording of the soft law is "should respect") and do not impose any legal liability or responsibility on the countries or companies producing AI (OECD AI Recommendations, 2019).
In North America, Canada has taken its first step toward regulating artificial intelligence. In June 2022, the Canadian government tabled Bill C-27, the Digital Charter Implementation Act, which includes the Artificial Intelligence and Data Act (AIDA) (Parliament of Canada, 2022). This is Canada's first attempt to regulate AI systems outside privacy legislation. After years of growing calls to regulate AI, AIDA is an important and encouraging first step, but it requires further development to provide the oversight, accountability, and human-rights protections that would elevate it to an international precedent in this space (Tessono & Solomun, 2022). In the United States, the White House Office of Science and Technology Policy recently published its first "Blueprint for an AI Bill of Rights," which urges companies to design and deploy AI systems that uphold democratic values and protect civil rights (OSTP, 2022). The increasing interest in regulation highlights the importance of not only developing new AI technologies but also building policymakers' abilities to design effective regulatory frameworks that can properly govern the societal implications of big data and AI in agriculture.
One reason for this lack of hard law on AI is the fear that it would cause overregulation, stifling AI development and the opportunities and benefits it may bring. Some have argued that this concern is also very significant in the agricultural industry: "if agricultural AI development only focuses on the ethical issues and challenges involved with the technology, it may overlook the technological developments and economic benefits that it can bring, impeding it from being developed altogether" (Ryan et al., 2023). In response, Ryan et al. (2023) propose taking an interdisciplinary approach to AI development and use in the agricultural sector, bringing together a wide array of disciplines to ensure that appropriate, fair, and sensible policy is implemented. While this is certainly not a silver bullet, it may provide a more balanced approach between under- and over-regulation.

Overcoming harmful bias and pursuing fairness and inclusiveness in AI solutions can improve human-machine partnerships in agriculture
An AI system is only as good as the assumptions made during its algorithmic development. While AI may be expected to correctly predict situations that it has seen or experienced before, it cannot be expected to make accurate predictions for situations that are novel, unexpected, or unprecedented. Bias is the "tendency to learn a preferred pattern of data rather than learn from the actual data distribution when the model is built" (Dara et al., 2022, p. 3). If biased assumptions are made during algorithmic construction, or if the data are biased, then the constructed AI will be biased too. Bias is a necessary component of AI; the issue AI developers need to focus on is harmful, or unintended, bias.
At a basic level, bias is inherently present in the world around us and encoded into our society, and it can be introduced at any stage of the model development process (Reagan, 2023). Many applications that use AI suffer from harmful bias (Brun & Meliou, 2018; O'Neil, 2016; Qian et al., 2021). Problems include biased or unrepresentative datasets used to train models, and the use of biased models that adversely harm stakeholders, especially those from vulnerable or underrepresented social groups, for example in crime prediction, policing, transport, and insurance pricing models. In a similar vein, for agriculture, Dara et al. (2022) demonstrate bias through the example of an autonomous apple-picking application that is built to collect ripe apples but is trained on data from red-type apples only. Such a model will classify ripe green apples as unripe. Potential examples of AI-related bias in agriculture include results biased by retraining with non-representative data; models of narrow applicability that assume non-inclusive users, plants, and usage contexts; and models that fail to recognize crops with different characteristics (e.g., red vs. green apples). Bias and fairness in AI are two sides of the same coin. Fairness includes the principle of ensuring that the decisions of AI do not discriminate against groups or individuals on the basis of gender, race, sexual orientation, or religion. Previous research has shown that inclusiveness in AI is becoming increasingly critical to narrow the digital divide in agriculture, so that fewer barriers unintentionally exclude farmers from accessing AI solutions. For example, recent developments in PA are very much centered on producing farming recommendations (regarding irrigation, seeds, nutrients, and harvesting schedules) for a few commodity crops grown on medium- to large-sized farms (Stock & Gardezi, 2021). Agritech corporations are often training algorithms on available genotype information that is skewed toward
fewer commodity crops and developing models that recommend nutrient applications requiring farmers to purchase expensive equipment such as planters, sprayers, harvesters, yield monitors, and soil and moisture sensors (Bronson et al., 2021). This is somewhat similar to previous technological revolutions in agriculture. Comparable to the focus on genetically engineered seed systems, currently available PA tools are predominantly designed for commodity crops and conventional agriculture systems (Stock & Gardezi, 2021). Because AI is being trained on data derived from conventional and larger-scale farming systems, the potential bias in data collection could make PA tools workable only in contexts that suit large-scale conventional agriculture operations. Small and ecologically diverse farms remain underserved by research on PA (Bronson, 2019). There are concerns about whether AI tools in agriculture can be developed to be effective for small-scale agriculture, and whether AI-based recommendations will be effective for farmers who are not growing commodity crops and who practice regenerative and agroecological farming techniques (Ditzler & Driessen, 2022).
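The apple-picking example from Dara et al. (2022) can be sketched as a toy classifier. The data and the majority-vote "model" below are invented to make the failure mode concrete: trained only on a red variety (where unripe fruit happens to be green), the model learns the shortcut "red means ripe" and mislabels ripe green-variety apples.

```python
# Hypothetical training data from a red-type apple orchard only:
# (hue, ripeness label) pairs. In this variety, unripe fruit is green.
train = [("red", "ripe"), ("red", "ripe"), ("green", "unripe"), ("green", "unripe")]

def naive_classifier(hue):
    """Predict the majority label seen for this hue during training.
    Hues never seen in training default to 'unripe'. This encodes the
    learned shortcut 'red means ripe'."""
    labels = [label for h, label in train if h == hue]
    if labels:
        return max(set(labels), key=labels.count)
    return "unripe"

# A ripe apple of a green-skinned variety is misclassified, because
# the training data never contained one:
print(naive_classifier("green"))  # "unripe", even if the apple is ripe
```

The fix is not a cleverer model but representative data: adding labeled examples of ripe green-variety apples would break the spurious hue-ripeness correlation.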
Inclusiveness in AI is necessary if PA tools are to be relevant and affordable for small and ecologically diverse farms. Farmers would be more likely to use AI if it simultaneously promotes values and morals that are synchronized with the demands of diverse rural agrarian populations. Tantalaki et al. (2019, p. 25) stated: "Several big data applications seem to be suited to large farms and industries (Climate Corp and Monsanto) that already use data in their decision-making and have access to data captured from machinery; greater access to capital, and resources. Smaller intercropped fields may require more manual labor and less mechanized processes. Big Data could potentially be very useful for non-industrial farming practices, but emerging moral and ethical questions about access, cost, and support should be addressed to realize this benefit". However, private sector research and development is often focused on the greatest market return; that is, AI and new equipment are predominantly designed and developed for commodity crops grown on large-scale conventional agriculture systems (Bronson & Sengers, 2022; Bronson & Knezevic, 2016; Stock & Gardezi, 2021). AI and new PA equipment are often not appropriate for small and mid-sized farms attempting to mimic natural systems and grow food for local markets. As a consequence of these private sector investments, the benefits of AI are currently available only to a handful of farmers, namely those who produce a few commodity crops on large acreage. According to White et al. (2021, p. 312): "special attention should be paid to the Big Data needs of regional and small crop farmers who may not receive the same level of access to Big Data (e.g., remote sensing data) or data processing that large farmers of major crops receive." The benefits of AI in PA for a broad socioeconomic spectrum of farmers can be better actualized by addressing concerns about technological equity, digital literacy, and ethics in research and development. Land grant institutions, universities, government agencies, and non-governmental organizations can play a vital role in developing AI for small farmers and non-row-crop producers, such as growers of perennial crops (e.g., almonds and oranges), to reduce existing operational barriers to accessing and owning AI-enabled PA (UNDP, 2021).

Rethinking how agricultural data is owned, controlled, and managed is needed to build farmers' trust
There is increasing tension in food and agricultural systems around farm data ownership, privacy, and security (Raturi et al., 2022; Rotz et al., 2019). Agricultural big data is diverse. Data are collected to measure crop growth, soil, climate and weather characteristics, and farm profitability metrics. Most new proprietary PA farm equipment (e.g., combine harvesters) comes pre-installed with sensors that passively collect on-farm agroclimatic data about crop yield, soil, and weather conditions and transmit that information to private and public sector researchers, who then use AI to derive recommendations and predictions for farmers about practices such as irrigation schedules, nutrient management, and seeding plans (Fielke et al., 2020). From the farmers' perspective, there is a paradox in their relationship with farm data: farmers want to preserve their sensitive information and own the farm data, yet at the same time they would like to benefit from sharing information with agritech corporations and university researchers who can then guide their decision-making (Sykuta, 2016). Farmers are also concerned about their data being used against them in insurance claims and regulatory enforcement (White et al., 2021).
Recent social science scholarship has highlighted the need to reflect on the role of agritech corporations in increasing their market power to the detriment of farmers' well-being. The accumulation of large datasets by agritech corporations has been identified as problematic for farmers. Agritech corporations such as Deere (https://www.deere.com/en/) and Bayer (https://www.bayer.com/en/) are not only selling digital advice on AI-based platforms such as FieldView, but also selling inputs to farmers in the form of seeds and equipment (Stock & Gardezi, 2021). Future research needs to examine how current arrangements (e.g., concerns about liability, explainability, and accountability) between farmers and agritech corporations will change. There is a risk that the corporations could use big data generated from farms to recommend new products to sell, profiting from this asymmetric relationship. The data and regulations that protect the corporations' intellectual property rights also tie farmers to specific agritech corporations. If a farmer chooses to change to a different technology provider, they risk breaching their contract or may have to forgo the data they have collected on their farm using proprietary machines (Sykuta, 2016). Some agritech corporations have "tight legislative control over their intellectual property and data analytics and if a farmer breaches their contract, this may lead to penalties and/or court-cases against them" (Ryan, 2019, p. 9). Agritech corporations depend on complex legal contracts, such as end-user license agreements (EULAs), to retain rights to the farm data produced by proprietary farm equipment (Kamilaris et al., 2017; Stock & Gardezi, 2022). These EULAs enforce the terms of engagement between farmers and agritech providers (Wiseman et al., 2019). However, existing EULAs do not explicitly reveal the scope of data collection, storage, and the processes involved in transforming farm data at the point when farmers acquire these emerging PA technologies. Similarly, agritech EULAs are presently perceived as unfair because they leave no room for farmers to negotiate their rights over how their farm data should be used by the agritech corporations (Carbonell, 2016). These contractual obligations create a risk of power imbalance and manipulation of farm data, as ownership and control are primarily in the hands of the agritech corporations (Carbonell, 2016; Fraser, 2019; Stock & Gardezi, 2022; Wolfert et al., 2017). Such situations can make technology and service providers disproportionately more powerful than farmers, as they can control the data and the models (Ryan, 2020). There is a risk that this power could be used to control the market, to sell farmers' data to third parties such as advertisers, or to sell farmers more farm input products (Fraser, 2019; Ryan, 2020).
One way to level the playing field for farmers is through regulation and/or voluntary compliance for big agricultural data ownership, privacy, and security. While data collected by third parties to build their own algorithms are not themselves a product of AI, more transparent and farmer-centered data ownership practices can improve farmers' trust in AI solutions. In the US, most current regulatory options for user data privacy fall under the purview of the Federal Trade Commission (FTC) framework. The FTC framework does not regulate or protect against privacy breaches associated with farm data (e.g., weather, soil, nutrients) because such data are categorized as non-personalized agricultural data (Atik & Martens, 2021). To fill this institutional vacuum, private sector corporations, government entities, and universities are developing (or have developed) their own regulatory protocols for improving users' data privacy based on their interpretation of transparency and privacy. Three examples are noteworthy. First, in 2014, the American Farm Bureau Federation (AFBF), working with commodity groups, farmers, and agritech corporations, helped to establish the Privacy and Security Principles for Farm Data. The main purpose of these data-sharing principles was to create a code of practices that could establish some form of trust between farmers and agritech corporations through voluntary principles and codes (van der Burg et al., 2021). Since then, numerous agricultural organizations in the US have agreed to follow the unenforceable and non-binding "Core Principles" from the AFBF. Second, several public-private partnerships in North America and Western Europe are advocating for open data platforms for agriculture (European Commission, 2019). An initiative named Global Open Data for Agriculture and Nutrition (GODAN) supports open data platforms for farmers and other agricultural stakeholders. Finally, the EU Code of Conduct for agricultural data sharing provides opportunities for protecting farmers from
complicated and non-sovereign EULAs. Building on the EU Code of Conduct, van der Burg et al. (2021, p. 185) suggest that contracts between farmers and agritech corporations need to perform three functions if they are to foster trust among users: "(a) information is comprehended by the more vulnerable party in this relationship who has to sign the contract, (b) the more powerful partner takes responsibility to provide that information, and (c) information is tailored to the information needs of the party signing the contract, even when data are re-used over a longer period". Such initiatives and codes of practice are important steps toward addressing issues of asymmetry in market control by large agritech corporations. Given their ever-increasing impact on agriculture and society, AI and ML tools raise grand challenges, including problems with assigning responsibility and accountability, lack of transparency and explainability, issues with fairness, and concerns regarding data ownership, privacy, and security (Gardezi et al., 2022).

CONCLUSION
There are several takeaways from this forum paper that can guide the responsible development of AI in PA. First, we see farmer involvement as critical to AI development. This ensures that farmers are central to design, reflecting a growing call in the literature for farmer-centered design. Farmer involvement enables farmers to contribute actively to the design process instead of simply serving as research subjects. Farmers are more likely to adopt PA if they consider AI to be usable, useful, and reliable. Usefulness depends on several factors, including whether the recommendations provided by AI systems are accurate and reliable. But usefulness is also related to the design and usability of the AI system. Better transparency and explainability of AI-based models can help enhance farmers' trust in these new technologies.
Second, a focus on governance and innovation outcomes can not only foster much-needed long-term thinking but also foster inclusion and trust in the development process. Simply put, governance comes down to how we make decisions and organize a social body to do so effectively. This can range from formal (i.e., government or corporate oversight) to informal (i.e., farming collectives). The question of who decides (say, all members of a group or a governing board) is also pressing. Centering farmers in technology design and assigning clear responsibility and accountability for AI decisions strives to reconnect values, natural environments, and social contexts as starting points of dialogue, with the intended outcome of leveling the playing field between AI developers and farmers on how knowledge is produced and put into action.
Third, the digitalization of farming increases the risk of algorithmic bias, which depends on the patterns of inclusion in the data on which models are trained. Predictions made by AI platforms thrive on the promise of algorithmic objectivity, with the potential to become an institutionalized and legitimized social practice in the future (Gillespie, 2014). In this situation, overcoming concerns about the fairness of AI can pave the way for enhancing farmers' trust in the technology.
Fourth, there is a need for both soft compliance measures and hard laws that can prevent abusive behavior and encourage appropriate use of data and related decision-making systems. Shepherd et al. (2020) wrote that "Exacerbating the lack of trust is a sense that political and legal control of big data is lagging behind technological development, with the perceived risk that control of data will reside with technology providers, rather than farmers as technology users" (p. 5087). We agree that the pace of innovation in AI is outpacing the policies and regulations needed to protect human and environmental interests. However, creative and critical thinking can allow us to create incentives and institutions for farmers to engage and participate meaningfully in decisions about data ownership, privacy, and security. New models of social innovation, imbued with greater interdisciplinarity and space for dialogue between experts from the social sciences and humanities, computer science, plant and animal sciences, and engineering, must be experimented with if AI is to become trustworthy (Ryan et al., 2023, p. 24).

CONFLICT OF INTEREST STATEMENT
The authors declare no conflicts of interest.

REFERENCES
These individual artificial neurons can then be chained together in layers to produce what is known as an ANN.

14350645, 0, Downloaded from https://acsess.onlinelibrary.wiley.com/doi/10.1002/agj2.21353 by Wageningen University and Research Bibliotheek, Wiley Online Library on [29/08/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License.

FIGURE 1 Machine learning techniques used in agriculture (adapted from Koliba & Spett [2023, p. 4], figure attribution: author's own).

AUTHOR CONTRIBUTIONS
Conceptualization; funding acquisition; investigation; project administration; supervision; writing - original draft. Bhavna Joshi: Conceptualization; writing - review and editing. Donna M. Rizzo: Funding acquisition; writing - review and editing. Mark Ryan: Writing - review and editing. Edward Prutzer: Writing - review and editing. Skye Brugler: Writing - review and editing. Ali Dadkhah: Writing - review and editing.

ACKNOWLEDGMENTS
This research is based upon work supported by the National Science Foundation under Grant Numbers 2202706 and 2026431. The authors thank the anonymous reviewers and the journal editor for their valuable suggestions to improve the manuscript. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation.
Economic performance: Cost-benefit assessment to improve profits based on local/tacit farming knowledge and recommendations actualized through digital platforms. Predictions and recommendations driven by AI models can help farmers reduce fertilizer overuse, forecast uncertainties such as plant and livestock diseases, and monitor soil conditions to prevent yield loss.