Continuous Use Behavior of Knowledge Payment Platform Based on Edge Computing under Mobile Information System

From 2015 to the end of 2016, the Internet set off a wave of paying for knowledge. Pay-for-knowledge platforms such as Fenda, iGet, and Qianliao went online quickly, and platforms such as Himalaya FM, Zhihu, and Qingting FM rushed to launch paid columns. The number of users increased rapidly, and paying for knowledge was considered a development trend. This article aims to study the introduction of edge computing in the mobile information system into the knowledge payment platform, including its rationale and inevitability; to analyze the advantages, dilemmas, and optimization paths of the knowledge payment platform; and to provide a theoretical reference for promoting its development. This article explains the related content of mobile edge computing and gives an overview of RFID technology, using comparative experiments and behavior analysis methods. The experimental results show that, among the questionnaire respondents, 98 people are under the age of 18, accounting for 19.1% of the total; 201 people are aged 18–29, accounting for 39.1%; 142 people are aged 30–39, accounting for 27.6%; and 73 people are over 40 years old, accounting for 14.2%. The data show that the sample is mainly concentrated in the 18–29 age group, followed by the 30–39 age group; the sample skews young. The study thus completes a continuous use behavior analysis of the knowledge payment platform based on edge computing under the mobile information system.


Introduction
Cloud computing has been promoted vigorously owing to its low operating cost, strong dynamic scalability, and simple operation and maintenance, and China's cloud computing-related industries are developing rapidly. In the cloud computing architecture, all data are transmitted over the network and stored in the cloud. The main characteristics of cloud computing are that it is dynamic, flexible, and on demand. The data required for cloud computing are collected by related equipment: different types of terminals transmit a large amount of collected data to their respective cloud platforms and receive analysis results through a central function. At present, a large number of mature intelligent management cloud platforms are in use, but with the widespread deployment of distributed intelligent devices, the large volume of data transmission has caused the transmission channels to become congested.
There are many difficulties in using cloud computing technology to solve this problem, and edge computing technology provides an effective way to address it.
With the popularity of the Internet, the explosive growth of content has led to a scarcity of public attention. Content producers could only attract attention by distributing content for free or at low prices and then indirectly selling that attention to third parties for profit. As content stratification gradually deepened, low-quality content became abundant while high-quality content remained scarce; the value of high-quality content began to emerge, which eventually formed a basis for charging. Since the end of 2015, knowledge payment platforms have increased significantly. By the end of 2016, the core paid offerings of more than 10 knowledge platforms or subprojects had been launched. However, in less than two years, knowledge payment platforms and their subcolumns proliferated side by side, and industry competition became fierce. The platforms fell into homogeneous or imitative competition, with hidden dangers in platform quality. After 2017, the emergence of popular knowledge payment platforms gradually slowed. With the weakening of the demographic dividend, user growth on knowledge payment platforms slowed, and problems such as the outsized influence of Internet celebrities and the entertainment-ization of content became prominent. As a result, the rationality of paying for knowledge has become controversial. How far paying for knowledge can go and whether knowledge payment platforms can continue to grow have become questions of self-reflection for these platforms. Further improving public acceptance and usage has become a real dilemma that urgently needs to be solved.
Kazimierski proposed fog computing in 2011 and then gave a related definition, attracting more and more scholars to pay attention to and study fog computing. However, his research did not cover the principles, systems, and applications of fog computing [1]. Mata and others studied the use of fog computing to assist mobile applications.
Through an experimental comparison of cloud computing and fog computing, they analyzed and compared the performance of the two in mobile application services with strict latency requirements.
The research results proved that fog computing can provide better service, but the study did not present specific algorithms, which is a limitation [2]. Ismail et al. proposed a PCICC cache strategy based on the characteristics of cache nodes, but their research did not address the sensor network technology and network virtualization technology of cloud and fog computing [3]. The innovations of this paper are as follows: (1) a design of the system hardware architecture is proposed, and the types of edge computing equipment to use are selected; (2) the time delay theoretical model in the SD-CEN architecture is introduced, in which subtasks are offloaded to each MEC device for distributed parallel computing; (3) the presenter qualifications and user types of several popular paid knowledge apps currently on the market are classified and analyzed, demonstrating the practical significance of the article. At the method level, this article uses edge computing at the data collection nodes to ensure the continuous use of the knowledge payment platform under the mobile information system. Compared with previous approaches, security is greatly improved.

The Method of Continuous Use Behavior of Knowledge Payment Platform Based on Edge Computing under Mobile Information System
2.1. System Hardware Architecture Design. In the edge computing system, the data collection of edge devices is completed by each data collection node [4, 5]. Each node basically consists of a terminal device and many external devices [6]. Physical data are collected through the external equipment [7]. During data collection, all types of equipment transmit the collected data to the terminal equipment in real time in a predefined format [8].
In order to be compatible with most external devices, the edge devices provide various hardware interfaces, such as UART, USB, and wireless transmission interfaces [9]. When many sensors send data to the edge equipment [10], the data, whether physical or network data, must first be cleaned by the data processing unit inside the device [11, 12]. After that, the data are uploaded to the cloud in a predefined format [13, 14]. According to the system operation target and application scenario, the design first targets the edge device [15], since network congestion causes a series of problems such as packet loss and delay [16, 17]. Secondly, to meet the transmission requirements of the various sensors on the market, the device must support multiple hardware communication interfaces, and finally the design is aimed at the specific scenarios in which it will be used [18, 19]. The system frame diagram is shown in Figure 1.
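As a hedged illustration of the predefined-format upload described above, the sketch below shows an edge node cleaning raw sensor readings and wrapping them in a fixed JSON envelope before upload. The field names, the valid range, and the out-of-range cleaning rule are illustrative assumptions, not details from the paper.

```python
import json
import time

VALID_RANGE = (-40.0, 125.0)    # assumed sensor range, e.g. degrees Celsius

def clean(readings):
    """Data-processing unit: discard values outside the sensor's range."""
    lo, hi = VALID_RANGE
    return [r for r in readings if lo <= r <= hi]

def pack(node_id, readings):
    """Wrap cleaned readings in the node's predefined upload format."""
    return json.dumps({
        "node": node_id,
        "ts": int(time.time()),
        "values": clean(readings),
    })

msg = pack("node-07", [21.5, 999.0, 22.1])   # 999.0 is dropped as invalid
payload = json.loads(msg)
```

In a real deployment the terminal device would transmit `msg` upstream over whichever interface (UART, USB, or wireless) connects it to the cloud gateway.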

Knowledge Payment Platform.
The traditional behavior of paying for knowledge has existed since the rise of education [20]. The knowledge payment referred to in this article is a more direct and spontaneous knowledge transaction relationship between the disseminator and the audience [21, 22]. It refers to users who share personal knowledge with others through a knowledge payment platform to obtain economic benefits, while other users pay for that knowledge online through the platform [23]. The knowledge payment platform is an Internet media system that provides a venue for the exchange of knowledge and money. It uses computer technology to support a new relationship of knowledge dissemination, and at the same time it creates two kinds of realistic value, economic benefit and knowledge benefit, for both parties [24, 25].
Knowledge is divided into subjective knowledge and objective knowledge [26]. Knowledge or thinking in the subjective sense includes mental states, states of consciousness, or the intention behind behavior or reaction; knowledge or thinking in the objective sense includes problems, theories, and arguments [27]. Simply put, objective knowledge is knowledge without a cognitive subject and is independent of anyone's claim to know it or opposition to it. There are currently two types of empirical scientific knowledge: one describes, explains, and tests conclusions and inductive knowledge in the sense of "natural events", and the other conducts a systematic and critical investigation or interpretation of phenomenological experience in the sense of phenomenal content [28]. A further theoretical view is that knowledge can be divided into factual knowledge and value knowledge. Factual knowledge consists of universal laws that do not change with discipline, especially natural science knowledge. Value knowledge bears the personal trace of the cognitive subject; that is, it is knowledge processed from facts on the basis of experience, ideas, and inspiration. Both factual knowledge and value knowledge are scientific and correct at the current level of social knowledge. The former helps the public understand the natural world more accurately. Compared with traditional K12 knowledge, the knowledge on payment platforms is mostly value knowledge, so its role in improving the social adaptability of the public is more obvious. The model diagram of the knowledge payment platform is shown in Figure 2.

Delay Theory Model in SD-CEN Architecture.
Dividing tasks reasonably according to a ratio and offloading the subtasks to each MEC device for distributed parallel computing has become an urgent problem in the SD-CEN architecture. The modeling is as follows. To minimize the service response delay, the service response delay under the SD-CFN network architecture is constructed as optimization objective (3). The solution of the task allocation coefficient can be transformed into the solution of the vector TA, so (3) is modeled as optimization problem (4), in which I represents the search space of the feasible solution TA. The firework algorithm is used to solve (4). The explosion radius Ai and the number of explosion sparks Si of each firework TAi are calculated from its fitness. To prevent fireworks with high fitness values from generating too many explosion sparks, and fireworks with low fitness values from producing too few, the number of explosion sparks produced by each firework is bounded using two constants a and b. For each explosion, z dimensions are selected to form the set zs, and each selected dimension is updated to generate an explosion spark; the k-th dimension of a Gaussian mutation spark is generated analogously. From the current generation of fireworks, explosion sparks, and Gaussian mutation sparks, N fireworks are selected as the new population for the next generation: the individual with the smallest fitness value is always selected, and, to maintain population diversity, the remaining N−1 individuals are selected by the roulette selection algorithm.
The selection probability is calculated from each individual's fitness. For the above computation task offloading strategy based on the firework algorithm, the total computation tasks and the computing power and communication resources of the cloud server and the MEC devices must be obtained in advance.
Therefore, it is necessary to introduce a central control node into the network to obtain information about the entire network. The algorithm model introduces an SDN controller as the central node in the cloud-edge computing network architecture. The SDN controller collects the information of all devices, runs the firework algorithm to formulate the optimal computing task offloading strategy, and delivers the strategy to each MEC device through the flow table. The experiment is then started.
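Because the paper's equations are not reproduced here, the following is a minimal, self-contained sketch of a fireworks-algorithm offloading search in the spirit described above: smaller explosion radius and more sparks for fitter individuals, a bounded spark count, Gaussian mutation sparks, and elitist selection. The delay model, device capacities, and all constants are illustrative assumptions, not the paper's actual parameters.

```python
import random

CPU = [2.0, 3.0, 1.5, 2.5]   # assumed MEC computing capacities
BW = [1.0, 0.8, 1.2, 0.9]    # assumed uplink bandwidths
TASK = 10.0                  # total computation task size (assumed units)

def delay(ta):
    """Service response delay: slowest device's transfer + compute time."""
    return max(TASK * a / BW[i] + TASK * a / CPU[i] for i, a in enumerate(ta))

def normalize(ta):
    """Keep the allocation vector TA positive and summing to 1."""
    ta = [max(x, 1e-6) for x in ta]
    s = sum(ta)
    return [x / s for x in ta]

def fireworks(n=5, m=20, a_hat=0.5, iters=50, seed=1):
    rng = random.Random(seed)
    dim = len(CPU)
    pop = [normalize([rng.random() for _ in range(dim)]) for _ in range(n)]
    for _ in range(iters):
        fits = [delay(p) for p in pop]
        ymin, ymax = min(fits), max(fits)
        eps = 1e-12
        sparks = []
        for p, f in zip(pop, fits):
            # Explosion radius Ai: fitter fireworks search a smaller radius.
            Ai = a_hat * (f - ymin + eps) / (sum(fi - ymin for fi in fits) + eps)
            # Spark count Si: fitter fireworks emit more sparks, bounded.
            Si = m * (ymax - f + eps) / (sum(ymax - fi for fi in fits) + eps)
            Si = max(1, min(int(round(Si)), m // 2))
            for _ in range(Si):
                sparks.append(normalize([x + rng.uniform(-Ai, Ai) for x in p]))
        # A few Gaussian mutation sparks keep the population diverse.
        for _ in range(2):
            p = list(rng.choice(pop))
            k = rng.randrange(dim)
            p[k] *= abs(rng.gauss(1.0, 1.0)) + 1e-6
            sparks.append(normalize(p))
        cand = sorted(pop + sparks, key=delay)
        # Elitist selection: always keep the best, sample the rest.
        pop = [cand[0]] + rng.sample(cand[1:], n - 1)
    best = min(pop, key=delay)
    return best, delay(best)

ta, d = fireworks()
```

In the architecture described above, this search would run on the SDN controller, and the resulting vector `ta` would be pushed to the MEC devices via flow tables.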

Edge Computing.
Edge computing refers to the use of an open platform that integrates network, computing, storage, and application core capabilities on the side close to the source of things or data, providing services at the nearest end. Its applications are initiated on the edge side to produce faster network service responses and meet the industry's basic needs for real-time business, application intelligence, security, and privacy protection. Edge computing sits between physical entities and industrial connections, or at the top of physical entities, while cloud computing can still access the historical data held by edge computing.
Automation actually takes "control" as its core. Control is based on "signals", while "computation" is based on data; the emphasis is more on "strategy" and "planning", and therefore on "scheduling, optimization, and routing". Like the dispatching system of the national high-speed rail, every added train triggers an adjustment of the dispatching system, which is a planning problem over time and node operations. The application of edge computing in the industrial field is mostly this type of "computation". Simply put, traditional automatic control is signal-based control, while edge computing can be understood as "information-based control".

Experiment and Analysis of Continuous Use Behavior of Knowledge Payment Platform Based on Edge Computing under Mobile Information System
3.1. Short Length to Adapt to Fragmented Time. The fragmentation of Internet information leads to fragmentation of audience time and attention, and fragmented learning has become mainstream. Therefore, fragmenting and disseminating systematic knowledge better matches the audience's learning habits and reduces the effort of learning. The audiences of knowledge payment platforms are generally busy, so the characteristics of fragmentation are more prominent. Figure 3 shows the duration of a unit paid course.
Through analysis of the duration of paid courses on different apps, it is easy to see that APP1's course time is concentrated mainly in the daytime with less time on weekends, while APP2's course time is concentrated mainly in the evening and on weekends. This is more convenient for learners.
The sample data of the system are shown in Figure 4. The annual distribution of the number of articles published in domestic and foreign research on knowledge payment platforms is shown in Figure 5.
The scatter diagram of the rotated component matrix is shown in Figure 6. The descriptive statistics of each variable are shown in Figure 7.

Sample Description.
Cultural capital theory holds that cultural consumption behavior is not only a way to embody self-identity but also a social class reproduction strategy adopted to express one's social identity and occupy a more favorable social position; it is a channel for self-improvement. Therefore, the acquisition of cultural capital is highly utilitarian, an effort to obtain more material capital, although the "concealment and secrecy" of cultural capital investment may mask the utilitarian character of the behavior. As for the tools used in the statistical process, Excel is generally used to compile and analyze the relevant data. Table 1 summarizes several categories of paid knowledge.
From the above analysis of the content of Himalaya FM, iGet, and Zhihu Live, it can be seen that the three platforms use paid subscription, online Q&A, and online lectures as their core knowledge sharing models. In terms of the breadth and depth of content, the three platforms cover all aspects of life and work with different content biases: the knowledge on these platforms is both popular and professional, multilevel, and diversified. The content meets the different needs of users, and the trend toward verticalization and IP-based content is obvious. Table 2 is a comparative analysis table of dissemination content. The comparative analysis of the communication effects of the three platforms is shown in Table 3. The qualifications of Zhihu Live speakers are shown in Table 4.

Node Address.
Compared with wired networks, wireless networks have disadvantages such as weak anti-interference ability. The ZigBee network in this system uses a tree network topology, which is more convenient to manage. The address of a sensor node in the ZigBee network can be determined by a distributed addressing algorithm, and with the continuous addition of nodes, a stable tree network is formed.
The following briefly introduces the formation process of the ZigBee network and how the network topology is reconstructed on the mobile terminal. The correspondence list of each node is shown in Table 5.
By looking up the node names in the above table, we can easily obtain the corresponding node address and, using the ZigBee tree network addressing scheme, calculate the corresponding parent node address from the node address.
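The parent-address calculation mentioned above can be illustrated with the standard ZigBee distributed (Cskip) tree addressing scheme. The sketch below is a hedged example rather than the system's actual code, and the network parameters Cm (max children), Rm (max router children), and Lm (max depth) are assumed values.

```python
Cm, Rm, Lm = 6, 4, 3   # assumed: max children, max routers, max depth

def cskip(depth):
    """Size of the address block reserved for each router child at `depth`."""
    if depth >= Lm:
        return 0
    if Rm == 1:
        return 1 + Cm * (Lm - depth - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - depth - 1)) // (1 - Rm)

def find_parent(addr):
    """Walk down from the coordinator (address 0) to find addr's parent."""
    parent, depth = 0, 0
    while True:
        skip = cskip(depth)
        for i in range(1, Rm + 1):
            lo = parent + 1 + (i - 1) * skip   # i-th router child's address
            hi = parent + i * skip             # end of that child's block
            if addr == lo:
                return parent                  # addr is this router child
            if lo < addr <= hi:
                parent, depth = lo, depth + 1  # descend into that block
                break
        else:
            return parent                      # addr is an end-device child
```

With these parameters the coordinator reserves Cskip(0) = 31 addresses per router child, so its router children sit at addresses 1, 32, 63, and 94, and a node at address 9 resolves to the router at address 1 as its parent.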
Several common network protocols are as follows. TCP/IP: in order to realize interconnection between different networks, the US Department of Defense developed the TCP/IP architecture and protocols from 1977 to 1979. NetBEUI: an enhanced version of the NetBIOS protocol, adopted by many operating systems such as Windows for Workgroups, the Win 9x series, and Windows NT. IPX/SPX: originally a protocol developed by Novell for NetWare networks, but also very common elsewhere; most games supporting networked play, such as StarCraft and Counter-Strike, support IPX/SPX. The last is the SLIP protocol. The comparison of the computational consumption of the four protocols is shown in Table 6. Compared with protocols A and B, the improved protocol increases the computational consumption required by the user (mobile terminal) in the registration and login phases, but it strengthens the protection of the user's identity ID and avoids storing confidential parameters in plain text.
The samples are classified by gender, age, and education level: gender is classified as male and female, age is divided at 18, 30, and 40 years, and education level is divided between undergraduate and master's degrees. The basic characteristics of the sample are shown in Table 7.
In terms of gender distribution, 254 respondents are male, accounting for 49.4% of the total, and 260 are female, accounting for 50.6%. The ratio of men to women is nearly even, with the number of women slightly higher than that of men.
In terms of user experience, the three platforms differ: one has a boutique area with high audio quality, many usage scenarios, and a good community atmosphere; one was too entertainment-oriented in its early stage, producing a sense of boredom, although after improvement its knowledge content was strengthened while its community atmosphere remained average; and one offers high live quality and high interaction efficiency with a good community atmosphere.

Validity Analysis.
Validity testing is mainly used to examine whether the sample can accurately and effectively reflect the relationships between different variables. Generally speaking, the higher the validity of a measurement, the more accurately the measurement result reflects the intended measurement elements. This article examines validity from two aspects: content validity and construct validity. Content validity is also called usage validity, empirical validity, surface validity, or logical validity. It mainly examines whether the questionnaire data cover all the content the researcher intends to study. Usually the correlation coefficient between each item's score and the total score is examined; if the correlation coefficient is not significant, the explanatory power of the item is low and the item must be eliminated. Since most of the latent variables in this study are drawn from existing foreign literature, citing relatively mature scales, and the final scale was obtained after expert evaluation and in-depth user interviews, its content validity is considered adequate.
Construct validity is an important indicator of whether a scale can measure its intended variables. This article mainly uses convergent validity and discriminant validity for testing. Convergent validity measures the degree of correlation among measurement items of the same variable; if the correlation is high, convergent validity is good, and otherwise the measurement items must be revised. It is usually tested using factor loadings. Discriminant validity, also known as divergent validity, measures the degree of correlation of measured data between different variables; the correlation between their measurement items must be very low. Generally speaking, if the average variance extracted of each latent variable on the measurement scale is greater than the shared variance between that latent variable and the other latent variables (or the average of their correlation coefficients), the measurement scale has good discrimination. Before analyzing the sample data, the KMO test and Bartlett's test of sphericity are performed on the collected sample data to confirm whether the sample is suitable for exploratory factor analysis. The value of the KMO statistic lies between 0 and 1. The closer the KMO value is to 1, the stronger the correlation between the variables and the more suitable they are for factor analysis; conversely, the closer the KMO value is to 0, the weaker the correlation between the variables and the less suitable they are for factor analysis.

Overview of RFID Technology.
Perception technology, also called information collection technology, is the basis for realizing the Internet of Things. Currently, information collection mainly uses RFID tags and sensors (such as temperature sensors, sound sensors, vibration sensors, and pressure sensors), GPS, and cameras to complete the perception, identification, and collection of information and the control facilities for IoT applications. The perception technology used in the mobile system is RFID (Radio Frequency Identification), a communication technology that identifies a specific target without visual or mechanical contact. The radio signal is carried in an electromagnetic radio frequency field, and data are sent to the tagged object so that it can be identified and monitored. Certain tags are powered by the electromagnetic field emitted by the reader during identification, without the need for a battery; other tags have their own power source and actively emit radio waves modulated onto a radio frequency electromagnetic field. Tags store data electronically and can be identified within a range of a few meters.
Unlike barcodes, the radio frequency tag does not have to appear on the surface of the identified object and can also be embedded in the object being monitored. According to the technology and structure used, readers can be divided into read-only or read/write devices, which play the role of controlling and processing information in the RFID system. The reader usually consists of a transceiver, a connector, an interface, and a control unit. The tag and the reader usually use half-duplex communication to exchange data. In actual operation, functions such as data collection and management, object recognition, and long-distance transmission can also be completed through Ethernet or WLAN. The information carrier of the RFID system is the transponder. When the reader sends information, it first encodes it, then loads it onto a signal of a specific frequency, and then transmits it through the antenna. The electronic tag receives this pulse signal, and a specific circuit on its chip formats, decodes, and decrypts it and then judges what kind of command is being requested based on the result.

The Impact of Edge Computing on Customers' Continued Choice to Use Knowledge Payment Platforms.
In this scenario, the overall architecture of the edge computing system is divided into three parts: the edge device serves as the collection module, using web crawler technology combined with a web page download module in the Python environment to complete data collection; the data processing module uses data processing and data integration tools to send the data to the cloud computing center; and the data storage module stores the data in pre-reserved data tables through the MySQL database system. The data processing can be done by the edge device, or it can be performed by the cloud computing center after the data are uploaded to the cloud. Here, in order to maximize the advantages of the edge device, the edge device is chosen to complete the data processing while capturing the data and then send the processed data to the cloud.
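The collect-process-store pipeline described above can be sketched as three small functions. For a self-contained example, sqlite3 stands in for the MySQL store and a local HTML string stands in for a crawled page; the table and field names are illustrative assumptions.

```python
import re
import sqlite3

def collect(html):
    """Collection module: extract course titles from raw page markup."""
    return re.findall(r"<h2>(.*?)</h2>", html)

def process(titles):
    """Edge processing module: clean and deduplicate before upload."""
    seen, out = set(), []
    for t in titles:
        t = t.strip()
        if t and t not in seen:
            seen.add(t)
            out.append(t)
    return out

def store(conn, titles):
    """Storage module: write cleaned records into a pre-created table."""
    conn.execute("CREATE TABLE IF NOT EXISTS course (title TEXT)")
    conn.executemany("INSERT INTO course (title) VALUES (?)",
                     [(t,) for t in titles])
    conn.commit()

page = "<h2> Course A </h2><h2>Course B</h2><h2>Course A</h2>"
conn = sqlite3.connect(":memory:")
store(conn, process(collect(page)))
rows = [r[0] for r in conn.execute("SELECT title FROM course")]
# rows == ["Course A", "Course B"]
```

Performing `process` on the edge device before upload is what the design above means by maximizing the advantage of the edge: only cleaned, deduplicated records travel to the cloud.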
After the edge computing-based system is completed, user satisfaction is analyzed by questionnaire, and the variable characteristics are statistically analyzed.
The specific questionnaire recovery situation is shown in Table 8.
In order to explore faster acquisition of platform information, improve users' satisfaction with the platform, and encourage users to continue using the knowledge payment platform, the variable characteristics are designed accordingly. The sample size required by each statistical method differs, and studies using structural equation models must also pay attention to sample capacity. Maximum likelihood estimation is the most commonly used method in structural equation modeling, and 100 samples is generally taken as the lower limit. A total of 514 questionnaires were collected for this article, far exceeding the minimum sample size for structural equation modeling. In addition, the maximum likelihood estimation method can be used only when the measured variables are continuous and follow a multivariate normal distribution. Researchers usually use skewness and kurtosis to check whether the sample data conform to a normal distribution: when the absolute values of skewness and kurtosis are both less than 2, the sample data are considered to follow a normal distribution, and maximum likelihood estimation can be used for data analysis. This paper uses SPSS 24.0 to measure the skewness and kurtosis of the sample data.
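The |skewness| < 2 and |kurtosis| < 2 screen described above can be sketched in a few lines of pure Python. One caveat: SPSS reports bias-corrected sample skewness and excess kurtosis, while the version below uses simple population moments; the sample values are made up for illustration.

```python
import math

def skewness(xs):
    """Third standardized moment (population formula; normal ~ 0)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (population formula; normal ~ 0)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    return sum((x - mean) ** 4 for x in xs) / (n * sd ** 4) - 3.0

def roughly_normal(xs):
    """The rule-of-thumb screen used before maximum likelihood estimation."""
    return abs(skewness(xs)) < 2 and abs(excess_kurtosis(xs)) < 2

sample = [3, 4, 4, 5, 5, 5, 6, 6, 7]   # illustrative Likert-style scores
```

This symmetric toy sample has skewness 0 and excess kurtosis −0.75, so it passes the screen and maximum likelihood estimation would be considered acceptable.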
SPSS software: SPSS was the world's first statistical package to use a graphical menu-driven interface. Its most prominent features are an extremely friendly operation interface and attractive output. It displays almost all functions in a unified, standardized interface, uses Windows dialogs to expose its data management and analysis methods, and presents function options in dialog boxes. Users need only basic Windows skills and a command of statistical analysis principles to use the software for specific research work. SPSS uses an Excel-like spreadsheet to input and manage data; its data interface is general, and data can easily be read from other databases. Its statistical procedures include the commonly used and mature ones, which fully meet the needs of nonstatisticians. The output is attractive and stored in a dedicated SPO format, which can be exported to HTML and text formats. For users familiar with the programming operation mode of older versions, SPSS also provides a syntax generation window: the user selects options in the menus and presses the "Paste" button to automatically generate a standard SPSS program, a great convenience for intermediate and advanced users.
SPSS for Windows is a combined software package that integrates data entry, sorting, and analysis functions. Users can select modules according to their actual needs and computer capacity, reducing the demand on hard disk space, which is conducive to the popularization of the software. The basic functions of SPSS include data management, statistical analysis, chart analysis, and output management. Its statistical analysis procedures include descriptive statistics, mean comparison, general linear models, correlation analysis, regression analysis, log-linear models, cluster analysis, data reduction, survival analysis, time series analysis, and multiple response, among others. Regression analysis alone includes linear regression, curve estimation, logistic regression, probit regression, weighted estimation, two-stage least squares, nonlinear regression, and other procedures, and each procedure allows users to choose different methods and parameters. SPSS also has a dedicated plotting system that can draw various graphics from data.
The results are shown in Table 9. The absolute values of the skewness and kurtosis of all variables are less than 2, which satisfies the normality requirement and thus the requirements of structural equation analysis. As shown in the table, the standardized path coefficient from expectation confirmation to user satisfaction is 0.45 with significance coefficient P less than 0.01, and the standardized path coefficient from expectation confirmation to perceived usefulness is 0.80 with P less than 0.01, indicating that the platform constructed in this article has a positive effect on perceived usefulness and user satisfaction.

Conclusions
The experimental results show that the continuous use behavior analysis of the edge computing-based knowledge payment platform under the mobile information system proposed in this paper outperforms the traditional knowledge platform, with more comprehensive statistical indicators. The article adds an explanation of the computing task offloading strategy based on the firework algorithm, mobile edge computing, and an overview of RFID technology. In simplifying the algorithm, the optimal task allocation ratio is solved according to the computing and communication capabilities of the MEC equipment. This article uses the questionnaire survey method and the comparative experiment method to analyze the rotated component matrix. The experimental results show that 98 respondents are under the age of 18, accounting for 19.1% of the total; 201 are aged 18-29, accounting for 39.1%; 142 are aged 30-39, accounting for 27.6%; and 73 are over 40 years old, accounting for 14.2%. The data show that the sample is mainly concentrated in the 18-29 age group, followed by the 30-39 age group; the sample skews young. The shortcomings of this article are as follows: (1) the amount of data collected in the sample is relatively limited; in future research, the experimental sample can be expanded to improve the credibility of the research results; (2) the firework algorithm designed in this paper did not use separate control variables in the process of simplifying the algorithm; although this did not affect the actual experimental results, the reliability of the algorithm should be studied more rigorously in future research.

Data Availability
No data were used to support this study.

Conflicts of Interest
The authors declare that there are no conflicts of interest with any financial organizations regarding the material reported in this manuscript.