Abstract

Against the background of the rapid development of China’s market economy, the marketing mix is constantly evolving, promoting growth across many industries. Social media marketing has built a relatively solid theoretical and practical foundation, but the continuous iteration of Internet technology and people’s rising expectations for experience make it necessary to optimize social media marketing methods. This study introduces several optimization methods for social media marketing based on deep neural networks and related algorithms. Experiments with the gradient-based back-propagation algorithm and the adaptive Adam optimization algorithm show that combining the two allows the proposed method to approach the global optimum. Because marketing accuracy is critical, we also present a precision-marketing scheme and demonstrate its effectiveness. First, a fuzzy comprehensive evaluation (FCE) model is constructed with a three-layer back-propagation neural network; then, the data input layer is designed to realize the model.

1. Introduction

Digitization has increased the importance of online marketing forms, including social media (SM) marketing [1]. Social media marketing is an influential marketing method: liking or sharing social media messages can increase audience cohesion and the reach of a message [2]. Social media has become a defining communication channel of the 21st century, enabling people to express thoughts and feelings in new ways [3]. The rise of social media marketing is not a passing fashion but a result of business digitization [4]. Analyses show that social media marketing activities indirectly affect satisfaction through social identity and perceived value [5]. At present, social media marketing plays an important role in business activities, and ICT support enhances its efficiency [6]. Social media analysis, however, is a complex process involving a series of intricate tasks [7]. This study therefore applies deep neural networks [8] and optimization algorithms [9]. It first analyzes the current situation of social media marketing [10] and then optimizes social media marketing methods using a deep neural network model and two optimization algorithms, SGD and Adam [11]. The experimental results are then analyzed in three parts: a gradient sensitivity analysis, compared with existing approaches, namely the finite difference, automatic differentiation, and analytical methods [12]; an FCE analysis, whose results are obtained with a three-layer back-propagation neural network (BPNN) [13] under the FCE model [14] and interpreted level by level; and a detailed error analysis of the Adam-based method, from which conclusions are drawn [15].

2. Current Situation of Social Media Marketing

2.1. Concept of Social Media

Social media refers to Internet platforms for content production and exchange built on user relationships. The technologies that allow people to write, participate, evaluate, discuss, and communicate make social media a set of tools and platforms through which individuals exchange views, ideas, experiences, and opinions, mainly in the form of blogs, forums, and posts. With the rise of the Internet, social media has exploded; it has become a primary tool for people to communicate, express their views, and share their experiences. People may browse casually, but once a website carries shocking news or attention-grabbing content, social media immediately becomes an important channel of news dissemination. Two factors are essential: broad participation and automation; without them, social media does not exist. Social media also relies on the development of Web 2.0: if the Internet could not offer richer capabilities to its users, social media would lose its public foundation and technical support.

2.2. Advantages of Social Media Marketing

Social media is characterized by a low entry threshold, high speed, and low cost, so it can play an important role in business marketing: any company can spread information and run marketing campaigns through social media, where content is rapidly propagated through comments and shares. Compared with traditional marketing, social media marketing is essentially two-way communication; it lets people express their opinions and share content, and in Weibo marketing, for example, a company can interact intensively with the public through language, emotion, and other forms of commercial communication. In addition, its long-term cost is low, especially compared with high-quality traditional media, so social media can increase marketing impact at modest expense.

Using the interactive and participatory nature of social media, a company can publish advertisements, product introductions, and other related information on it. Users can choose content according to their own interests and communicate directly with the enterprise. This helps users understand the brand or the enterprise’s work, attracting consumers’ attention and mobilizing their purchasing power.

2.3. Disadvantages of Social Media Marketing

Social media marketing has its advantages, but there are also problems. Precisely because it is easy to use, existing companies often pay too little attention to managing the resulting information [16]; they extract little insight from social media data, and some content marketing methods merely increase activity for short-term profit. Many companies do not segment consumers, even though social media is well suited to reaching specific potential consumer groups, and they have yet to develop dedicated social media marketing strategies. With the development of network technology, people’s lives have shifted to the Internet and the mobile Internet. However, while we enjoy the convenience of the Internet, its rapid growth also brings information explosion: the information available to us (goods, news, and more) grows exponentially, and quickly mining the information useful to us from these huge data has become an urgent problem. The concept of online precision marketing arises in response. Different industries and environments require completely different marketing strategies, and it is difficult to serve diverse marketing needs while achieving the overall promotional goal of high quality, smaller price differences, and high consumer participation. To better analyze the effect of an enterprise’s marketing mix, optimizing social media marketing is therefore a promising direction.

3. Optimization of Deep Neural Network and Evolutionary Algorithm

Because of the outstanding performance of neural networks in many fields, more and more researchers have turned to the optimization of neural network parameters [17]. Early work applied traditional optimization algorithms, such as the Newton and quasi-Newton methods [18]. However, computing and inverting the Hessian matrix costs time and space, and as the number of network parameters grows, Newton-type optimization becomes unsuitable for neural networks [19]. At present, most neural network optimization methods are gradient-based. Below, we introduce several of the most popular optimization algorithms in detail.

3.1. SGD and Adam

Stochastic gradient descent (SGD) [20]. Today, SGD usually refers to mini-batch gradient descent, the first optimization method proposed for neural networks. In essence, each training step computes the gradient not over the whole data set or a single sample but over a small batch randomly drawn from the data set, and uses it to update the weights [21]. The batch size is usually 32, 64, 128, 256, etc., depending on the size of the data set. Through continuous iterative training, good weight parameters are finally obtained. The weight update is

$w_{t+1} = w_t - \eta g_t,$

where $\eta$ is the learning rate, which controls the step size of the gradient update for each batch, and $g_t$ is the batch gradient at the current $t$-th iteration. This is the basic form of the gradient-based back-propagation algorithm. SGD also has disadvantages: (1) it is difficult to select an appropriate learning rate, and all parameters share the same learning rate; (2) it easily converges to local optima and, in some cases, stalls at saddle points.

To address these defects, a momentum algorithm was proposed. It imitates the concept of momentum in physics, replacing the raw gradient with an exponentially accumulated velocity. The weight update becomes

$v_t = \psi v_{t-1} + \eta g_t, \qquad w_{t+1} = w_t - v_t,$

where $\psi$ is the momentum factor, which controls the contribution of the accumulated velocity. With momentum, in the early stage of descent the update direction agrees with the previous update, so a larger momentum factor accelerates convergence; in the middle and late stages, when the objective value oscillates and the gradient tends to 0, the momentum term helps the parameters jump out of such traps, suppresses oscillation, and accelerates convergence.
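As a minimal sketch of the update rules above (the function and variable names are ours, not the paper’s), the following Python code applies SGD with momentum to a one-dimensional quadratic; setting psi to 0 recovers plain SGD:

```python
def sgd_momentum_step(w, grad, velocity, lr=0.05, psi=0.9):
    """v_t = psi * v_{t-1} + lr * g_t ;  w_{t+1} = w_t - v_t."""
    velocity = psi * velocity + lr * grad
    return w - velocity, velocity

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w, v = 0.0, 0.0
for _ in range(200):
    w, v = sgd_momentum_step(w, 2.0 * (w - 3.0), v)
# w is now very close to the minimizer 3.0
```

On this convex toy problem the iterates spiral into the minimum; the momentum factor 0.9 and learning rate 0.05 are conventional illustrative values, not tuned settings from the study.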

Adam optimization algorithm. Using moment estimates of the gradient, Adam adjusts the learning rate of each network parameter individually. Through bias correction, it keeps the effective step size within a certain range at every iteration, making the parameter updates more stable. The update formulas are

$m_t = \mu m_{t-1} + (1-\mu) g_t, \qquad v_t = \nu v_{t-1} + (1-\nu) g_t^2,$

$\hat{m}_t = \frac{m_t}{1-\mu^t}, \qquad \hat{v}_t = \frac{v_t}{1-\nu^t}, \qquad w_{t+1} = w_t - \eta \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.$

Here, $\mu$ and $\nu$ are decay rates that balance the two moment terms. The Adam algorithm thus places a dynamic constraint on the learning rate and keeps it within a certain range: the step size is adapted while the influence of past gradients is taken into account. Computing a different adaptive learning rate for each parameter suits most open optimization problems, whether on large data sets or in high-dimensional spaces, and is arguably the most broadly applicable choice for neural networks. However, the algorithms above are all gradient-based. As the number of network layers increases, exploding and vanishing gradients can occur during back-propagation, which makes training the network model difficult. At the same time, the initialization of the network model also has an important influence; at present, a suitable set of initialization parameters can generally be found only through repeated trials.
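A minimal numeric sketch of these moment updates (standard Adam with our own illustrative names and hyperparameters, not the paper’s exact notation) on a two-dimensional quadratic:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, mu=0.9, nu=0.999, eps=1e-8):
    """One Adam update with bias-corrected moment estimates."""
    m = mu * m + (1 - mu) * grad          # first moment: running mean of gradients
    v = nu * v + (1 - nu) * grad ** 2     # second moment: running mean of squares
    m_hat = m / (1 - mu ** t)             # bias correction for the warm-up phase
    v_hat = v / (1 - nu ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([5.0, -4.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 3001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
# w is now close to the minimizer (0, 0)
```

Note how the per-coordinate division by $\sqrt{\hat{v}_t}$ gives each parameter its own effective learning rate, which is exactly the adaptivity described above.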

3.2. FCE Model under BPNN

A fuzzy comprehensive evaluation (FCE) index system is the basis of comprehensive evaluation: whether the evaluation indexes are suitable directly affects its accuracy. In constructing the evaluation indexes, this study draws extensively on industry data and related expert opinions. To better analyze the effect of the enterprise marketing mix, a multi-level evaluation index system (EIS) is established, comprising 4 main indicators (financial benefit, consumer benefit, internal operation, and learning and growth), 13 secondary indicators, and 33 tertiary indicators, as listed in Table 1.

Step 1. Build the judgment matrix. Combined with the actual survey results and index values, the weight coefficients are determined; after pairwise comparison on the rating scale, the judgment matrix is obtained.

Step 2. Rank the indexes of each layer and calculate the consistency index. The eigenvalues and eigenvectors of the judgment matrix JM are calculated and sorted, where λmax denotes the maximum eigenvalue.

Finally, λmax and the weight vector ωt are calculated by the sum-product method. The consistency of the judgment matrix JM constructed by the enterprise marketing mix decision-makers is then tested. The consistency index is

$CI = \frac{\lambda_{\max} - n}{n - 1},$

where $n$ is the order of JM.

The consistency ratio CR is defined as

$CR = \frac{CI}{RI},$

where RI is the average random consistency index; the judgment matrix is considered acceptably consistent when CR < 0.1.
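The consistency check can be sketched as follows. The 4×4 judgment matrix below is purely illustrative, not the study’s actual expert data, and RI = 0.90 is the standard random index for a matrix of order 4:

```python
import numpy as np

# Hypothetical pairwise-comparison (judgment) matrix JM for the four
# main indicators; entries are illustrative reciprocal ratios.
JM = np.array([
    [1.0, 2.0, 4.0, 3.0],
    [0.5, 1.0, 2.0, 2.0],
    [0.25, 0.5, 1.0, 1.0],
    [1/3, 0.5, 1.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(JM)
k = np.argmax(eigvals.real)
lam_max = eigvals.real[k]                  # maximum eigenvalue λmax
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                   # normalized weight vector ω

n = JM.shape[0]
CI = (lam_max - n) / (n - 1)               # consistency index
RI = 0.90                                  # random index for n = 4
CR = CI / RI                               # consistency ratio; CR < 0.1 passes
```

For a perfectly consistent reciprocal matrix λmax equals n and CI is 0; the small inconsistency in this example keeps CR well below the 0.1 threshold.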

Our method defines the FCE model on a three-layer neural network. The model takes the characteristics of the commercial marketing integration strategy as input and the FCE value of the product as output. The BPNN is shown in Figure 1 and is described in detail below.

The number of input-layer nodes is determined mainly by the problems to be solved in evaluating the enterprise marketing mix strategy; the inputs and outputs of this layer are represented by the following formulas:

The function of the middle layer is to compute the membership degree of each input index to each evaluation level; with one node per evaluation index, the middle layer needs as many nodes as there are indexes.

The membership functions are defined in formulas (11) and (12).

The middle layer is defined by formulas (13) and (14), whose outputs represent the membership function value and the membership degree of the evaluation level, respectively.

The output layer of the FCE model takes the enterprise marketing mix strategy as the input vector and derives the corresponding evaluation vector over the evaluation hierarchy.
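A minimal sketch of such a three-layer evaluator follows; the layer sizes and the untrained random weights are our illustrative assumptions, standing in for the trained parameters and the membership computations of formulas (11)–(14):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 13, 8, 4   # e.g. 13 secondary indicators -> 4 grades (assumed)

# Random weights stand in for trained BPNN parameters.
W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fce_forward(x):
    """Input layer -> middle (membership) layer -> output evaluation vector."""
    h = sigmoid(x @ W1)          # membership-like activations of the indexes
    y = sigmoid(h @ W2)          # raw score per evaluation grade
    return y / y.sum()           # normalize into a fuzzy evaluation vector

grade = fce_forward(rng.uniform(size=n_in))   # one normalized indicator vector
```

The output is a probability-like vector over the evaluation grades, which is the form the subsequent hierarchy-level analysis consumes.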

Different industries and environments require completely different marketing strategies. To serve diverse marketing needs while achieving the overall promotional goal, high quality, smaller price differences, and high consumer participation are necessary. Moreover, the model integrates IT to manage the marketing process and to support the proposed business framework.

FCE is implemented by the following process:

Step 1. Define the index classification.

Step 2. Define the index content of each layer.

The main indicators are taken as an example to illustrate the construction of the matrix. First, each indicator Ei in the EIS is evaluated, and δij is set to the membership degree of indicator Ei at the jth evaluation level. The result for indicator Ei can then be expressed as a fuzzy set Fi, that is, a fuzzy subset of the evaluation set R:

$F_i = (\delta_{i1}, \delta_{i2}, \ldots, \delta_{ir}).$

Taking the membership degrees of each single-factor evaluation set as a row, the following membership matrix can be established:
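Given the weight vector and this membership matrix, the comprehensive evaluation reduces to a weighted aggregation. The numbers below are illustrative, not the study’s survey data:

```python
import numpy as np

# Hypothetical membership matrix R: rows = 4 main indicators,
# columns = 4 evaluation grades (e.g. excellent, good, fair, poor).
R = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.3, 0.4, 0.2, 0.1],
    [0.2, 0.3, 0.3, 0.2],
    [0.4, 0.3, 0.2, 0.1],
])
w = np.array([0.4, 0.3, 0.2, 0.1])   # e.g. AHP-derived indicator weights

B = w @ R                 # weighted-average fuzzy operator M(·, +)
B = B / B.sum()           # normalize the comprehensive evaluation vector
best = int(B.argmax())    # grade with maximal membership wins
```

With these illustrative numbers the first grade receives the highest aggregated membership, so the evaluated object would be rated at that level.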

3.3. Spatiotemporal Data Clustering

The massive data are normalized and analyzed by K-means clustering [22]. M1 and M2 are simplified business data sets; the ordered data are processed by the neural network to obtain accurate spatiotemporal data [23].

An artificial neural network simulating biological neurons can realize a parallel computing model for big data. As shown in Figure 2, the symbols denote the control states of the three layers, which map between the input state and the hidden state.

3.3.1. Data Input Layer Design

Samples are extracted from m1 and m2 and randomly divided into training and testing sets, fully considering the balance between them; this improves the accuracy of precision marketing.

In the process of normalized clustering, m1 and m2 are the inputs for precision-marketing decisions. The normalized values lie in [0.2, 0.8] and are computed by the following formula:

In the micro-marketing classification, the sum of squared errors is used as the criterion function: the spatial partition is adjusted according to the overall difference of the components to obtain a new partition and a corresponding decrease of the criterion, iterating until the function value barely changes. Here, the user purchase frequency and the average quantity of goods purchased (the m1 and m2 dimensions) enter the error-sum-of-squares criterion function as follows.
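A plausible min-max form of the normalization into [0.2, 0.8] (the paper’s exact formula is not reproduced in the text, so this is a sketch of the standard scaling) is:

```python
import numpy as np

def normalize(x, lo=0.2, hi=0.8):
    """Min-max scale a feature column into [lo, hi] before clustering."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min())

# Illustrative raw purchase-frequency values for feature m1.
m1 = normalize([3, 12, 7, 25, 1])
```

Scaling both features into the same bounded range keeps the Euclidean distances used by K-means from being dominated by whichever raw feature happens to have the larger magnitude.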

3.3.2. K-Means Data Clustering Process

The similarity is calculated according to the mean of the objects in the m1 and m2 clusters. The specific algorithm flow is as follows:

Step 1. Using the Euclidean distance in equation (1), randomly assign all objects to k non-empty clusters.

Step 2. Calculate the mean of the m1 cluster and the m2 cluster, and use each mean to represent the corresponding cluster.

Step 3. Assign each object to the nearest cluster according to its distance from the m1 and m2 cluster centers.

Step 4. Return to Step 2 and recalculate the mean of each cluster; repeat this process until the error-sum-of-squares criterion function satisfies formula (3).

In the data clustering stage, the commercial marketing index set X is input into K-means, and data samples close to each other in the region are treated as similar. The user’s shopping frequency m1 and the average number of goods purchased m2 control the error-sum-of-squares criterion function, and the simplified clustering result of the commercial marketing data set is obtained, as shown in Figure 3.

According to Figure 3, the blue dots are the original commercial marketing data set, the green dots are the original data center set, and the red dots are the clustering result for the user’s shopping frequency m1 and the average number of purchases m2. The number of aggregated data points decreases significantly compared with the original blue points.
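The four steps above can be sketched directly; the two synthetic user groups in (m1, m2) space below are illustrative stand-ins for the normalized marketing data:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means with Euclidean distance, following Steps 1-4 above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # initial means
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)        # assign each object to nearest mean
        for j in range(k):                   # recompute each non-empty cluster mean
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Two synthetic user groups in (m1 shopping frequency, m2 avg quantity) space.
X = np.vstack([rng.normal([0.3, 0.3], 0.03, size=(50, 2)),
               rng.normal([0.7, 0.7], 0.03, size=(50, 2))])
labels, centers = kmeans(X, k=2)
```

On well-separated groups like these, the assignments converge after a few iterations and each final center sits at its group’s mean, mirroring the red cluster points of Figure 3.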

4. Result Analysis

4.1. Gradient Sensitivity Analysis

The basic principle of sensitivity analysis is to remove or perturb a component’s value and observe the change in the decision result; components causing large changes are the important ones. Sensitivity analysis can only answer which features affect the classification of an input, not more basic questions such as why that classification was produced. A common sensitivity analysis method, occlusion, covers different parts of an image with a gray block to observe the impact on the classification label. Its drawback is that the size, color, and position of the block are hard to choose, and different parameter choices may yield different interpretations. Sample-based sensitivity analysis applies small perturbations to a training sample; the converged parameters then change, and the derivative of the parameter change with respect to the perturbation, called the influence function of the sample, is obtained. This method can also be applied to generating adversarial examples: adding imperceptible perturbations to a few samples with large influence functions is enough to disturb the predictions for other samples. RISE is a black-box method: it probes the model with many randomly masked versions of the input image, collects the corresponding outputs, and estimates each pixel’s importance empirically; its results are better than occlusion, Grad-CAM, and LIME. FIDO is a model-diagnosis framework for computing and visualizing classifier feature importance: it marginalizes the occluded region and conditions a generative model on the unoccluded part of the image to sample counterfactual inputs that change or preserve the classifier’s behavior. Using a strong conditional generative model to produce the saliency map better identifies relevant and concentrated pixels.

The saliency map method perturbs pixels, observes the resulting model probabilities, and computes pixel gradients to obtain importance scores. The LRP method has been widely used in computer vision tasks. It is a pixel-wise decomposition method: through back-propagation along the graph structure, the prediction score is redistributed down to the input level, and the redistribution follows a conservation principle. Its core idea is to decompose each neuron’s activation according to the contributions of its inputs, realized by a first-order Taylor expansion at a root point; the difficulty of deep Taylor decomposition lies in selecting the root points. The method was first used to interpret the predictions of layered networks and later extended to convolutional neural networks. Gradient-based heat maps drawn at the final layer emphasize pixel importance, which is equivalent to using the gradient to weight activation values under specific conditions; Grad-CAM produces a class heat map while preserving the original network structure. To address the saturation problem, researchers proposed DeepLIFT, which propagates contribution scores step by step; on MNIST and other data sets, DeepLIFT outperforms plain gradient methods. Attribution-based methods should satisfy the sensitivity and implementation-invariance axioms. The integrated gradients method proposed in the literature has strong theoretical justification, but it still does not resolve the interaction between input features and the logic the network adopts; deconvolution networks and guided back-propagation violate the sensitivity axiom, as do DeepLIFT and LRP in certain settings, while SmoothGrad is a simple way to obtain cleaner gradient sensitivity maps. General interpretation methods such as gradient maps, guided back-propagation, and LRP cannot even interpret a linear model correctly (which can be regarded as a minimal deep model), let alone a deep neural network. Based on an analysis of the linear model, the proposed PatternNet and PatternAttribution methods complete the theory for linear classifiers, improve the understanding of deep neural networks theoretically, qualitatively, and quantitatively, and require only one back-propagation pass, which supports real-time visual interpretation of model decisions. PatternNet estimates the correct direction of the signal, improving the visualizations of deconvolution networks and guided back-propagation; PatternAttribution extends deep Taylor decomposition to learn the root points from data. It is worth noting that interpretation methods themselves can be attacked, producing misleading explanations.
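The core idea of gradient sensitivity, perturb an input feature and watch the output move, can be checked against the analytic gradient on a toy model. The weights and input below are arbitrary illustrations, not a model from the study:

```python
import numpy as np

w = np.array([0.5, -2.0, 1.0])            # toy "model": f(x) = sigmoid(w . x)

def f(x):
    return 1.0 / (1.0 + np.exp(-(w @ x)))

x = np.array([1.0, 0.5, -1.0])

# Perturbation-based sensitivity: central finite difference per feature.
eps = 1e-6
num_grad = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                     for e in np.eye(3)])

# Analytic gradient of the sigmoid output for comparison.
p = f(x)
ana_grad = p * (1 - p) * w

importance = np.abs(ana_grad)             # saliency: magnitude of sensitivity
```

The two gradients agree to numerical precision, and the feature with the largest absolute weight (the second one here) dominates the saliency, which is exactly what a saliency map visualizes per pixel.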

4.2. FCE Result Analysis

The two-level FCE results shown in Table 1 are obtained by analyzing the three-level data. Combining the three-level data used to evaluate the integrated marketing system with the aggregated two-level FCE results yields the first-level FCE results, and after weight analysis the final FCE results are obtained. Importing 600 final result data samples into the BPNN for training brings the model closer to the complex mapping between the two samples in Table 2.

An actual brand data case is used to verify the role of the model in marketing strategy. Collecting online big data directly and effectively avoids the influence of human factors, and visualization technology turns the data into pictures that make the marketing effect easier to see.

The number of positive comments forwarded for different products varies with time, as shown in Figure 4. Under the influence of the marketing strategy, the positive evaluation rates of brands 1, 2, and 5 are obviously higher than those of the other products. The forwarded positive comments of all six brands gradually decrease over time and approach 0 by the fifth day, but the decline pattern differs by brand: brand 2 fell sharply after the first day and then stayed relatively stable, brand 3 rose on the second day and began to fall on the third, and brand 4 shows a steady daily downward trend. In summary, the differences between brands may be caused by each brand’s audience and customer groups. Figure 5 shows that the forwarding volume of positive comments is directly proportional to brand influence, and product optimization needs widely disseminated comments.

More than 10 million data samples of positive comments, reposts, and likes were averaged and standardized. Based on the proportional coefficients μL, μC, μF, and μCF, the consumer influence trend chart of the corporate marketing mix strategy was drawn. Combining Figures 4, 5, and 6, it can be seen that consumer influence is not closely related to the number of comments or followers in the key periods, nor to the frequency of comments. Therefore, the semantic information in comments needs further mining.

Figure 7 shows the review trends of the three brands’ products. The evaluations of these three brands are mostly positive; despite some negative vocabulary, the reviews have a positive impact on product optimization. Figure 8 shows the impact of our strategy on review sentiment and product optimization.

4.3. Error Analysis

In Section 3.3, the algorithm extracts the characteristic data and obtains the error of the mean of the sales data of the corresponding subsegment. The feature data extracted during the algorithm run, together with their segments, are used to train the neural network to improve the accuracy of the output; the output data are then used as the input data of the next-level neural network, as shown in Tables 3 and 4.

By applying the error feedback principle to adjust the network weights, the calculation results show that the sum of errors between the network model output and the training sample values is less than the expected threshold, and the final actual error is very small. When the characteristics of the data at t = 10 are extracted, the main extracted data of the set are as shown in Figure 9.

5. Conclusion

This study presents an analysis model based on an artificial neural network. The gradient-based back-propagation algorithm and evolutionary algorithms are biologically inspired global optimization approaches proposed in recent years. A three-layer BP neural network forms the basis of the fuzzy comprehensive evaluation model of marketing. In addition, we designed a normalized clustering scheme to handle massive data, Euclidean-distance classification, and the construction of an adaptive precision-marketing objective function. Relevant experiments confirm the effectiveness of the model, solve the problems of low spatial correlation and initial data redundancy in precision marketing, and analyze the impact of decision-making on consumers.

Data Availability

The experimental data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding this work.

Acknowledgments

There was no specific funding to support this research.