DIM: Adaptively Combining User Interests Mined at Different Stages Based on Deformable Interest Model

User interest mining is widely used in the fields of personalized search and personalized recommendation. Traditional methods ignore that the formation of user interest is a process that evolves over time. This leads to the inability to accurately describe the distribution of user interest. In this paper, we propose the interest tracking model (ITM). To add timing, ITM uses the Dirichlet distribution and the multinomial distribution to describe the evolutional process of interest topics and frequent patterns, which adapts well to the evolution of user interest hidden in short texts between different time slices. In addition, it is well known that user interest is composed of long-term interest and situational interest, the latter including short-term interest and social hot topics. State-of-the-art methods simply regard the user's long-term interest as the user's final interest, which makes them unable to completely describe the user interest distribution. To solve this problem, we propose the deformable interest model (DIM), which designs an objective function to combine users' long-term interest and situational interest and more comprehensively and accurately mine user interest. Furthermore, we present the degree of deformation, which measures a subinterest's degree of influence on the final interest, and propose in DIM a real-time influence update mechanism. The mechanism adaptively updates the degree of deformation through linear iteration and reduces the interest model's dependence on training sets. We present results on a dataset of Flickr users and their uploaded information over three months, a dataset of Twitter users and their tweets over three months, and a dataset of Instagram users and their uploaded information over three months, showing that the perplexity is reduced to 0.378, the average accuracy is increased to 94%, and the average NMI is increased to 0.20, which demonstrates better interest prediction.


Introduction
User interest mining refers to establishing a user interest model by analyzing a large amount of user behavior data.
Through user models of high quality, it is possible to describe the real interest of users, making it possible to implement personalized services. In recent years, user interest mining has been widely used in the fields of personalized search and personalized recommendation.
Describing the distribution of user interest is the core of user interest mining. Psychologists believe that the formation of user interest is a process that evolves over time [1]. Therefore, tracking and describing the evolution of user interest is the biggest challenge in describing the distribution of user interest. In previous research, static topic models were often used to describe the distribution of user interest, such as the latent Dirichlet allocation (LDA) model proposed by Blei et al. in 2003 [2]. However, in real life, people's subjective intentions often change with time, and user interest continues to evolve. Static topic models can hardly meet this demand. Therefore, some studies [3][4][5][6][7][8][9][10] attempt to introduce the time dimension to track the dynamic changes of user interest. The distribution of user interest described by current dynamic topic models is a Gaussian distribution centered on the hyperparameter α of the interest distribution of the preceding time slice and cannot adapt to user interest that changes suddenly between different time slices [4].
In addition, psychologists divide user interest into long-term interest and situational interest [11][12][13]. Long-term interest refers to a relatively stable and persistent individual tendency that develops over time. Situational interest is considered a relatively passive and transient emotional state triggered by certain conditions in the environment, including users' short-term interest and social hot topics. Traditional methods of user interest mining simply regard the user's long-term interest as the user's final interest, which makes them unable to completely describe the distribution of user interest.
On this basis, it should also be considered that the influence of long-term interest, short-term interest, and hot topics on user interest is updated in real time. This variability requires that a new algorithm must have an adaptive mechanism. Keeping the influence of the three interests fixed not only ignores the possibility that the user generates a new interest but also makes the effect of interest mining strongly dependent on training sets. Once the training set is changed, the model must be retrained to update the influence of the three interests, which is undoubtedly very time consuming.
In order to solve the aforementioned problems, this paper proposes a user interest mining method based on the deformable interest model to adaptively integrate users' long-term interest and situational interest. The contributions of this paper mainly include the following aspects: (i) For tracking and describing the evolution of user interest, this paper introduces the time dimension and proposes the interest tracking model (ITM), which maps annotated words to the frequent pattern space and uses the Dirichlet distribution and the multinomial distribution to describe the evolutional process of user interest and frequent patterns between different time slices, respectively. (ii) For solving the problem of the integrity of user interest, this paper proposes the deformable interest model (DIM), which integrates users' long-term interest and situational interest and more comprehensively and accurately mines user interest. (iii) For solving the problem that the influence of long-term interest and situational interest on user interest needs to be updated in real time, DIM uses a real-time update mechanism to adaptively update the influence of long-term interest, short-term interest, and hot topics on user interest. The real-time update mechanism not only considers the possibility of interest change but also reduces the dependence of the interest model on training sets.

Related Work
Building a topic model is the primary means of mining user interest. The topic model is a language model that uses Bayesian statistics and machine learning methods to discover the underlying semantic content of unlabeled documents and uses these latent semantics to predict future characteristics of the document set. The topic models used for interest mining in the early days were static topic models whose establishment was independent of time. Blei et al. proposed a probabilistic topic model called LDA (latent Dirichlet allocation) in 2003 [2]. Because of its good mathematical foundation and flexible scalability, LDA has received wide attention and has been used in various research fields since its introduction. However, due to the semantic gap, applying LDA to sparse short texts makes it difficult to confirm the semantic consistency of words. As a result, some methods aggregated short texts into long texts to reduce inaccuracy. Other methods enriched the original data through external knowledge bases. These methods are not always effective because there may be semantic inconsistencies between the pseudo-long-text after aggregation and the original short text. Another recent line of work is the embedding topic model [14], which combines traditional topic models and word embeddings and can analyze the semantic connotation of large text sets with many long-tail and low-frequency words. However, using word embeddings to represent documents makes the features too low-level, and it is often difficult to obtain satisfactory results. In addition, LDA treats a single word as a unit, which undoubtedly reduces semantic accuracy. Wallach [15] proposed the bigram LDA model, which treats a bigram as a unit. On this basis, Wang et al. [16] proposed topical N-grams (TNG). Jähnichen et al. [17] proposed scalable generalized dynamic topic models, which used stochastic processes to introduce stronger correlations.
These methods can break the limitations of bag-of-words models to a certain extent and find common phrases and potential topics in the text, but the models are complex. In addition, association rule mining is also an effective data mining technology, such as association rule mining applied to streaming data [18,19] and to dynamic databases [20,21]. Compared with other association rule mining techniques, the frequent pattern mining model has the simplest structure.
This inspired us to introduce frequent pattern mining into static topic modeling so that topic modeling based on bag-of-words can be transformed into topic modeling based on pattern sets. Nevertheless, user interest is dynamically formed over time, and static topic models cannot satisfy this need.
In order to solve the problem that static topic models cannot change topic content over time, topic models with dynamics have also been widely studied. These include DTM [3], cDTM [6], TTM [4], and D-ETM [22]. These methods solved the problem that static topic models cannot respond in time to changes in the user's related information, but they all work in the context of long texts. Moreover, in DTM [3] and cDTM [6], the distribution of user interest comes from a Gaussian distribution centered on the hyperparameter α of the interest distribution in the last time slice and cannot adapt to sudden changes of user interest. At the same time, the Gaussian distribution and the multinomial distribution are not conjugate, which makes the model less interpretable and practical. Some recent works [23,24] found embedding representations that vary over time. Therefore, D-ETM came into being on the basis of ETM. However, how to process higher-level features on the basis of the dense vector space is the key to improving the effect of the algorithm. In addition, Liang et al. proposed a dynamic topic model called UCIT-L for short texts [25]. UCIT-L infers the user's interest based on the user's information in multiple time periods and the information of the user's followers. The disadvantage is that the amount of calculation is too large, and the degree of coincidence between the user's interest and the followers' interest is uncertain. In order to alleviate the key problem of nonconjugacy caused by latent Gaussian variables and their subsequent nonlinear transformation of count values, Linderman et al. proposed the Pólya-gamma augmentation in 2015 [5]. This approach helped to alleviate the problems with DTM [3] and cDTM [6], but it did not necessarily improve the performance.
The interest tracking model (ITM) proposed in this paper essentially solves the nonconjugacy problem caused by combining the Gaussian distribution with the multinomial distribution and accurately tracks and describes user interest as it evolves over time.
The above interest mining methods were devoted to describing the dynamic process of user interest and simply regarded the user's long-term interest as the user's final interest, ignoring the situational interest consisting of the user's short-term interest and social hot topics, which cannot completely describe the distribution of user interest [1,[11][12][13]. The deformable interest model proposed in this paper fully considers the user's real interest, adaptively integrates long-term interest, short-term interest, and current hot topics, and depicts the real process of interest formation.

Mining User Interest Based on the Deformable Interest Model
In this paper, we consider that user interest is composed of long-term interest and situational interest, the latter including short-term interest and current social hot topics. Therefore, we propose a method of mining user interest based on the deformable interest model (DIM), which is aimed at fusing the above three interests. Long-term interest is mined by the interest tracking model (ITM), short-term interest is mined by LDA-FP [2,26], and the current social hot topics are obtained from the knowledge base [27,28]. Using DIM, the three interests are adaptively combined to obtain user interest.

Problem Definition.
A social network has a set of users U = {u_1, u_2, ..., u_a, ..., u_n}, n ∈ Z; the user u_a uploads a set of pictures I = {i_a1, i_a2, ..., i_am}, m ∈ Z, where u_a denotes the a-th user. At the same time, when uploading a picture, the user adds a set of annotated words W = {w_i1, w_i2, ..., w_ia, ..., w_ip}, p ∈ Z, to the i-th picture according to their own interests, where w_ia denotes the a-th annotated word of the i-th picture. All annotated words for the pictures marked by user u_a in time slice T are denoted as d_a^T. It is known that user interest changes as time elapses, and user interest is easily changed by situational interest, including the user's short-term interest and current social hot topics. The tasks of DIM are as follows: (1) give the distribution of the user's short-term interest in the last time slice, the distribution of the user's long-term interest that changes over time, and the distribution of the current hot topics; (2) combine the user's long-term interest and situational interest, including short-term interest and current social hot topics; and (3) adaptively update the degree of deformation, which measures a subinterest's degree of influence on the final interest.
First, we process different categories of annotated words into a corpus; then, we preprocess the corpus; and finally, we mine frequent patterns from the corpus using the FP-growth algorithm [26] and establish a frequent pattern library. This pattern library is defined as C = {c_1, c_2, ..., c_n}, n ∈ Z, where each frequent pattern c_i contains np_i words and w_it is its t-th word. LDA [2] represents text by mapping it to a bag-of-words, but this method does not apply to short texts with sparsity problems, so we represent the user's annotated words by mapping words to the frequent pattern library. We consider d_a^T as the input of ITM to obtain the user's long-term interest L = (a_L1 w_L1, a_L2 w_L2, ..., a_Ln w_Ln), n ∈ Z. At the same time, we consider d_a^T as the input of LDA-FP to obtain the user's short-term interest S = (a_S1 w_S1, a_S2 w_S2, ..., a_Sn w_Sn), n ∈ Z, and obtain the current hot topics based on the knowledge base, H = (a_H1 w_H1, a_H2 w_H2, ..., a_Hn w_Hn), n ∈ Z. Each element in L, S, and H is represented by a weight coefficient (a_it, i ∈ {S, L, H}) and the corresponding labeled word (w_it, i ∈ {S, L, H}). Eventually, we consider (L, S, H) as the input of DIM to adaptively combine long-term interest, short-term interest, and current hot topics.
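To make the frequent pattern library concrete, the toy sketch below mines frequent patterns from sets of annotated words by exhaustive enumeration. The paper uses the FP-growth algorithm [26], which avoids this enumeration via an FP-tree; the corpus, support threshold, and pattern size limit here are invented for illustration:

```python
from collections import Counter
from itertools import combinations

def mine_frequent_patterns(docs, min_sup=0.5, max_size=2):
    """Return itemsets (size 1..max_size) whose support >= min_sup.

    docs: list of sets of annotated words; support is the fraction of
    documents that contain the itemset.
    """
    n = len(docs)
    counts = Counter()
    for doc in docs:
        items = sorted(doc)
        for size in range(1, max_size + 1):
            for pattern in combinations(items, size):
                counts[pattern] += 1
    return {p: c / n for p, c in counts.items() if c / n >= min_sup}

# Four toy "documents" of annotated words for one user.
docs = [{"sunset", "beach", "sea"},
        {"sunset", "beach"},
        {"city", "night"},
        {"sunset", "sea"}]
patterns = mine_frequent_patterns(docs, min_sup=0.5)
```

Each surviving pattern (e.g., `("beach", "sunset")`) then becomes one dimension of the frequent pattern library onto which a user's short text is mapped.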

Mining User Long-Term Interest Based on the Interest Tracking Model.
User interest is known to change over time, but this change does not happen suddenly; it has some continuity between time periods. ITM defines the "user-interest" vector with a first-order Markov property and considers that the user's interest distribution in the current time slice, Θ_t, stays basically around that in the last time slice, Θ_{t−1}. Therefore, we define the "user-interest" distribution Θ_t of the current time slice as

Θ_t ~ Dirichlet(α_t Θ_{t−1}),    (1)

where Z is the number of interests, α_t is the hyperparameter of the "user-interest" distribution in the current time slice, and θ_{t,z} = P(z | t) represents the interests of a user, that is, the probability that the user is interested in interest z at time t, where θ_{t,z} ≥ 0 and Σ_z θ_{t,z} = 1. Correspondingly, the latent semantics of each interest in the current time slice will also change. We confirm the latent semantics of an interest by finding the "interest-frequent pattern" distribution. Similarly, we define the "interest-frequent pattern" distribution Φ_{t,z} = {ϕ_{t,z,i}}_{i=1..C} in the current time slice as

Φ_{t,z} ~ Dirichlet(β_{t,z} Φ_{t−1,z}),    (2)

where β_{t,z} is the hyperparameter of the "interest-frequent pattern" distribution in the current time slice and ϕ_{t,z,i} = P(i | z, t) represents the trend within an interest, that is, the probability that the frequent pattern c_i belongs to interest z at time t. Based on (1) and (2), the process of generating each frequent pattern in the model is as follows: for each frequent pattern, the "user-interest" distribution Θ_t in the current time slice is determined jointly by the prior knowledge α_t in the current time slice and the "user-interest" distribution Θ_{t−1} in the previous time slice. Next, extract an interest from Θ_t, and then deduce the "interest-frequent pattern" distribution Φ_{t,z} of the current time slice according to the prior knowledge β_{t,z} in the current time slice and the "interest-frequent pattern" distribution Φ_{t−1,z} in the previous time slice.
Finally, extract a frequent pattern from the distribution Φ_{t,z} corresponding to the interest drawn from Θ_t.
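The generative process above can be sketched as follows. How the previous slice's distributions scale the Dirichlet priors, and all numeric values, are simplifying assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_patterns(theta_prev, phi_prev, alpha_t, beta_t, n_patterns):
    """One time slice of the ITM-style generative process (sketch).

    The current slice's Dirichlet priors are centred on the previous
    slice's distributions, so interest evolves smoothly between slices.
    """
    Z, C = phi_prev.shape
    theta_t = rng.dirichlet(alpha_t * theta_prev)               # "user-interest"
    phi_t = np.stack([rng.dirichlet(beta_t * phi_prev[z])       # "interest-pattern"
                      for z in range(Z)])
    draws = []
    for _ in range(n_patterns):
        z = rng.choice(Z, p=theta_t)        # extract an interest
        draws.append(rng.choice(C, p=phi_t[z]))   # extract a frequent pattern
    return theta_t, phi_t, draws

Z, C = 3, 5
theta_prev = np.full(Z, 1.0 / Z)
phi_prev = np.full((Z, C), 1.0 / C)
theta_t, phi_t, draws = generate_patterns(theta_prev, phi_prev,
                                          alpha_t=5.0, beta_t=1.0,
                                          n_patterns=20)
```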
As can be seen from the above, ITM maps annotated words to a frequent pattern set to represent the collection of annotated words of each user, which is treated as a short text. Each frequent pattern contains a set of annotated words that frequently occur together. In addition, ITM uses Θ_{t−1} and Φ_{t−1,z} of the previous time slice to correct the Dirichlet parameters in the current time slice so as to track the evolution of user interest. At the same time, ITM maintains the conjugate Dirichlet-multinomial structure.
This design reflects the mathematical nature of the evolution of user interest and makes the model interpretable. The probability model diagram for this model is shown in Figure 1.
How to infer user interest from the known frequent patterns of each user's annotated words is the purpose of constructing the interest tracking model. In this work, we estimate the parameters of ITM based on a stochastic EM algorithm [29], in which Gibbs sampling of latent topics and maximum joint likelihood estimation of parameters are alternately iterated. The ultimate goal of building ITM is to obtain the posterior probability P(Z_t | W_t, α_t, β_t). Thus, the problem of solving the posterior probability P(Z_t | W_t, α_t, β_t) is transformed into the problem of solving the joint distribution of user interests and patterns. From the definitions of the Dirichlet distribution and the multinomial distribution [2], we derive this joint distribution, where n_{t,z} is the number of patterns that have been assigned to interest z at time t, n_{t,z,i} is the number of times pattern c_i has been assigned to interest z at time t, and Γ(x) is the gamma function. Solving the parameters in this joint distribution by Gibbs sampling and maximum likelihood estimation [29] yields

ϕ_{t,z,i} = (n_{t,z,i} + β_{t,z} ϕ_{t−1,z,i}) / (n_{t,z} + β_{t,z}).
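The update for ϕ_{t,z,i} can be checked numerically. The counts and hyperparameter below are made up for illustration; note that because each row of Φ_{t−1} sums to one, each updated row also sums to one:

```python
import numpy as np

def estimate_phi(n_tzi, beta_t, phi_prev):
    """Point estimate phi_{t,z,i} = (n_{t,z,i} + beta * phi_{t-1,z,i})
                                    / (n_{t,z} + beta).

    n_tzi[z, i]    -- count of pattern i assigned to interest z at time t
    phi_prev[z, i] -- previous slice's "interest-frequent pattern" distribution
    """
    n_tz = n_tzi.sum(axis=1, keepdims=True)   # n_{t,z}
    return (n_tzi + beta_t * phi_prev) / (n_tz + beta_t)

# Two interests, two patterns; toy counts from one Gibbs sweep.
n_tzi = np.array([[3.0, 1.0],
                  [0.0, 4.0]])
phi_prev = np.array([[0.5, 0.5],
                     [0.5, 0.5]])
phi_t = estimate_phi(n_tzi, beta_t=0.01, phi_prev=phi_prev)
```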
When inferring the current interests Θ_t and trends Φ_t, ITM uses only the current data. Therefore, compared with traditional models, it not only describes the evolution of the user's long-term interest without adding latent variables but also reduces the amount of calculation and improves the calculation speed.

Adaptively Fusing User Subinterests Based on the Deformable Interest Model.
User interest is not instantaneously formed from long-term interest, short-term interest, and current hot topics but gradually changes to its final state. To describe this process of change, we regard user interest as a deformable interest model consisting of three variables: sudden interest (hot topics), short-term interest, and long-term interest. In the deformable interest model (DIM), the degree of deformation of each interest controls the deformation of the entire system. The degree of deformation of each interest is determined by the interaction of the respective interest and its similarity to user interest, ensuring that important interests have a significant impact. The essence of the deformable interest model is a spring model; the structure diagram of the spring model is shown in Figure 2 [30]. The deformable interest model can be defined as a quadruple (L, S, H, b), where L represents the interest tracking model, S represents LDA-FP [2,26], H represents the knowledge base of current hot topics [27,28], and b represents the bias value. Each submodule is represented by a tuple (a_i1 c_i1, a_i2 c_i2, ..., a_in c_in), i ∈ {L, S, H}, where the number of elements is determined by the number of interests set by the submodel, c_it (t = 1, 2, ..., n) is the t-th pattern of subinterest i, and a_it is the corresponding weight coefficient. The score of the target hypothesis is equal to the sum of the similarities between long-term interest and real interest, short-term interest and real interest, and hot topics and real interest, minus the differences between short-term interest and long-term interest and between hot topics and long-term interest, where i ∈ {S, H} and R is the user interest. R is also represented by a tuple (a_R1 w_R1, a_R2 w_R2, ..., a_Rn w_Rn).
The number of elements is determined by the total number of patterns. The weight coefficient is determined by the frequency of occurrence of the pattern, that is, a_Rj = tf_cj × idf_cj, j = 1, 2, ..., n, with tf_cj = n_j / n_tag and idf_cj = lg(n_user / (n_j^user + 1)), where n_j is the number of occurrences of the least frequent word in the pattern c_j, n_tag is the total number of words for the user, n_user is the number of users, and n_j^user is the number of users whose words include the pattern c_j. Inspired by cosine similarity [31], the similarity calculation is shown in Algorithm 1.
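A minimal sketch of the weight coefficient a_Rj and a cosine-style similarity over weight vectors (our reading of the cosine-inspired Algorithm 1; the numeric inputs are invented):

```python
import math

def pattern_weight(n_j, n_tag, n_user, n_j_user):
    """a_Rj = tf * idf with tf = n_j / n_tag and idf = lg(n_user / (n_j_user + 1))."""
    return (n_j / n_tag) * math.log10(n_user / (n_j_user + 1))

def cosine_sim(a, b):
    """Cosine similarity between two weight vectors defined over a shared
    pattern vocabulary; 1.0 means the interests are perfectly aligned."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# A pattern tagged 4 times out of the user's 40 words, present for 99 of
# 1000 users, gets weight tf * idf = 0.1 * lg(10) = 0.1.
w = pattern_weight(n_j=4, n_tag=40, n_user=1000, n_j_user=99)
# Parallel weight vectors -> similarity 1.0.
sim = cosine_sim([0.4, 0.1, 0.0], [0.2, 0.05, 0.0])
```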
The degree of deformation of the deformable interest model follows from this objective. User interest obtained using DIM is not only more comprehensive but also more interpretable in describing the interactions of the deformable interests.
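As a sketch of how the three subinterests combine, the following fuses the weight vectors L, S, and H under deformation degrees (k_1, k_2, k_3) and a bias b. The linear form is our simplified stand-in for the spring model; the paper's objective additionally penalizes the gap between situational and long-term interest, and all numbers here are invented:

```python
def fuse_interests(L, S, H, k, b=0.0):
    """Fuse long-term (L), short-term (S), and hot-topic (H) weight vectors
    into a final interest vector using deformation degrees k = (k1, k2, k3).
    Vectors are aligned over the same pattern vocabulary."""
    k1, k2, k3 = k
    raw = [k1 * l + k2 * s + k3 * h + b for l, s, h in zip(L, S, H)]
    total = sum(raw)
    return [r / total for r in raw] if total else raw   # renormalize

R = fuse_interests(L=[0.6, 0.3, 0.1],
                   S=[0.1, 0.7, 0.2],
                   H=[0.0, 0.2, 0.8],
                   k=(0.7, 0.2, 0.1))
```

With most of the deformation on long-term interest, the fused vector stays close to L while still registering the hot topic in its last component.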

Experiments
To verify the validity of the user interest mining algorithm based on the deformable interest model, this section presents three experiments. The datasets are preprocessed by, among other steps, deleting words whose document frequency is less than 5.
(1) Flickr [32]. The dataset is based on the Flickr website (http://www.Flickr.com) and contains 354,531 pieces of personal information and 2,222,379 image annotations uploaded from October to December 2012. These users are from 20 interest categories.
(2) TWEETS. Based on Twitter (https://twitter.com), we selected 30 hashtags as interest categories and sampled 253,159 tweets for three consecutive months from July to September in 2009 under these hashtags. After preprocessing, we obtained 16,753 words.
(3) Instagram. The dataset is based on the Instagram website (http://www.instagram.com) and contains 163,479 pieces of personal information and 1,048,575 image annotations uploaded from October to December 2016. These users are from 20 interest categories. The three datasets are each divided into two parts for training and testing. Among them, 10% of the annotated words of each user in the third month are used as test data, and the rest are used as training data. Table 1 gives a summary of the datasets.

Metrics.
We use the following indicators to evaluate the model: (1) Perplexity [2]. For quantitatively comparing multiple models with different hypotheses and inference mechanisms, we compute the per-pattern perplexity on the test dataset, defined as

perplexity(U_test) = exp(−Σ_{u_i ∈ U_test} log p(c_{u_i}) / Σ_{u_i ∈ U_test} N_{u_i}),

where U_test represents the users of the test dataset, N represents the number of users in the test dataset, c_{u_i} represents the set of patterns of user u_i, p(c_{u_i}) represents the generation probability of the patterns of user u_i under the proposed model, and N_{u_i} represents the total number of patterns of user u_i. The smaller the perplexity, the higher the likelihood and the better the performance of the model.
(2) Classification Accuracy [29]. One purpose of user interest modeling is to obtain the proportion of topics for each document, which provides a latent semantic representation of the user's interests. This indicator measures the accuracy and discriminability of that latent semantic representation. The classification accuracy is defined as

accuracy = (1/N) Σ_{d ∈ U_test} I(C_d = P_d),

where U_test is the users of the test dataset, N represents the number of users in the test dataset, d is the document composed of a user's annotated words, I is an indicator function, C_d is the user's actual interest category, and P_d is the predicted category. The larger the accuracy, the more accurate the latent semantic representation of user interest generated by the model and the better the model's performance.
(3) Normalized Mutual Information [33]. Perplexity is a commonly used metric for user interest models, but it does not directly measure the semantic consistency of the learned user interest. Therefore, in order to further evaluate the quality of user interest generated by the models, we use another evaluation metric, normalized mutual information (NMI), which assesses how well the predicted interest matches the actual interest. NMI is defined as

NMI(C, P) = 2 · I(C, P) / (H(C) + H(P)),

where C is the actual user interest set, P is the predicted user interest set, H(C) and H(P) are the entropies of the random variables, and I(C, P) is the mutual information between C and P. The value of NMI is between 0 and 1. The closer to 1, the more consistent the predicted interest and the actual interest; conversely, the closer to 0, the more independent they are.

Algorithm 1: The algorithm for calculating the similarity sim(A, B) between interests A and B.

(1) Parameter setting for frequent pattern mining: min_sup represents the minimum support. We set the minimum support min_sup ∈ {0.01%, 0.05%, 0.1%, 1%, 3%} [26].
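For reference, minimal pure-Python versions of the three metrics are sketched below. The toy labelings and probabilities are invented, and the 2·I/(H(C)+H(P)) normalization for NMI is one common choice, taken here as an assumption:

```python
import math
from collections import Counter

def perplexity(log_probs, n_patterns):
    """Per-pattern perplexity: exp(-sum_u log p(c_u) / sum_u N_u)."""
    return math.exp(-sum(log_probs) / sum(n_patterns))

def accuracy(actual, predicted):
    """Fraction of users whose predicted category matches the actual one."""
    return sum(c == p for c, p in zip(actual, predicted)) / len(actual)

def nmi(actual, predicted):
    """NMI = 2*I(C,P) / (H(C)+H(P)); 1 = identical labelings, 0 = independent."""
    n = len(actual)
    def entropy(labels):
        return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())
    joint = Counter(zip(actual, predicted))
    pc, pp = Counter(actual), Counter(predicted)
    mi = sum((c / n) * math.log(c * n / (pc[a] * pp[p]))
             for (a, p), c in joint.items())
    denom = entropy(actual) + entropy(predicted)
    return 2 * mi / denom if denom else 1.0

# Two test users with 3 and 5 patterns under a uniform model that assigns
# each pattern probability 0.25 -> perplexity 4.
ppl = perplexity([3 * math.log(0.25), 5 * math.log(0.25)], [3, 5])
acc = accuracy([0, 0, 1, 1], [0, 0, 1, 0])
score = nmi([0, 0, 1, 1], [1, 1, 0, 0])   # relabeled but identical clustering
```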

Results and Analysis
We then select the best-performing value by cross-validation [26].
(2) Parameter setting of interest models: based on the different datasets, for LDA and LDA-U, we consider the annotated words of each user over three months as one document; for DTM, TTM, PGMult, UCIT-L, and ITM, we set the time slices as [U_train, U_train, U_train], which represents that the same users added annotations to the uploaded images based on their own interests over three months. The above five models all set the number of interests from 5 to 185. At the same time, Gibbs sampling with 1,000 iterations is run 10 times, and the average is calculated. The Dirichlet prior parameter of the "user-interest" distribution is set as α = K/50, where K is the number of interests, and the Dirichlet prior parameter of the "interest-frequent pattern" distribution is set as β = 0.01. Figure 3 shows the perplexity on Flickr, TWEETS, and Instagram for different topic numbers. It can be seen that the perplexity of ITM is always lower than that of the LDA, LDA-U, DTM, TTM, PGMult, and UCIT-L models. In addition, since the distribution of topics in short texts is very sparse, perplexity does not increase as the number of interests increases for any model. This not only proves that mapping short texts to the frequent pattern space is effective but also proves that time series models are superior to static topic models in describing user interest over a longer period of time.
Then, we use the classification accuracy to evaluate the effectiveness of different models in describing the user's long-term interest. It can be observed from Table 2 that the classification accuracy of ITM is always higher than that of LDA, LDA-U, DTM, TTM, PGMult, and UCIT-L on the different datasets. Especially for Flickr and Instagram, since each document consists of annotation words that cover multiple topics over a long period of time, ITM, which represents each document with frequent patterns, can fit the classifier well. LDA cannot solve the problems of sparsity and interest evolution. DTM and TTM cannot solve the problem of sparsity. Compared with representing each document with frequent patterns to alleviate the sparsity problem in ITM, clustering users based on the short-text streams of multiple time periods of users and their followers, as UCIT-L does, is not effective. PGMult cannot solve the problem of interest evolution. Therefore, they are not as good as ITM in describing users' long-term interest.
We also use NMI to evaluate the semantic consistency between the user interest generated by different models and the actual interests of users. From Table 2, we can see that NMI is generally low. However, ITM is again significantly better than the other models. At the same time, the NMI of LDA, DTM, TTM, and LDA-U on TWEETS is very low, indicating that these models cannot learn good topic representations for short-text tweets. From the NMI of each model on Flickr and Instagram, PGMult does not perform as well as ITM in finding the semantic connections of discrete data. The NMI of UCIT-L on the three datasets is lower than ours, demonstrating that if the user's interest is not closely related to the followers' interest, UCIT-L's performance in mining long-term interest from short texts is limited. (1) DIM: for DIM, we set the initial deformation coefficients k_1 = 1, k_2 = 0, k_3 = 0 and the initial bias b = 0. The number of interests is set to be the same as in the other models. Figure 4 shows the perplexity on Flickr, TWEETS, and Instagram for different topic numbers. It can be seen that the perplexity of DIM is always lower than that of LDA, LDA-U, DTM, TTM, PGMult, and UCIT-L. It can be observed from Table 3 that the classification accuracy of DIM is always higher than that of LDA, LDA-U, DTM, TTM, PGMult, and UCIT-L on the different datasets and that, from the perspective of NMI, the performance of DIM is also significantly better than the other models. User interest consists of long-term interest and situational interest. Situational interest is considered to be triggered by certain conditions or stimuli in the environment; it is a relatively passive and short-lived emotional state, so situational interest comprises not only the short-term interest of users but also the current hot topics.
Models such as LDA, LDA-U, DTM, TTM, PGMult, and UCIT-L assume by default that user interest is not affected by the environment and regard users' long-term interest as the final user interest. DIM combines users' long-term interest, users' short-term interest, and current hot topics. One of the qualities with which social networks attract users is the sharing and exchange of information. This trait means that the environment greatly affects the evolution of user interest. From Tables 2 and 3, on TWEETS, the classification accuracy of DIM is greatly improved compared with ITM, while on the other two datasets, the classification accuracy of DIM is almost unchanged compared with ITM.
This is because the content of TWEETS is more susceptible to situational interest [34], especially social hot topics, whereas Flickr and Instagram have gathered more photographers, whose interests are relatively stable. Table 4 shows the accuracy of ITM, which describes long-term interest, LDA-FP, which describes short-term interest, KB, which describes social hot topics, and DIM on the different datasets. It can be seen that, on TWEETS, the tweets sent by users follow social hot topics. Therefore, DIM is a general-purpose model for describing user interest, and the user interest mined using DIM is more in line with users' real interest.
From Figures 4 and 5, it can be seen that although DIM brings a slightly higher time cost, perplexity is significantly reduced. When the model performs best (T = 185), the time spent on DIM is no more than three minutes. With the development of industrial technology, the computing power of computers has increased rapidly, and this time cost is acceptable.

Results and Analysis of Adaptive Update Performance Based on the Deformable Interest Model.
In the third experiment, adaptively adjusting the degree of deformation of subinterests based on the deformable interest model is carried out on the three datasets, and the experimental results are compared with those of static-DIM. The subinterests of the two models are produced in the same way. The DIM experimental parameter settings are the same as those in the second experiment. The remaining experimental parameters are set as follows: (1) static-DIM: the model fixes the interest weights and sets the long-term interest weight as k_1 = 0.7, the short-term interest weight as k_2 = 0.2, and the hot-topic weight as k_3 = 0.1 based on experience. Figure 6 shows the perplexity on Flickr, TWEETS, and Instagram for different topic numbers. It can be seen that the perplexity of DIM is always lower than that of static-DIM. It can be observed from Table 5 that the classification accuracy of DIM is always higher than that of static-DIM on the different datasets and that, from the perspective of NMI, the performance of DIM is also significantly better than static-DIM. We can conclude that the influence of the three subinterests is not static. It is not difficult to see that a user may take photos of social hot topics and even generate new interests. Fixed deformation degrees neither correctly describe the relationships between subinterests and the real interest, and among the subinterests themselves, nor allow user interest to be updated beyond slight changes, which is obviously unrealistic. In general, our experimental results demonstrate that the perplexity is reduced to 0.378, the average accuracy is increased to 94%, and the average NMI is increased to 0.20, which proves better interest prediction.
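A sketch of the real-time update mechanism: starting from k_1 = 1, k_2 = 0, k_3 = 0 as in our experiments, the deformation degrees are pulled by linear iteration toward each subinterest's similarity to the fused interest. The learning rate, the specific iteration rule, and the similarity values below are assumptions for illustration:

```python
def update_deformation(k, sims, lr=0.5, iters=20):
    """Linearly iterate deformation degrees k = [k1, k2, k3] toward the
    similarity of each subinterest to the fused interest, renormalizing
    so the degrees remain a convex combination."""
    k = list(k)
    for _ in range(iters):
        k = [(1 - lr) * ki + lr * si for ki, si in zip(k, sims)]
        total = sum(k)
        k = [ki / total for ki in k]
    return k

# All weight initially on long-term interest; the similarities then pull
# the weights into balance without any retraining.
k = update_deformation([1.0, 0.0, 0.0], sims=[0.6, 0.3, 0.1])
```

Because the update is a fixed linear contraction, the degrees converge geometrically to the normalized similarities, which is why no retraining pass over the corpus is needed when the training set changes.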

Conclusion
In this paper, we propose a novel method based on the deformable interest model (DIM) for modeling the evolution of user interest in dynamic social networks. We introduce the time factor and leverage the interest tracking model (ITM), which is based on a two-layer Bayesian model, to describe dynamic long-term user preferences. Compared with traditional models, it not only describes the evolution of the user's long-term interest without increasing the number of latent variables but also maps annotated words to the frequent pattern space to solve the sparsity problem of short texts. We then obtain user interest that combines long-term interest and situational interest via DIM. Compared with traditional models, DIM proposes an objective function that not only fully considers the composition of the user's real interest but also adaptively updates the influence of long-term interest, short-term interest, and hot topics on user interest. We evaluated the performance of the proposed model in terms of perplexity, accuracy, and NMI and made comparisons with state-of-the-art models. The experimental results demonstrate the effectiveness of the introduced model. This suggests that the model can be applied in the fields of image retrieval and e-commerce so that users can quickly find pictures or commodities that match their interests. It can also be used in social networks to present users with information streams that match their interests.
In future work, we intend to use the deformable interest model (DIM) to annotate areas in an image in which the user is interested. As in most previous works, how to calculate the cross-modal similarity between images and text is also a challenge. Therefore, our future work is to study this problem by extending the model proposed in this paper.

Data Availability
The data used to support the findings of this study are included within the article.

Conflicts of Interest
The authors declare that they have no conflicts of interest.