1 Assistant, Industrial University of Tyumen, Russia, https://orcid.org/0000-0003-1292-2750

Introduction

The growth in the volume, dynamism, complexity and connectivity of relationships in social networks requires the development of intelligent systems and mathematical models for forecasting fake news in social networks (see, for example, Tretyakov et al., 2018; Golovatskaya, 2019). Without them, it is difficult to resist digital and media misinformation (Domagoj & Volarević, 2018; Allen et al., 2020). For example, the Google query "fake news" returns more than a billion pages, and the query "fake news science article" about 361 million.

There is already a science of fake news (Lazer, Baum, Benkler et al., 2018). The problem of forecasting becomes all the more relevant the easier it is to exchange "viral" and anonymous messages on social networks. Because of this complexity, monitoring of social networks and identification of fake-related risks and actors are still poorly implemented.

Theoretical bases

Social media is part of our society, but can we trust it as a source of news, and how do we distinguish lies from the output of legitimate news aggregators?

Much of what circulates, especially in social networks, may be untrue. False information consists of news, stories or hoaxes created to intentionally misinform or deceive readers. Typically, such stories are created to influence people's opinions, advance a political agenda or cause confusion, and they can be profitable for online publishers. False information can mislead people by posing as verified news or by using names and addresses similar to those of authoritative news aggregators (Sukhodolov & Bychkova, 2017).

"Fake news", or simply "fakes", refers to news or other materials that are deliberately false and intended to manipulate the reader. Although the concept of fakes has existed for a long time, in recent years it has become a major problem because of the ease of distribution in social networks and on online platforms.

It is important to recognize fake news and to prevent its distribution, rather than amplify it with the sharing capabilities of social networks. Several social media platforms have responded to the increase in the number of fakes by changing their news feeds, marking news as false (contested) or using other approaches. Google has also made changes to address this issue.

Traditionally, we receive news from reliable sources, from journalists and media outlets that are obliged to comply with strict rules. Many now receive news from social networks, where it is often difficult to assess whether a source deserves trust. Information overload and a general lack of media literacy have also contributed to the increase in the number of fake news items. Sites, social networks and blogs can play a big role in spreading fakes.

Social networks allow anyone to potentially reach a wide audience. False information can be a profitable business, earning money for publishers through advertising, various media channels and viral marketing. The more clicks a fake gets, the more the advertisers and sites earn. For many publishers, social networks are an ideal platform for increasing web traffic.

The spread of fakes that leads to violence, unrest, resistance to the authorities and inter-group hostility can be equated with an information weapon of mass destruction. After all, the daily audience of Yandex.News alone exceeds 6 million readers. Fakes are able to provoke something resembling a destructive "information terrorist attack" (Kaziev et al., 2017).

Unfortunately, there are so far no adequate measures against fakes, including legal ones and international cooperation (Vosoughi, Roy & Aral, 2018). Mechanisms are needed to filter "fake news" in the media, especially for the media giants that abuse its distribution (Aral & Eckles, 2019). Work is also under way on artificial intelligence and expert systems capable of assessing the authenticity of distributed news, for example on Facebook. Google plans to cut "fake generators" off from the distribution of advertising revenue (AdSense) and to supply search results with a "FactCheck" mark (with fact checking). There are also ideas for attracting crowdfunding sites and blockchain technologies to combat fakes.

How can a fake be recognized? Fakes are often framed too well, too extremely. The following "recognizers" of fake news can be applied quite simply.

1. Note the domain and URL: trusted sites have familiar names and standard extensions, such as .com or .edu.
2. Read the "About Us" section: its structure and "excessive" vocabulary may indicate an unreliable source.
3. Take a look at the quotes: good stories quote several experts to get different points of view on the problem.
4. Look at who said it: can you check the correctness of the quotes, and is the source authoritative?
5. Check the comments: on social networks they can warn you when the story does not match the title.
6. Do a reverse image search: if an image used in a news story appears on other sites (on other topics), this is a sign that the story may be fake.
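The first of these checks can be partly automated. The snippet below is a toy illustration, not part of the study: a few URL heuristics in Python, where the whitelist of extensions and the thresholds are our own assumptions.

```python
from urllib.parse import urlparse

# Illustrative whitelist of "familiar" extensions; an assumption, not a standard.
TRUSTED_TLDS = {"com", "org", "edu", "gov"}

def url_red_flags(url: str) -> list:
    """Return a list of simple heuristic warnings for a news URL."""
    flags = []
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1] if "." in host else ""
    if tld not in TRUSTED_TLDS:
        flags.append("unusual top-level domain: ." + tld)
    if host.count("-") >= 2:
        flags.append("many hyphens in domain (possible imitation of a known brand)")
    if any(ch.isdigit() for ch in host):
        flags.append("digits in domain name")
    return flags
```

Such heuristics only flag candidates for manual review; they cannot by themselves establish that a story is fake.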

Social networks are distributors of fakes. Platforms such as Facebook and Twitter make it easy to share current news without spending time on critically evaluating it. Readers themselves are not inclined to critically evaluate the news, spreading fakes on social networks at great speed (Persily, 2017).

Methodology

The work uses methods of system analysis and synthesis, critical comparative analysis, mathematical modeling and others.

Methodologies and strategies to combat fakes follow the principles below.

1. Remember the problem: popular news aggregators and social networks compete for your attention and can sometimes manipulate the viewer.
2. Think critically: if a news item seems too good (or too bad) to be true, it probably is. Most fakes exploit our desire to confirm our beliefs, positive or negative.
3. Compare facts with reliable sources: find time to evaluate the news using authoritative sources, including media libraries. Although authoritative news sources may themselves sometimes be mistaken, they carefully check the facts to confirm their news.
4. Stop the spread of fakes: everyone can contribute to the non-proliferation of fakes in social networks, by mail, etc.

The fight against fake news in social networks comes down to understanding the goals and mechanics of the platforms. They make money, so they often run ads tailored to our interests and search history. This is targeted advertising, including neuromarketing processes and methods.

This is especially important for those who have business accounts in social networks. Knowing that news first passes through a filter built from previously collected data, we can act more responsibly. If you represent a business or a marketing platform on a social network, it is important that your messages correspond to the brand and create positive relations with customers.

Fake news on social networks seems inevitable. The best way to fight it is to maintain a healthy curiosity about what appears in the feed, understand how platforms curate content, and stay "suspicious". Used consciously and competently, social networks are a powerful tool for business, including small private businesses.

Methods for identifying fake news are grouped by news content and by social context (Shu et al., 2021; Stella, Ferrara & Domenico, 2018). The first class of methods distinguishes fakes by the quality of the vocabulary (inconsistency, non-uniqueness, etc.), the trust rating, the age of the domain, etc.; the second class identifies them by their social context.
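As a minimal sketch of the first (content-based) class, the following Python function computes a few crude vocabulary features of the kind mentioned above. The feature names and the choice of features are illustrative assumptions of ours, not the methods of the cited works.

```python
import re

def content_features(text: str) -> dict:
    """Compute toy content-based features often correlated with fakes:
    low lexical diversity (non-uniqueness), heavy punctuation, shouting."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    # Type-token ratio: share of distinct words among all words.
    diversity = len(set(words)) / len(words) if words else 0.0
    exclamations = text.count("!")
    shouting = len(re.findall(r"\b[A-Z]{3,}\b", text))  # all-caps words
    return {"lexical_diversity": diversity,
            "exclamations": exclamations,
            "shouting_words": shouting}
```

In a real system such features would feed a trained classifier together with trust-rating and domain-age signals.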

Figure 1 shows the distribution of television news consumption in comparison with computer (desktop) news consumption for ages 18-55. The y-axis is the average daily TV news consumption, the x-axis is the average daily news consumption on the desktop (Feldman, 2007). The figure gives a monthly picture of groups corresponding to different ranges of web news consumption; the average consumption of TV news and the size of each group (as a percentage of all participants) are estimated.


Fig. 1. Picture of television news consumption versus desktop news consumption by random samples (Feldman, 2007)

Results

Many fakes that are dangerous from the point of view of public influence are transmitted from one visitor to another by various mechanisms. Abstracting from secondary details and considering the system as a whole, we can limit ourselves to two main mechanisms: horizontal and vertical transmission. Horizontal transmission to susceptible users occurs through intergroup (direct or indirect) contact with "contagious" fake-spreading accounts. Vertical transmission is direct and intra-group.

It is assumed that all horizontal transmissions occur among mature accounts, and that there is a period T during which new users do not participate in the horizontal transfer of fakes (a waiting period).

We need functions that specify the number of distributors and the number of susceptible users at time t, as well as their values at time t − T. We assume that distributors and susceptible users are removed at constant rates R and r, respectively, and that immature (newly created) accounts are removed at rates G and g, respectively. The model does not allow removed distributors to participate in the reproduction or transmission of fakes. We assume that susceptible adults create new accounts at a constant rate b, and distributors at a rate B.

The vertical mechanism is introduced into the model by assuming that, of the accounts spawned by distributors, a proportion p is susceptible and a proportion q = 1 − p are distributors, respectively. The inflow of susceptible users is given by

e^(−gT) [b S(t − T) + pB I(t − T)],

and the inflow of distributors as

qB e^(−GT) I(t − T),

where the factors

e^(−gT),

e^(−GT)

describe the survival of immature susceptible accounts and immature distributor accounts, respectively, over the waiting period T.

The dynamic equations of the model, with delays, are:

S′(t) = −r S(t) − k S(t) I(t) + e^(−gT) [b S(t − T) + pB I(t − T)],
I′(t) = −R I(t) + k S(t) I(t) + qB e^(−GT) I(t − T).

The above model can be considered as one in which S(t) and I(t) are, respectively, the numbers of susceptible users and distributors. The model retains the possibility that r ≠ R and b ≠ B. It can be assumed that r = R. At first a fake usually does not reduce an account's activity, but it is natural to suppose that after several fake cycles this happens; therefore it is interesting to consider both cases: b = B and b > B. The only nonlinearity in the model is the transmission term kS(t)I(t). For a single cycle the model can be adjusted and parametrically adapted. In the special case T = 0, the model reduces to the Cauchy problem for ordinary differential equations:

S′(t) = (b − r) S(t) + pB I(t) − k S(t) I(t),  S(0) = S₀ ≥ 0,
I′(t) = (qB − R) I(t) + k S(t) I(t),  I(0) = I₀ ≥ 0.
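The delay-free system can be integrated numerically. The sketch below is our illustration, not the authors' code: it applies classical fourth-order Runge-Kutta with parameter values chosen (as an assumption) so that r < b and R > B, the regime in which, as the analysis below shows, the trajectory settles at a positive equilibrium S∗ = (R − qB)/k, I∗ = (b − r)(R − qB)/(k(R − B)).

```python
# Illustrative parameters (an assumption): r < b, R > B.
b, r = 0.5, 0.2   # creation/removal rates of susceptible users
B, R = 0.4, 0.6   # creation/removal rates of distributors
q = 0.5           # vertical transmission fraction (p = 1 - q)
k = 0.01          # horizontal transmission coefficient

def rhs(S, I):
    """Right-hand side of the delay-free (T = 0) model."""
    dS = (b - r) * S + (1 - q) * B * I - k * S * I
    dI = (q * B - R) * I + k * S * I
    return dS, dI

def simulate(S0=80.0, I0=5.0, t_end=200.0, dt=0.05):
    """Integrate with classical RK4; return the final state (S, I)."""
    S, I = S0, I0
    for _ in range(int(t_end / dt)):
        k1 = rhs(S, I)
        k2 = rhs(S + 0.5 * dt * k1[0], I + 0.5 * dt * k1[1])
        k3 = rhs(S + 0.5 * dt * k2[0], I + 0.5 * dt * k2[1])
        k4 = rhs(S + dt * k3[0], I + dt * k3[1])
        S += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        I += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return S, I

S_star = (R - q * B) / k                        # positive equilibrium, S component
I_star = (b - r) * (R - q * B) / (k * (R - B))  # positive equilibrium, I component
```

With these values the trajectory spirals into (S∗; I∗) = (40; 60) regardless of the positive initial condition, in line with the global stability claimed for this regime.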

The results on the qualitative behavior of the system described by the model are important. Consider isolated equilibrium solutions:

(S; I) = (0; 0),  (S∗; I∗) = ((R − qB)/k; (b − r)(R − qB)/(k(R − B))),

when R ≠ B. The solution (𝑆∗; 𝐼∗) is of interest only when

R > qB and (b − r)/(R − B) > 0,

that is, when S∗ > 0 and I∗ > 0.

Similarly to (Busenberg, Cooke & Pozio, 1983), we break the results into 5 cases for p > 0, k > 0 and, depending on the values of the parameters, formulate the following statements.

Theorem 1.
Case 1. If 𝑟 < 𝑏 and 𝑅 > 𝐵, then (𝑆∗; 𝐼∗) is feasible and globally stable with respect to all solutions with initial condition 𝐼(0) > 0.
Case 2. If 𝑟 ≥ 𝑏, 𝑅 ≥ 𝐵 and at least one of the inequalities is strict, then 𝐼(𝑡) → 0 as 𝑡 → ∞, and 𝑆(𝑡) → 0 if 𝑏 < 𝑟.
Case 3. If 𝑟 = 𝑏, 𝑅 = 𝐵 and 𝑃(0) = 𝑆(0) + 𝐼(0), then 𝑃(𝑡) = 𝑃(0) for all t; 𝐼(𝑡) → 0 if 𝑝𝐵/𝑘 > 𝑃(0) or 𝐼(0) = 0, while 𝐼(𝑡) → 𝑃(0) − 𝑝𝐵/𝑘 if 𝑝𝐵/𝑘 ≤ 𝑃(0) and 𝐼(0) > 0.
Case 4. If 𝑟 < 𝑏, 𝑅 ≤ 𝐵 or 𝑟 ≥ 𝑏, 𝑅 ≤ 𝑞𝐵 < 𝐵, then 𝑆(𝑡) → 𝑝𝐵/𝑘 and 𝐼(𝑡) → ∞ (except for S = I = 0).
Case 5. If 𝑟 > 𝑏 and 𝑞𝐵 < 𝑅 < 𝐵, then (𝑆∗; 𝐼∗) is a saddle point and there is a separatrix. For initial conditions below the separatrix, 𝑆(𝑡) → 0 and 𝐼(𝑡) → 0, while for conditions above the separatrix 𝐼(𝑡) → ∞ and 𝐼(𝑡)/𝑃(𝑡) → 1 as 𝑡 → ∞.
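The case analysis of Theorem 1 can be transcribed directly into code. The helper below is an illustrative sketch of ours, not part of the paper: it returns the case number for a given parameter set, or 0 for boundary combinations the theorem does not list.

```python
def classify_case(r, b, R, B, q):
    """Map parameter values to the case numbers of Theorem 1 (0 = not covered).

    The conditions are transcribed directly from the theorem statement;
    the order of the checks resolves their overlaps."""
    if r == b and R == B:
        return 3
    if r < b and R > B:
        return 1
    if r >= b and R >= B:
        return 2
    if (r < b and R <= B) or (r >= b and R <= q * B < B):
        return 4
    if r > b and q * B < R < B:
        return 5
    return 0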

Discussion

It follows from the theorem that:
1) there are no periodic solutions;
2) there are two regimes in which the population is regulated to a constant size: in case 1 (removal of susceptible users is below their creation rate, and the opposite holds for distributors) there is a stable level of fake "infection" that is reached regardless of the initial conditions S(0), I(0) and of k, q, with no threshold or minimum community size needed for the fake to become "endemic" (characteristic of the community);
3) in case 3 the total number of users in the community remains constant, and there is a threshold condition for maintaining fakes;
4) in case 4 the community "explodes", as it does in case 5 when the thresholds on S(0), I(0) are exceeded;
5) in case 5, as the rate of vertical propagation of fakes q increases, the separatrix drops and it becomes more likely that the threshold is exceeded.

In a first approximation we can take 𝑏 = 𝑟, 𝐵 = 𝑅 (case 3 applies). The threshold condition is

𝑃(0) ≥ (1 − 𝑞)𝐵/𝑘, where q is the vertical transmission rate, and the prevalence is 1 − (1 − 𝑞)𝐵/(𝑘𝑃(0)). On the other hand, when averaging over several cycles, the value of R can exceed r. Cases 1-4 are possible with R > r and even with b = B; case 5 is not. For example, in case 1, if 𝑟 < 𝑏 = 𝐵 < 𝑅, the process persists. The system has a unique solution, as a system of differential equations with given initial conditions.
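The threshold and prevalence formulas just stated can be checked numerically. The helper below, with its name and parameter values, is our illustration, not part of the paper.

```python
def fake_prevalence(P0, q, B, k):
    """Equilibrium fraction of distributors under the case 3 threshold condition.

    Returns 0.0 when the community size P0 is below the threshold (1 - q)B/k,
    otherwise 1 - (1 - q)B/(k*P0)."""
    threshold = (1 - q) * B / k
    if P0 < threshold:
        return 0.0
    return 1 - threshold / P0
```

With q = 0.5, B = 0.4, k = 0.01 the threshold community size is 20: a community of 10 users sustains no fakes, while a community of 100 users settles at 80% prevalence.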

Conclusion

People often tend to evaluate uncritically the news that their friends share (even if only virtual friends, from a community or group) or that confirms their beliefs (even if incorrect). Fakes have accompanied important political, economic and social events of recent times, especially the COVID-19 pandemic.

In Russia, systematic research on fake news in social networks and the blogosphere is not yet carried out. This study is useful for analyzing, predicting, identifying and neutralizing generators of fake news in social networks.

Episodes of fake "infection" can be described by more complex mathematical models and mechanisms. Modeling and forecasting of fake infection in the social network and media environment must be strengthened in order to counter it.

In the model considered, the dynamics of the prevalence is very simple; it is necessary to investigate more complex models that will help develop relevant measures to prevent the spread of fakes and to neutralize the damage from them. Raising the media literacy of the readership should accompany these measures.