Russia and China could develop AI to spread propaganda: how to stop it

Scientists believe that AI should not become too smart and propose controlling neural networks through hardware. A group of researchers from OpenAI, Stanford, and Georgetown University believes that large language models like ChatGPT could be used to spread misinformation on social networks and beyond.

Generative language models are available to everyone today, and the researchers fear that if such tools end up in the hands of propagandists, they risk becoming instruments of influence operations in the future, Vice.com writes. According to the scientists, AI tools will let propagandists resort to cheaper and more effective tactics.

It will no longer be necessary to pay an army of trolls, because neural networks will make it possible to generate convincing texts in large quantities, post them on social networks, and promote them through online media. And although language models are controlled through APIs, countries such as China and Russia have enough money to invest in developments of their own.

Beyond posts, propagandists will be able to deploy their own chatbots. In their report, the researchers mention how a chatbot helped persuade people to get vaccinated at the height of the COVID-19 pandemic. The researchers believe governments should impose restrictions on the collection of training data and create tools to control access to AI hardware, such as semiconductors. An example is the export controls the United States has imposed on China.

Without access to certain equipment and technologies, the PRC will be unable to produce the latest generation of chips, which will significantly slow the country's attempts to upgrade its neural networks. The scientists also propose restricting access to future AI models and raising their security to protect them from hacking and attacks.