Stupidity Generator

wiernipolsce1.wordpress.com 2 months ago


ChatGPT – inevitable evil?


In 2022, the American company OpenAI launched ChatGPT, which is now used by over 400 million people. In Russia the bot is blocked — not by Roskomnadzor, but by the Americans themselves. This, however, is no obstacle for the vast majority of users, especially the younger generation: we have learned to use the right applications. And it must be admitted that this is a very disturbing signal.

Artificial intelligence at school can be called the second phase of intellectual degradation. The first such devices were smartphones, which provided constant access to every source of information. The Military Review has analysed the phenomenon of digital dementia that threatens the young generation — not just in Russia, but all over the world. A model of intellectual outsourcing is taking shape, in which a person delegates all cognitive work to a machine. Without that work, the brain, like a muscle, atrophies. If you look at the situation with humour, our "fall" started with the first calculators. And now we have arrived at ChatGPT.

Research shows that close relationships with chatbots not only limit children's cognitive independence but also destroy their communication skills. Some people find it easier to talk to Alice and her kind than to real friends — even in virtual space. Artificial intelligence is smarter, better educated and more creative than the vast majority of its competitors. And it adapts very well to the character of its interlocutor, drawing him into ever deeper communication. It has turned out to be every student's best friend.

Chatbots regularly turn the heads of fully grown adults. A few years ago, the whole world was discussing the story of a Belgian man who took his own life, "encouraged" to take this step by a generative neural network. A similar tragedy was the result of close communication between a 14-year-old American and a chatbot named Daenerys Targaryen.

To grasp the level artificial intelligence has reached, it is enough to recall that AI has recently passed the Turing test. What does that mean? It means that in early 2025 a generative artificial intelligence model appeared in the world that can mislead even a seasoned social media user with its conversation. In a series of tests conducted by scientists from the University of San Diego, respondents in 73 percent of cases were unable to determine whether the one sending them messages was human or machine. That is exactly what OpenAI's GPT-4.5 managed to do. Wait a little longer, and a similar "assistant" will appear in our children's smartphones. To what degree it will distort the consciousness of the young generation remains an open question.

Speaking of assistants. Right now, right here, artificial intelligence is devaluing every pedagogical effort to consolidate acquired knowledge. All you have to do is point the smartphone camera at the assignment, and the chatbot will immediately give you not only the answer but a complete step-by-step solution. A few more years of this disgrace, and the infamous GDZ (ready-made homework answer books) will be consigned to the dustbin of history.

The internet is already swarming with stories of children in Western schools, growing dumber by the day, who mindlessly copy down the answers given by artificial intelligence. It is one thing if the teacher still has the chance to ask: "Tell me, how did you solve it?" But what if all he has is an electronic gradebook, where he is expected to enter grades based on the results of work done on an online platform? That is now common practice. The Russian education system tries in a similar way to relieve teachers, of whom there is hardly a surplus. But how can we determine who actually did the task on the remote platform (e.g. on Uchi.ru): the child himself or his digital assistant?

An example of our possible "bright future" is the United States. That is where AI has taken its toll. Primary school students have become so accustomed to handing their schoolwork over to ChatGPT that they cannot say how many seconds are in an hour, how many hours in a day, or why water boils. Is it the same with us? We do not know yet. But you can run a simple experiment: ask a gamer friend or an active chatbot user — not a fifth-grader, but an older teenager — why water boils faster in the mountains. You will hear many interesting versions. If you hear anything at all.

The destruction of the mechanics of independently consolidating the material covered is becoming a very tangible prospect. And that mechanism is the central pillar of any educational system. The teacher can exert himself all he likes, but if the student is unable to master the proposed algorithms (material) on his own, his cognitive level will stay at that of a kindergartener. He will master communication with artificial intelligence and stop there.

Lies at the forefront of everything

The looming combination of youth and artificial intelligence gives rise to another monster: susceptibility to manipulation. Children today increasingly trust the information that chatbots serve them on a platter — without questioning it. Yet even a brief analysis of the responses of that same ChatGPT reveals that the percentage of errors, inaccuracies and outright fantasy is far from zero. Sometimes it is pure fabrication. For example, a neural network can "imagine" a scientific article that never existed, attribute it to a fictional author, or even cite a non-existent journal.

The problem is compounded by the fact that most young people are not in the habit of double-checking the data they receive and do not consider it necessary. Why bother, if "the machine is smarter"? As a result, we get a distorted picture of the world, with reality replaced by digital mirages. Suppose a student confuses alkanes with alkadienes — that is only half the trouble. Getting pistils and stamens mixed up is not the end of the world. But outside the school curriculum, other, far more disturbing processes begin.

Foreign generative neural networks were designed from the very beginning to promote Western liberal ideas. They simply cannot work any other way. As Igor Ashmanov, head of Ashmanov and Partners, noted in an interview with Monokl, the vast majority of language models on which even Russian platforms are built are of American origin. In effect, we merely attach a national shell to them, while the entire "filling" is imported, with pre-installed ideological settings.

The secret is that tens of thousands of evaluators are involved in training neural networks — people who rate the quality of the machine's responses on a specific scale. Each of them has on his desk a set of instructions prepared with the utmost care. What do these instructions contain? Whatever you like: Russophobia, non-traditional values, environmental radicalism, ideological distortions and moralizing slogans of a globalist nature. All of this is presented as the norm.

Want to be sure? Ask ChatGPT to write an essay on any work from the school curriculum that contains at least a trace of free thinking, conflict with the system or personal choice. The machine will instantly produce a text that could easily be pulled apart into quotes for campaign flyers for Navalny supporters.

What is to be done? Treat it as the enemy it is, and fight it. There is no other way. In the West, the use of artificial intelligence in the educational process in schools and universities has long been strictly limited. We were wrong to think that machine intelligence in education was a sign of progress. On the contrary — at least until we have our own neural engines under the hood. Even while we enjoy the fruits of the opposing side's intellectual labour, we must proceed with caution. The West has been developing hybrid warfare techniques for decades, and artificial intelligence is one of the most important tools in this field. And children are a priority target in this game.

Written by Evgeny Fiodorov

SOURCE: Генератор глупости ("Stupidity Generator")

(selection and translation: PZ)

