For some reason, today's news that ChatGPT teaches people how to make drugs, commit crimes, and do other nasty things, and that Europol is calling for careful monitoring of every use of the system, did not surprise me.
Now Europol, the European Union's law enforcement agency, has detailed how the model can be misused for more nefarious purposes. In fact, people are already using it to carry out illegal activities, the cops claim.
"The impact these types of models might have on the work of law enforcement can already be anticipated," Europol stated in its report [PDF]. "Criminals are typically quick to exploit new technologies and were fast seen coming up with concrete criminal exploitations, providing the first practical examples mere weeks after the public release of ChatGPT."
Although ChatGPT is better at refusing to comply with potentially harmful requests, users have found ways around OpenAI's content filter system. Some have made it spit out instructions on how to create a pipe bomb or crack cocaine, for example, and netizens can simply ask it for step-by-step guidance on how to commit crimes.
https://www.theregister.com/2023/03/28/chatgpt_europol_crime_report/

Human nature is such that people will not use new technologies solely with good intentions; there will always be those interested in using ChatGPT for things like theft or fraud.
ChatGPT was trained on a limited set of openly available data, which means it does not know anything that could not be found by searching the internet, although such a search might take more time and effort.
It is too early to rejoice at the creation of miracle systems.
Maybe it is too early, or maybe the time has come. The strength of ChatGPT is that a quantitative increase in the number of parameters (relative to earlier models) produced a qualitative leap: the language model learned to find non-trivial analogies and to see patterns hidden from a superficial glance, where they would seem to be absent. This is great progress; you could even call it a revolution.
As for people's demand for privacy on the internet: with the spread of AI, there will very soon be complete control over everyone who tries to work with it. Accordingly, the freedom that everyone here dreams of never existed and never will.
Curiously, shortly after the explosive success of the public testing of ChatGPT, all sorts of interesting language models began to appear online that can be run locally on a regular home computer. Recent examples include the leak of Facebook's LLaMA language model, as well as the Alpaca model (the same LLaMA, fine-tuned by researchers at Stanford). There are other language models in the public domain, some of which can be run locally even on a Raspberry Pi.

I think that soon anyone who wants one will have, on a home computer or smartphone, a local copy of a language model fine-tuned to their individual characteristics and preferences. And everyone will face a choice: rely on their natural intelligence alone, or augment it with a customized "cyber exoskeleton" that can multiply their productivity without losing quality. It's not an easy choice.
Returning to the topic of conversation: Goldman Sachs has predicted that the development of generative artificial intelligence could lead to the complete automation of 25% of jobs in developed economies. The bank's analysts expect that artificial intelligence will be able to fully automate 300 million (or one-fifth of) jobs in the US and Europe.