Like many others, I have been working with ChatGPT from OpenAI. For those who don’t know it yet, the following article is recommended:
Basically, a user can ask it anything and it will come up with more or less meaningful answers. Some of the examples are very impressive. Homework and texts can be created with it, at least partially, but so can program code.
I have the following example:
Q: Should a country only focus on defensive security in terms of espionage and cyberwar?
A: While defensive security measures are an important part of protecting a country’s systems and information from cyber attacks, it is not advisable for a country to focus solely on defensive measures. This is because, in the realm of espionage and cyberwarfare, it is often necessary to take a more proactive approach in order to gather intelligence and maintain an advantage over potential adversaries. As such, a country that focuses only on defensive measures may find itself at a disadvantage, as it may be unable to gather the information it needs to protect itself or to gain an advantage in a potential conflict. Therefore, it is important for a country to balance its focus on defensive security measures with a more proactive approach to gathering intelligence and, if necessary, launching cyber attacks against potential adversaries.
So far, so good.
The problem, however, is that the model has learned everything from texts on the Internet, and the user can rephrase the question or regenerate answers until one fits. But this also means that biases and errors from those texts are reproduced.
In self-learning systems it is also possible to manipulate results; an older example can be found here:
To get to the point: it depends on WHAT you let your model learn from, so that it does something useful.
Historically, I like to recall ELIZA (https://en.wikipedia.org/wiki/ELIZA) by Joseph Weizenbaum (https://en.wikipedia.org/wiki/Joseph_Weizenbaum) at this point. If you want to go deeper into this topic, you should read the book “Computer Power and Human Reason: From Judgment to Calculation” (https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reason):
Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum displays the author’s ambivalence towards computer technology and lays out the case that while artificial intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom.
Of course, a lot has happened since then and the state of the art is no longer comparable. Nevertheless, one should keep a critical eye on the development. Not only texts are created at the push of a button, but also images, avatars and perhaps soon videos. What that means in connection with deepfakes, for example, I cannot yet imagine. Will it soon, or in the medium term, be possible to say:
“Make me a video in which politician X takes drugs.” Or something similar. The holodeck from Star Trek also comes to mind here, although of course the technology does not go that far yet. But imagine such a technology in the wrong hands and what it could mean for fakes, misinformation and other malicious activities.
Should videos and pictures be signed in the future to prove the origin? Does the same then perhaps apply to texts? How can trust be maintained on this basis?
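To make the idea of signing content more concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not taken from the original text: it binds a signature to the exact bytes of a (stand-in) video file, so any alteration is detectable. It uses a shared secret for simplicity; real provenance schemes would use public-key signatures so that anyone can verify without holding the creator’s secret.

```python
import hashlib
import hmac

# Hypothetical sketch: a creator signs the SHA-256 hash of a media file
# with a secret key; anyone holding the key can later verify that the
# file is unchanged. The key and file contents below are placeholders.

SECRET_KEY = b"creator-secret"  # assumed placeholder, not a real key scheme

def sign_content(data: bytes) -> str:
    """Return a hex HMAC signature over the SHA-256 hash of the content."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(data: bytes, signature: str) -> bool:
    """Check that the content still matches its signature."""
    return hmac.compare_digest(sign_content(data), signature)

video = b"original video bytes"
sig = sign_content(video)
print(verify_content(video, sig))              # True: content untouched
print(verify_content(b"tampered bytes", sig))  # False: content was altered
```

The signature proves only integrity and origin relative to the key holder; it says nothing about whether the content itself is truthful, which is exactly why the trust question above remains open.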
The forecast that, for example, developers, journalists, etc. will no longer be needed in the future is hard to share. Where else will new knowledge and ideas come from? In the end, everyone still has to get the maximum benefit from these new tools. As an example, a very short excursion into the chess world: many years ago, hardly anyone thought that a computer would ever beat a chess grandmaster. But Deep Blue (https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) won a game against Kasparov in 1996 and the full match in 1997; at that time it was a question of computing capacity, not of machine learning or artificial intelligence. Today, skilled chess players play Internet games with computer support and afterwards analyze their other games (e.g. tournament games) with powerful chess engines to improve their play. The fact that such possibilities are also used for cheating shows how problematic this can be (https://www.spiegel.de/sport/schach-hans-niemann-verklagt-magnus-carlsen-wegen-betrugsvorwuerfen-a-5cc8e02f-3d5b-449c-a63d-b2dd20f0499a).
Ignoring this development would be like ignoring the Internet 20 years ago. So it can make sense to have texts or program code generated and to use them, simply because it saves time, not to mention the many other possibilities:
The new possibilities can also create completely new professions, some of which are already emerging. Conceivable to me are roles such as AI/ML-supported content manager, trainer of models for specific purposes, AI-supported technical development, automation, and building systems that support experts.
For further reading, here is another critical text on the subject which I found insightful:
Is ChatGPT a ‘virus that has been released into the wild’?