Silvano Dragonetti

Artificial Intelligence (AI) vs. Oppenheimer

Updated: Aug 23, 2023

The term "Artificial Intelligence" (AI) evokes mixed feelings in many people. Some see AI as offering endless possibilities for increasing productivity, solving previously unsolvable problems and making a great contribution to a better world. With AI, we could all benefit from reduced workloads, better resource allocation and cures for diseases like cancer. The reason seems simple: with an artificial intelligence, human negative attributes or goals such as egotism or profit-seeking could, at best, be eliminated, allowing pure logic to rule. A synthetic intelligence never gets tired and does not need to sleep, and scales along with computing power. Even if it can do something "only" as well as the most intelligent humans, it will therefore progress faster.


For others, however, the picture looks rather different: they fear existential threats, a loss of purpose or even the destruction of humanity. These fears need not resemble the legendary action film "Terminator", in which machines fight against humans. We are already seeing the destabilising effects of "fake news" and the like on social media, which undermine one of the most important human abilities: cooperation. This superpower is what enables us, for example, to engage in international trade or to share the road with other road users. If we are manipulated with information whose truth we can no longer determine, we increasingly live in our own realities (or echo chambers) and risk forgetting how to deal with other points of view.


The Paperclip Problem & Missing Alignment


Many experts in the field have even called for a pause in the development of such artificial intelligences, specifically large language models like ChatGPT. The great challenge is to design these systems in the interest of people and our planet (alignment). A popular thought experiment on this is the Paperclip Problem, which illustrates the danger of insufficiently controlled AI: an AI tasked with producing paperclips becomes so efficient that it consumes all of the Earth's resources to do so. Without appropriate safety mechanisms and ethical rules, the AI could take its task literally, with unintended, catastrophic consequences in the pursuit of maximising paperclip production, up to and including the extinction of human life.


[Image: the Paperclip Problem, generated with Midjourney]
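
For readers who want a more concrete picture, here is a deliberately over-simplified Python sketch of the thought experiment (everything in it, names and numbers alike, is invented purely for illustration). The point is simply that an objective taken literally rewards the agent for consuming everything, because nothing in the objective itself tells it to hold anything back; that missing constraint is exactly what "alignment" is supposed to supply.

```python
# A deliberately over-simplified sketch of the Paperclip Problem.
# All names and numbers are invented for illustration; this is not how
# real AI systems are built, only a picture of a literally interpreted goal.

def naive_paperclip_maximiser(resource_units: float, clips_per_unit: float = 10.0) -> float:
    """Optimises one metric (paperclip count) and nothing else."""
    paperclips = resource_units * clips_per_unit
    leftover = 0.0  # the objective never asked to hold anything back
    print(f"naive agent:   {paperclips:,.0f} paperclips, {leftover} resources left for everything else")
    return paperclips

def constrained_paperclip_maximiser(resource_units: float, clips_per_unit: float = 10.0,
                                    reserved_share: float = 0.9) -> float:
    """Same goal, but with an explicit constraint standing in for human interests."""
    usable = resource_units * (1.0 - reserved_share)
    paperclips = usable * clips_per_unit
    print(f"aligned agent: {paperclips:,.0f} paperclips, {resource_units - usable} resources left for everything else")
    return paperclips

if __name__ == "__main__":
    earth = 1_000.0  # abstract "resource units"
    naive_paperclip_maximiser(earth)        # scores highest on its own metric...
    constrained_paperclip_maximiser(earth)  # ...the constraint is what alignment has to add
```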

Proponents of a pause argue that alignment is practically impossible at the current rate of development. Just remember how quickly social media grew without the problem of moderation ever being solved.


The new atomic bomb?


And now for something completely different: I recently watched the blockbuster biopic "Oppenheimer" at the cinema. Apart from being another stroke of genius by Christopher Nolan, this epic about the building of the first atomic bomb got me thinking. With both the atomic bomb and a possible superintelligence, there is the fear that the decisive technological leap could fall into the hands of bad actors. That idea maps directly onto the current debate about AI and makes one thing clear: nobody wants to lose the race.


Of course, the world is not black and white, and it is necessary to consider and understand both perspectives. Anyone who has spent some time with AI tools such as ChatGPT will quickly realise that we are rapidly heading towards a paradigm shift that will shape our future. Whatever the outcome of this development, it affects us all.


For this reason, I have decided to take a slightly more active role (at least by my standards) in this change and enrol in the CAS Digital Publisher & AI Writer programme with Text Academy, which starts in October. I am aware that such training will probably only scratch the surface, but I am looking forward to broadening my horizons, gaining new skills that a future employer may value, and sharpening my understanding of what to expect.


How do you see AI developing? What impact does it have on your life?


