By Andreas Charalambous and Omiros Pissarides

With the advent of ChatGPT in November 2022, discussions around artificial intelligence have intensified. Significant benefits appear to be in the pipeline in terms of productivity, the stimulation of entrepreneurship and the creation of quality jobs. At the same time, strong concerns are being raised that revolve around, among other matters, possible adverse effects on employment, social cohesion and human rights.

The aim of this article is to put forward, as food for thought, some preliminary reflections on a complex subject with many unknown aspects.

At the outset, let us examine how ChatGPT differs from Google, which is already in widespread use. Google merely offers links and access to existing, and largely unfiltered, information. ChatGPT, on the other hand, is an advanced system that uses AI to produce coherent and understandable content in the form of answers to user queries.

Such AI-based models are clearly superior to humans in writing text and creating images and sounds. They have the ability to learn, benefit from accumulated knowledge and improve continuously. Given that their use is growing quickly, there are fears that this will lead to a reduction in jobs, especially in medium-skilled occupations. However, experience to date suggests that technological breakthroughs also lead to the creation of new jobs.

Furthermore, many analysts point to the risk that AI will lead to wider inequality, with adverse social and political repercussions.

The most pressing issues, however, concern the spread of false information and the implications for privacy and human rights, to which ChatGPT contributes to a far greater extent than previous technologies.

Due to its nature, ChatGPT is expected to significantly affect countries that rely on the services sector, such as Cyprus.

Preventing technological progress through bans, as some analysts recommend, is not a realistic option. Simply stated, the evolution of history cannot be halted. It is, however, necessary to develop a strategy at a multinational level, which should include the following:

(a) Development of research and social dialogue to elucidate the effects of AI, supported by scientific advice from a special interdisciplinary group, which would also be charged with establishing an effective evaluation mechanism.

(b) Adoption of a regulatory framework that would aim to set high standards for the protection of privacy and individual rights, and to provide sufficiently deterrent penalties for the spread of fake news. The European Parliament is already moving in this direction.

(c) Promotion of international cooperation, focusing on the spread of knowledge and on raising awareness of the responsible use of AI.

(d) More education and training programmes on the use of AI, both in the workplace and in educational institutions, aiming at a paradigm shift from plain memorisation to critical thinking.

The roles of private initiative in the development of AI and of the public sector in the creation of a regulatory framework are both of pivotal importance.

Andreas Charalambous and Omiros Pissarides are economists