Study spotlights AI’s role in evaluating job prestige, value


New research on how artificial intelligence (AI) evaluates the prestige and social value of occupations has been unveiled. The study, published by the International Labour Organisation (ILO), sheds light on the potential risks of using such methods for sociological and occupational research.

The paper, ‘A Technological Construction of Society: Comparing GPT-4 and Human Respondents for Occupational Evaluation’, compares evaluations of occupations made by GPT-4 (a type of Large Language Model (LLM) AI that can recognise and generate text) with those of a high-quality survey.

The paper is co-authored by the ILO’s Paweł Gmyrek, Christoph Lutz of the Norwegian Business School, and Gemma Newlands of the Oxford Internet Institute. The Guardian gathered that occupational evaluation captures people’s perceptions of occupations in society; the researchers used the most universally applicable occupational classification – the ILO’s International Standard Classification of Occupations (ISCO-08) – to organise jobs into clearly defined groups according to their tasks and duties.

The human ratings were subsequently compared with such algorithmic views to understand how closely the AI system was able to predict human opinions, and whether its way of perceiving human views aligned with particular demographic groups.

The Guardian gathered that this “algorithmic understanding” of general human opinions could allow AI to be used for occupational research, with potential benefits including efficiency, cost-effectiveness, speed, and accuracy in capturing general tendencies, although some obstacles need to be overcome.

However, the study found that the AI model underestimated, in comparison with the human evaluators, the prestige and social value given to some illicit or traditionally stigmatised occupations.

The paper cautions that current LLMs tend to primarily reflect the opinions of Western, educated, industrialised, rich and democratic (WEIRD) populations, which are a global demographic minority, but which have produced the majority of data on which such AI models have been trained.

Therefore, it said that while such models could be a helpful complementary research tool – for example, in processing large amounts of unstructured text, voice and image inputs – they carry a serious risk of omitting the views of demographic minorities or vulnerable groups.

The researchers argued that these limitations should be carefully considered in applying AI systems to the world of work, for example when providing career advice or conducting algorithmic performance evaluations.
