
Importance of maintaining the human touch when incorporating AI

Technology has quickly become an indispensable part of our lives. Following the invention of the internet and the smartphone, ChatGPT has become the leading representative of disruptive innovation this decade.

In essence, ChatGPT is a generative chatbot powered by artificial intelligence (AI) that can learn, process, and resolve complex questions through its underlying neural network model. Interestingly, recent reports showed that ChatGPT was able to pass both a Wharton MBA exam and the US medical licensing exam.

Thanks to its ground-breaking capabilities in deep learning and generating conversational content, organisations have started applying ChatGPT to augment, if not replace, humans in business activities such as writing, performing research, and driving customer engagement. Considering these transformative functions and benefits, tech giants like Google and Microsoft joined OpenAI, the developer of ChatGPT, in the competition by launching similar AI chatbots in early 2023.

Despite the increasing prevalence of AI chatbots, their adoption in the workplace is not without limitations:

Inaccurate information may undermine knowledge flow and job performance – A recent article in The Financial Times noted that ChatGPT tends to provide “plausible” answers, rather than the truth, to users’ questions. An Australian economist asked it, “What is the most cited economic paper of all time?”. ChatGPT responded with “A Theory of Economic History”, a paper that does not exist but was invented from heavily cited words. The neural networks behind ChatGPT and similar AI tools operate as large language models. These models encode, rather than quote directly, the information they are trained on. The output is therefore a compressed, processed version of the original information, which may introduce blurriness and bias. Over-reliance on such inaccurate outputs without human judgement could undermine knowledge flow and affect job performance in the long run.

Imperfect algorithms may create unethical working cultures – With the aim of maximising the accuracy and precision of its output, ChatGPT is designed to encode a tremendous amount of information to improve its algorithmic performance. In the early stages of development, the chatbot required humans to help filter out so-called toxic data. This looks reasonable at first glance, but OpenAI was reported to have recruited people to screen out harmful text and imagery. Such prolonged, repeated exposure to disturbing content could put these workers at higher risk of psychological dysfunction and trauma.

Increasing investment may induce people’s fear of job loss – A recent survey showed that millennials are 43% more likely than other age groups to worry that their jobs will be replaced by ChatGPT. AI could indeed take over certain tasks from the human workforce. Nevertheless, these tools are susceptible to noticeable mistakes such as misleading content and plagiarism. Errors induced by flawed algorithms could also appear in AI-enabled technology adopted across different industries. Such deficiencies could make organisations cautious about replacing people’s jobs with technology. On the other hand, it has also been found that customers crave social interaction with humans more than with computerised service representatives. Given the supervisory role and authenticity of humans, AI technology may be more likely to replace tasks than jobs. People whose tasks are augmented by technology can be reskilled or upskilled to manage the tools and take on higher-level work.

You can read more about Microsoft’s AI-powered search tool here.