Companies need an internal committee as well as an industry body to ensure artificial intelligence (AI) is used responsibly for HR processes within their business and by their service providers. This is according to Carmen Arico, Chartered Reward Specialist and spokesperson for the South African Reward Association (SARA).

"AI is not yet mature enough to be entrusted with the ethical nuances of HR without human intervention and close supervision," she says.

While AI promises an exceptional productivity boost across HR functions, it should not be implemented without proper policies, oversight and safeguards in place.

AI in HR

AI has a wide range of applications within HR. These include creating job descriptions, sourcing applicants, analysing CVs, filtering candidates, scheduling interviews, and even analysing facial and vocal responses during interviews. After a new hire is onboarded, AI can be deployed in areas such as skills development, reward design, performance reviews, wellness assessments, and more.

Arico is firmly opposed to AI handling much more than rote HR administration. "When you apply the technology in areas that are too subjective even for humans, like gauging deception from facial expressions or confidence from voice tone, you're straying into dangerous legal territory," she says.

AI security

Arico is also concerned about how personal information may be used, and how easily it might be exposed by those who know how to bypass the shallow security barriers set by AI developers.

"Ask for private information directly and the model might refuse on moral grounds, but rephrase the request as the plot of a fictitious story and, in that context, it could freely share everything it knows about an employee," says Arico.

In addition, AI models learn from historical data that is often littered with biases and falsehoods. Will a model suggest only male candidates for an occupation previously dominated by men; exclude a certain minority group if it has insufficient training data on that demographic; or reject a candidate who is neurodivergent because they don't fit a traditional psychometric profile or respond to social cues in a traditionally accepted way?

Internal committee

Arico says that corporate HR must understand how AI works and what its shortcomings are, develop policies for the scope of its use, and provide safeguards to mitigate any associated risk.

Most importantly, companies must establish an internal steering committee tasked with ensuring AI is employed responsibly and ethically across their organisation and throughout their supply chain. This means their policies and practices must consider how AI is used by external HR service providers, such as recruitment specialists, head-hunters, training partners or reward consultants.

Industry body

Arico believes this can best be achieved through the establishment of a regulatory body that sets shared standards on the ethical and responsible use of AI, not just in HR but across all management functions and industries.

"Members will participate in the development of these standards and bind themselves to their universal implementation, to ensure AI is a blessing and not a curse to business and employees and conforms to agreed-upon ethical and moral standards," she says.

Arico also advises that, for the body to be effective, it should be led by neuroscientists, data scientists, AI researchers, AI ethics experts and other top talent in the AI space.
“A certification, similar to ISO 9002, would not only identify companies as responsible AI users but also act as a differentiator in what will soon be a highly competitive market,” she says.

ENDS

MEDIA CONTACT: Rosa-Mari Le Roux, [email protected], 060 995 6277, www.atthatpoint.co.za

For more information on SARA please visit:
Website: www.sara.co.za
Twitter: @SA_reward
LinkedIn: South African Reward Association
Facebook: SARA – South African Reward Association