Ethical Use of Personal Attributes in AI Technology

Artificial intelligence systems collect data from multiple sources in order to create new content, and some of that data may be collected without informed consent. This question arose with a recently released feature of OpenAI’s ChatGPT. “Sky,” an AI-generated voice, was found to closely resemble the voice of actress Scarlett Johansson. Johansson had been approached by OpenAI’s CEO, Sam Altman, to contribute her voice as one of the voice models for ChatGPT, and she declined. After the release of Sky, however, the actress was notified by family and friends that the voice resembled hers. OpenAI subsequently suspended Sky, and Johansson pursued legal action.

Ethical issues in the use of AI include the potential for harm, bias, and discrimination, as well as concerns about anonymity, confidentiality, voluntary participation, and consent.

AI systems can collect and use personal information, which poses a privacy risk. The algorithms behind AI rely on enormous amounts of data that may include sensitive personal details. Are AI models transparent with the users who interact with them? Do individuals have any control over their own data? Could their data be used to spread bias or misinformation?

AI voice cloning has already been used for malicious purposes. Scammers need only a few seconds of a person’s voice to create a realistic simulation. In the “Grandparent Scam,” a caller impersonates a family member and claims to be in distress, needing money wired to make bail or to pay for a tow truck and ride share after an accident.

The prevalence of social media videos and of personal information made public on the web feeds this type of activity. Protect your accounts and limit the amount of information you share on these platforms.

Resources:
Scarlett Johansson’s ChatGPT Face-off Confirms Our Fears About AI