Ethics in AI

There is something that I don’t talk much about: ethics in AI. Fortunately, it has not been much of a problem, largely because I am not using AI to write for me.

But…

I did encounter an instance where there was a webinar showing how to use ChatGPT to gather contact information from virtual networking meetings, most commonly Zoom calls.

When I first read about that topic, I thought, “No, tell me that isn’t what I think it means.”

Unfortunately, the webinar was about using ChatGPT to sift through chat texts to gather contact information and whatever else you can think of. Toward the end of the webinar, I realized no one was going to bring up the elephant in the room: putting personal, private data (even just emails and LinkedIn profiles) through ChatGPT. It was uncomfortable for me, but I did say that I know of people who do not want those chat texts run through generative AI due to privacy concerns about their emails. Could the presenter speak to that?

The Debate

Thus ensued some conversation around whether it was okay to put people’s emails and LinkedIn profiles through generative AI. I don’t know where we landed, but here are some things that people mentioned:

  • LinkedIn information is already public, so it’s okay (hmmm, I don’t think LinkedIn regards that as okay).
  • I believe someone said that LinkedIn does not allow AI to scrape its site, so LinkedIn is pretty cautious about data privacy.
  • Someone said that OpenAI does not keep detailed information but instead anonymizes it, so it was okay to submit such data.
  • Someone else said that, while he could not speak to the anonymization of the data, OpenAI does keep the details.

As you can see, there are some people who have no qualms about running such data through ChatGPT because it is “so much easier to pull the information and everything was public anyway.”

Yeah, that was probably Meta’s opinion too when they did some kind of testing a long time ago.

Just because it is easier to cull data through AI doesn’t mean it is okay to do it.

My current opinion on ethics in AI

Consider this:

  • When anti-virus companies try to sell you on cleaning up personal data found on the web, they often mention emails.
  • Google “have I been pwned” and Google will add “check if your email has been compromised”.
  • When I once asked ChatGPT to help me develop code to find emails in a text file, it refused to help me.
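That last point is telling because the task itself is trivial; the refusal is about ethics, not difficulty. A minimal sketch of what such extraction looks like (the regex pattern here is my own illustrative one, not a full email validator, and the sample text is made up):

```python
import re

# Illustrative pattern for email-like strings; it is deliberately
# simple and does not attempt full RFC 5322 validation.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def find_emails(text: str) -> list[str]:
    """Return all email-like strings found in the given text."""
    return EMAIL_PATTERN.findall(text)

chat_log = "Thanks all! Reach me at jane.doe@example.com or on LinkedIn."
print(find_emails(chat_log))  # → ['jane.doe@example.com']
```

A few lines of standard-library code are enough, which is exactly why the question is never “can I?” but “should I?”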

Are you getting the message?

Your emails are regarded as private information.

For now, since the law has not settled the legalities of generative AI and its various uses, it is best to take a conservative approach and not put people’s data through generative AI, even if they type that data into virtual calls. They enter that data trusting that you will exercise judgment in handling it carefully.

There are people out there who do not want you to run those texts full of data through generative AI; please respect their wishes. I’m sure you don’t want people to publicly announce your emails, your social media, your phone number, and your physical address, otherwise known as doxxing. As a matter of fact, putting that data into generative AI may be regarded as a form of doxxing.

So, just take the conservative route and don’t do it. Unless the host of the call explicitly announces that the chat text is public, assume that you do not have permission to run the text through generative AI.
