AI can ruin your reputation.
There are quite a few ways AI can ruin your reputation, so you don’t need to help it along. AI could falsely claim that you committed a crime, as plenty of search results show. Facial recognition could misidentify you. An LLM could apply discriminatory bias to lock you out of opportunities. Or it could simply hallucinate; just ask Jonathan Turley.
So don’t help it trash your reputation the way those scientists did when they submitted a research paper to a science journal. The paper was supposedly peer reviewed, but one look at it suggests otherwise.
The authors of the research paper submitted AI-generated images.
Okay, we’re talking about images, so you get my drift here. AI images still tend to be funky, although they are getting better. You have to choose your images wisely.
At least the authors disclosed that the images were AI generated, but they certainly did not “proof” them. If you want to read an article about this fiasco, read here, and man, is it a dilly. You will see the images submitted to the journal, and you will understand the brouhaha.
The article also includes tweets from scientists gobsmacked at the audacity of submitting such grossly erroneous imagery. Didn’t the authors review those images? Why would they submit those pictures?
Not only did the journal suffer reputational harm and have to retract the paper, but I imagine the authors’ reputations are ruined too. I can’t imagine any journal allowing their research to grace its pages in the future.
The lesson of this story is to always proofread AI-generated work, whether text or images, to make sure there are no hallucinations or outright false facts. Always, always, always double-check its content, its links, its images, its assertions.
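If you want to automate even a small slice of that checklist, here is a minimal sketch of what “double-check its links” can look like in practice, assuming only a Python standard-library environment. It pulls URLs out of a draft and reports whether each one responds; the function names and the sample draft text are my own illustration, not anything from the retracted paper or the article.

```python
# Illustrative link-checker sketch (hypothetical names and sample text).
import re
import urllib.error
import urllib.request

URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")

def extract_urls(text: str) -> list[str]:
    """Pull every http(s) URL out of a block of text."""
    return URL_PATTERN.findall(text)

def check_url(url: str, timeout: float = 10.0) -> tuple[str, str]:
    """Return (url, status) where status is an HTTP code or an error note."""
    request = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-checker"}
    )
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return url, str(response.status)
    except urllib.error.HTTPError as exc:
        return url, f"HTTP {exc.code}"
    except (urllib.error.URLError, TimeoutError) as exc:
        return url, f"unreachable ({exc})"

if __name__ == "__main__":
    draft = "See https://www.example.com and https://example.invalid/broken for details."
    for url, status in (check_url(u) for u in extract_urls(draft)):
        print(f"{status:>20}  {url}")
```

Of course, a 200 status only tells you the link resolves; it says nothing about whether the page actually supports the claim it was cited for. A human still has to read it, which is the whole point.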