Galactica: AI tool for scientists
There’s been a LOT in the news lately about various kinds of AI: mainly the ones impacting art (Stable Diffusion, Midjourney, DALL-E 2) and ChatGPT, which impacts writing of some sort. But there was one – probably a large language model like ChatGPT – that kind of slipped under the radar: Meta’s Galactica, an AI for scientists.
According to MIT Technology Review, the AI did not last more than three days before Meta’s team had to take it down. The tool could not distinguish false information from true facts and would conjure up fake papers.
Even the scientific world is facing AI, but not quite in the same way as the rest of us. This tool was supposed to help scientists and students do research faster. At present, language models are not ready for prime time.
One question that does cross my mind is: why did Meta release this software in the state it was in? Did the team not know that the software would have difficulty discerning falsehoods? Was any testing done?
The only explanation I could come up with is the “move fast and break things” adage coined by Facebook maybe two decades ago. Instead of shooting for “perfection” (“perfection is the enemy of good”), the software world tries to move fast and throw a minimum viable product out there to see if it sticks. I would think an MVP would have to meet some level of standards, but I am not an expert in this world. So maybe the Meta team thought the software was good enough to try out in the wild.
Apparently, the scientists didn’t like it. Language models have not arrived yet.
For now, the scientists will have to hunt for research materials manually.