Here’s an artificially-intelligent system that is capable of creating fake news so convincing that its creators have decided not to release their research publicly, citing fear of potential misuse.
OpenAI, the nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, carries a high risk of malicious use, so the team wants more time to discuss the ramifications of the technological breakthrough.
If you feed the system any text (from a few words to a whole page), it generates the next few sentences as a plausible continuation. The system pushes the boundaries not only in the quality of its output but also in the wide variety of its potential uses.
“Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper,” reads OpenAI’s blog post.