Tech companies are adopting three different approaches to releasing generative AI models: the cautious, the clever and the possibly crazy. Whichever approach prevails may well determine who makes the most money out of the artificial intelligence revolution, but it may also have far broader implications.

For years, tech companies have experimented with powerful generative AI models, which can almost magically conjure up text, images and code when prompted. But the Silicon Valley giants have been wary of opening up these models to the public for fear of embarrassing blowback. Google, which probably boasts the most AI expertise, has been notably cautious in giving access to its technology, although it has now released, somewhat clumsily, its own Bard model.
Last November, Meta pulled its Galactica AI service three days after launch. Users were quick to ridicule Galactica — designed to summarise academic papers and generate articles — for spewing out nonsense. Shortly afterwards, OpenAI, the San Francisco-based research company, had a storming success with the launch of its ChatGPT model. Within two months, it had been tried by 100mn users, one of the fastest-ever take-ups of a consumer internet service. That success triggered an explosion of investor interest: OpenAI has attracted a further $10bn of investment from Microsoft, at a $29bn valuation.
OpenAI was able to launch its model with relatively little embarrassment because it had built in some basic guardrails to prevent the model being maliciously exploited. Before release, OpenAI had fine-tuned the model using a technique known as reinforcement learning from human feedback (RLHF): human trainers ranked the model's responses, and those preferences were used to steer it away from generating toxic content. The millions of users who have since experimented with ChatGPT have provided priceless real-world feedback, allowing OpenAI to pursue further iterative deployment.
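To give a flavour of how RLHF works, the sketch below is a deliberately toy illustration, not OpenAI's actual pipeline. It fits a tiny reward model to hand-made human preference pairs using a Bradley-Terry loss, then reweights a simple policy towards the responses the reward model scores highly, a crude stand-in for the reinforcement-learning step. All the responses, features and numbers are illustrative assumptions.

```python
# Toy sketch of the RLHF idea described above; not OpenAI's pipeline.
# Step 1: fit a scalar reward model to human preference pairs
#         (Bradley-Terry loss, maximised by gradient ascent).
# Step 2: reweight a simple policy towards high-reward responses,
#         a crude stand-in for the reinforcement-learning step.
import math

# Hypothetical candidate responses, each with a hand-made "toxicity"
# feature (1.0 = maximally toxic). In a real system these would be
# model samples scored by a learned reward network over text.
responses = {
    "helpful answer": 0.0,
    "vague answer": 0.3,
    "toxic answer": 1.0,
}

# Human feedback: (preferred, rejected) pairs, as a labeller might rank them.
preferences = [
    ("helpful answer", "toxic answer"),
    ("helpful answer", "vague answer"),
    ("vague answer", "toxic answer"),
]

# Reward model: reward(x) = w * toxicity(x). Fit w by gradient ascent on
# the Bradley-Terry log-likelihood, sum of log sigmoid(r_chosen - r_rejected).
w, lr = 0.0, 0.5
for _ in range(200):
    for chosen, rejected in preferences:
        diff = w * (responses[chosen] - responses[rejected])
        p = 1.0 / (1.0 + math.exp(-diff))   # P(chosen preferred | w)
        w += lr * (1.0 - p) * (responses[chosen] - responses[rejected])

def reward(name: str) -> float:
    return w * responses[name]

# "Policy improvement": softmax over rewards, so the tuned model becomes
# less likely to emit the responses humans disliked.
z = sum(math.exp(reward(name)) for name in responses)
policy = {name: math.exp(reward(name)) / z for name in responses}

for name in sorted(policy, key=policy.get, reverse=True):
    print(f"{name:15s} reward={reward(name):+.2f} prob={policy[name]:.2f}")
```

Running the sketch shows the reweighted policy assigning most of its probability to the "helpful answer" and almost none to the "toxic answer", which is the qualitative effect the fine-tuning aims for at vastly larger scale.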