Introducing PaLM + RLHF: The Open Source Alternative to OpenAI's ChatGPT
OpenAI's ChatGPT, a powerful language model that can assist with tasks such as drafting emails and suggesting computer code, has been widely used by businesses and individuals alike. ChatGPT is not open source, however, which limits what some users can do with it. Now a new open source alternative has arrived in the form of PaLM + RLHF.
What is PaLM + RLHF and How Does it Work?
PaLM + RLHF, developed by Philip Wang, is a text-generating model that combines PaLM, a large language model from Google, with Reinforcement Learning from Human Feedback (RLHF). RLHF is a technique that aims to better align language models with what users want them to accomplish. It involves training a language model, then fine-tuning it on a dataset of prompts (e.g., "Explain machine learning to a six-year-old") paired with the responses human volunteers expect the model to give (e.g., "Machine learning is a form of AI..."). The volunteers then rank the model's responses from best to worst, and the rankings are used to train a "reward model" that scores candidate answers to a given prompt, preferring the ones humans ranked highest.
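To make the reward-model step concrete, here is a minimal sketch in PyTorch (the framework Wang's implementation uses). Everything here is illustrative rather than taken from the actual repository: the RewardModel class, the embedding dimension, and the random tensors standing in for encoded prompt/response pairs are all assumptions. The core idea is the pairwise ranking loss, which pushes the score of the human-preferred response above the score of the rejected one.

```python
# Illustrative sketch of reward-model training for RLHF; not code from
# the PaLM + RLHF repository. Names and dimensions are placeholders.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a (prompt, response) embedding to a single scalar reward."""
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scorer(x).squeeze(-1)  # one reward per example

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for embeddings of two responses to the same prompt, where
# human rankers preferred the first response over the second.
preferred = torch.randn(16, 128)
rejected = torch.randn(16, 128)

# Pairwise (Bradley-Terry style) ranking loss: maximize the margin by
# which the preferred response out-scores the rejected one.
optimizer.zero_grad()
loss = -torch.nn.functional.logsigmoid(
    model(preferred) - model(rejected)
).mean()
loss.backward()
optimizer.step()
```

Once trained this way, the reward model's scalar output serves as the reinforcement signal for fine-tuning the language model itself.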
While PaLM + RLHF can accomplish many of the same tasks as ChatGPT, it is not pre-trained, so using it requires significant resources: compiling gigabytes of text for the model to learn from, and finding hardware powerful enough to handle the training workload. The scale is daunting; PaLM's largest configuration has 540 billion parameters, and running a model of that size takes a dedicated machine with around eight Nvidia A100 GPUs. Cloud alternatives are also pricey: the cost of running OpenAI's text-generating GPT-3 (which has around 175 billion parameters) on a single Amazon Web Services instance has been estimated at around $87,000 per year.
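As a rough sanity check on those hardware figures, here is a back-of-envelope calculation (a sketch, not a benchmark) of how much accelerator memory the weights alone would occupy at different numeric precisions. It assumes the 80 GB variant of the A100 and ignores activations, KV caches, and optimizer state, all of which push the real requirement higher.

```python
# Back-of-envelope memory arithmetic for a 540-billion-parameter model.
# Bytes per parameter are standard for each precision; the GPU figure
# assumes the 80 GB A100 variant and counts only the raw weights.
params = 540e9
bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "int8": 1}
a100_memory_gb = 80

for precision, nbytes in bytes_per_param.items():
    total_gb = params * nbytes / 1e9
    gpus_needed = total_gb / a100_memory_gb
    print(f"{precision}: ~{total_gb:,.0f} GB of weights "
          f"(~{gpus_needed:.0f} A100 80GB GPUs)")

# fp32: ~2,160 GB (~27 GPUs); fp16/bf16: ~1,080 GB (~14 GPUs);
# int8: ~540 GB (~7 GPUs) -- which is why even inference needs a
# multi-GPU machine, and training costs far more still.
```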
Conclusion:
While PaLM + RLHF may not be a replacement for ChatGPT at this time, it is an important development for those who are interested in using open source alternatives. Additionally, other efforts to replicate ChatGPT, such as the one led by CarperAI in partnership with EleutherAI, Scale AI, and Hugging Face, are making progress and may offer more readily available options in the future.
Top comments (1)
Oh, I can think of a way to train it on the cheap: 20 million volunteers could get it done very fast. (Volunteer computing projects are a type of distributed computing where volunteers donate computing time to specific causes. The donated computing power comes from idle CPUs and GPUs in personal computers, video game consoles, and Android devices.) This is how we mapped human DNA.
Leela Chess Zero is a chess engine whose deep neural network was trained on distributed volunteer computers. Same exact concept, and it worked, so the proof of concept already exists.
en.wikipedia.org/wiki/List_of_volu...
lczero.org/play/download/