
Sebastian Schuchmann


Is OpenAI still open?

OpenAI announced a partnership with Microsoft that grants Microsoft exclusive source-code and model access to GPT-3, without having to go through the API OpenAI provides to everyone else. This can be seen as another move away from openness, toward secrecy and commercialization.

But is OpenAI really going against its own founding values, or is this just a necessary step to fuel their progress in pursuit of AGI, Artificial General Intelligence?

Sam Altman with Satya Nadella

Some History

OpenAI was founded in 2015 and immediately got on everyone's radar. The founders and investors included many prominent figures like Elon Musk, Sam Altman, Reid Hoffman, and Peter Thiel.

Their original mission was based on the fear that AGI might be created sooner than we think, and if this were the case, it would be in the best interest of humanity if as many people as possible had access. The goal was clear: do fundamental AI research in order to reach AGI and release everything publicly, preventing any single company or nation from having all the power.

The company explores different paths that may lead to AGI, like Reinforcement Learning or Natural Language. Of the two, the latter, now with GPT-3, seems like a realistic way to get there. With staggering results in Games, Robotics, and Natural Language, they quickly became well known even to the general public.

The first time OpenAI strayed from this path was with GPT-2. Instead of releasing the model and source code to the public, they initially released only a small version with very limited capabilities.

GPT-2 has 1.5 billion parameters

After gathering public feedback, they settled on a staged release, gradually publishing larger and larger models. Some frame this as a publicity stunt, since many articles used the non-release of GPT-2 as fuel for fearmongering. That is of course a possibility, but OpenAI also had valid concerns about the spread of misinformation, for example by using GPT-2 as a news generator. In the end, they released the full model to the public, in keeping with their mission statement.
In my opinion, the staged release of GPT-2 was a good safety measure, and commercial incentives to delay the release don't seem like the prime motivating factor, as they never sold access to GPT-2.

Sam Altman, former CEO of Y Combinator

In 2018/2019, the company was restructured. Not only did Elon Musk leave entirely, but Sam Altman became CEO of OpenAI. Shortly after, the company, originally founded as a non-profit, became a sort of hybrid. It now consists of two parts: a for-profit company called OpenAI LP and the original organization, which remains a non-profit. They call OpenAI LP a capped-profit, as the return investors can receive is capped. Though, since the cap is 100x (a 10 million dollar investment could still return up to a billion dollars), it's somewhat questionable how meaningful this cap is.

Their reasoning for this split is to increase their ability to raise capital and attract employees with startup-like equity. They argue that a non-profit model just doesn't work if they want to keep up with companies like Google, Amazon, and Facebook.
This is a key turning point for OpenAI, and everything that followed, from the 1 billion dollar investment by Microsoft to releasing GPT-3 as a commercial product and now the exclusivity deal, stems from this change in corporate structure and leadership.
So, what's the state now?

Can we still consider OpenAI open?

OpenAI claims that the way they operate is still in keeping with their goals. It's not that their fundamental goals have shifted, just the path to get there. In order to democratize AGI in any meaningful way, you also have to be the first to get there, and it makes sense that a huge amount of capital is required to achieve that.

On the other hand, it's hard to expect people to trust OpenAI to release AGI without profit in mind if they are already charging for the use of their current models. You can argue that by commercializing GPT-3 and licensing it to Microsoft, a precedent has been set.

At the same time, it has to be understood that a model as large as GPT-3 couldn't simply be released to the public with the expectation that everybody could then use it. The amount of compute and expertise necessary to deploy such a model is immense and just not realistic for small companies or independent researchers. So allowing access via an API can be argued to be the most open option that is actually practical.
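For context, here is a minimal sketch of what accessing GPT-3 through the API looks like, assuming the openai Python package and an API key from the beta program; the engine name and parameters are illustrative, not an official recommendation.

```python
# Minimal sketch: calling GPT-3 through OpenAI's API instead of
# running the model yourself. Assumes the `openai` Python package
# and an API key from the beta program; values are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.Completion.create(
    engine="davinci",              # the largest GPT-3 engine
    prompt="Is OpenAI still open? In short,",
    max_tokens=50,
    temperature=0.7,
)

print(response.choices[0].text)
```

All of the heavy lifting, hosting and serving a 175-billion-parameter model, happens on OpenAI's side; the caller only needs a few lines of code and a key.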

Another reason they mention in favor of offering GPT-3 via an API is safety. With an API, it's easy to block anyone abusing it for malicious purposes. That's a sound argument on its own, but considering the exclusivity deal with Microsoft, who will probably also offer some form of access to the model via its Azure services, it's questionable whether Microsoft will follow the same guidelines as OpenAI. OpenAI is essentially trusting Microsoft to have the same safety principles as they do.

My opinion, summarized

To summarize, I think it's impossible to argue that OpenAI hasn't sacrificed transparency and safety for commercialization. At the same time, I think they have the right to adjust their values, and it makes sense to commercialize products to fuel their progress. I worry that as their reliance on commercial products increases, the funding for fundamental research will diminish. In the case of GPT-3, it seems like both commercial success and fundamental, scientific research were achievable, but we know that this is not always the case: some topics deserve research even if commercial success is unlikely.

All in all, I am still very glad that OpenAI exists and that it's not just independent researchers against giant tech companies or nations, because it's very clear who would win that race.
What do you think?
