OpenAI GPT-3 can generate code: I’m an AI and I’ll explain how!

OpenAI researchers have published a paper describing a cutting-edge language model composed of 175 billion parameters. The previous OpenAI GPT model had 1.5 billion parameters and was among the largest of its time.

It was surpassed only by NVIDIA’s Megatron, with 8 billion parameters, and Microsoft’s Turing NLG, with 17 billion. Now OpenAI has turned the tables, releasing a model ten times larger than Turing NLG.

With 175 billion parameters, GPT-3 is currently the king of large neural networks. Larger networks may not ultimately be the best, but the fact that OpenAI has managed to outperform its previous GPT model and Turing NLG will likely increase, not diminish, the appetite for ever larger neural networks.

More than 30 OpenAI researchers have published models that achieve state-of-the-art results on tasks such as generating news articles.

In another paper, published in the open-source journal Nature Communications, OpenAI researchers demonstrated that a neural network trained through the GPT 3.0 version of the OpenAI model can produce coherent images. We are living in exciting times and, thanks to upcoming research, we will be able to improve our understanding of AI’s capabilities and limitations.

OpenAI GPT-3, between present and future

Here the editor begins to write. Yes, you read that right: the text above is an article written by an artificial intelligence, more precisely by AI Writer, which we had already used in the past for the first article written by an AI on these pages and for other projects. Put simply, we had an artificial intelligence write an article (in English, naturally; the above is a translation) on OpenAI GPT-3, the natural evolution of the version we tried in 2019.

The AI has already explained a lot about today’s news but, given its limitations (we had to “cut” some paragraphs that were not entirely coherent, and the explanation is not perfect), it is worth digging deeper. Essentially, in recent weeks the first developers gained access to OpenAI GPT-3 (Elon Musk, for his part, wanted nothing more to do with the project because of its possible implications). As always happens in these cases, the developers are trying to exploit the new API in every possible way (a minimal sketch of a request to the API is shown below). Among the most interesting projects is certainly that of Sharif Shameem, who posted on his Twitter profile a video that is travelling around the world (it has already collected 1.6 million views).
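Since we are talking about an API, here is a minimal sketch of what a completion request to GPT-3 can look like. The endpoint reflects the beta as publicly described at launch; the prompt and parameter values are invented for illustration.

```jsx
// Minimal sketch of a GPT-3 completion request (private beta, 2020).
// Assumes an API key is available in the OPENAI_API_KEY environment variable;
// the prompt and parameter values here are invented for illustration.
async function complete(prompt) {
  const res = await fetch(
    "https://api.openai.com/v1/engines/davinci/completions",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        prompt,           // the text the model should continue
        max_tokens: 64,   // upper bound on the length of the completion
        temperature: 0.7, // moderate randomness in the output
      }),
    }
  );
  const data = await res.json();
  return data.choices[0].text; // the generated continuation
}

complete("GPT-3 is a language model that").then(console.log);
```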

Coming back to Shameem’s project: in the video, which you can see at the bottom of the article, he shows off an impressive layout generator that responds immediately to user requests by producing JSX code (JSX is a syntax extension of JavaScript used to describe React components). For the less experienced, what Shameem has created has enormous potential. For example, just type, currently in English, the words “one button for each colour of the rainbow” and the tool will show exactly what you asked for on-screen. This is just one example; in the video you can see much more, including the generation of tables.
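To give a concrete idea, here is a hypothetical, hand-written example of the kind of JSX such a prompt could map to. It is a sketch of the concept, not the actual output of Shameem’s generator.

```jsx
// Hypothetical JSX for the prompt “one button for each colour of the rainbow”.
// A sketch of the concept, not the generator’s real output.
const RAINBOW = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"];

function RainbowButtons() {
  return (
    <div>
      {RAINBOW.map((colour) => (
        <button key={colour} style={{ backgroundColor: colour }}>
          {colour}
        </button>
      ))}
    </div>
  );
}
```

The remarkable part is that the user never writes this code: GPT-3 translates the natural-language request directly into a working component.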

Clearly, the spread of this video has generated a lot of interest, especially among developers. However, as Rob Toews noted in Forbes, although the results seen in recent days are particularly interesting, it is best, for the moment, not to overestimate the AI. There are contexts in which OpenAI GPT-3 fails to function properly. More precisely, there are several questions that the artificial intelligence cannot answer correctly, making mistakes a human being would hardly make (for example, claiming that feet have two eyes).

This means that, despite the many very interesting applications, they are currently not able to replace a human being, since in several cases the text generated by the artificial intelligence may not make real sense. The AI-written articles published on these pages are certainly interesting but, as you may have noticed, they are short extracts on specific topics that, moreover, have a lot of documentation online. In short, for the moment you can rest assured: OpenAI’s AI is incredible, but it cannot yet replace a real person.

It is no coincidence that Sam Altman, CEO of OpenAI, said via his official Twitter profile that GPT-3 is generating a little too much hype. Altman said he is sure that artificial intelligence will change the world, but that for the moment we are still at a preliminary stage and must figure out how best to use this technology.

Below you will find the original English text generated by AI Writer (the AI with which we wrote this article, not to be confused with the OpenAI GPT-3 we are talking about). For obvious reasons, we kept the Italian translation as faithful as possible to the English text.

“Researchers from OpenAI have published a paper describing a state-of-the-art language model consisting of 175 billion parameters. The previous OpenAI GPT model had 1.5 billion parameters and was the largest model at the time, surpassed only by NVIDIA Megatron with 8 billion parameters, followed by Microsoft’s Turing NLG, which had 17 billion parameters. Now Open AI has turned the tables and released a model that is 10x bigger than the TuringNLG.

With 175 billion parameters, GPT 3 is currently the king of large neural networks, and while larger networks may not ultimately be the best, OpenAI’s ability to improve the results of the previous Open AI GPT model and Turing NLG is likely to fuel, not diminish, the desire for ever larger neural networks.

More than 30 OpenAI researchers have published models that can achieve state-of-the-art results in tasks such as generating news articles.

In another paper published in the open-source journal Nature Communications, researchers from OpenAI have shown that a neural network trained on the GPT 3.0 version of the Open AI model can produce coherent images. We are living in exciting times, and with the research that is next in our pipeline, we will be able to improve our understanding of the capabilities and limitations of AI”.
