In recent years, researchers have used artificial intelligence to translate between programming languages and to fix bugs automatically. Recently, Microsoft and OpenAI shared plans to bring GPT-3, one of the world’s most advanced models for generating text, to programming: generating code from natural language descriptions. This is the first commercial application of GPT-3 since Microsoft invested $1 billion in OpenAI last year and gained exclusive licensing rights to the model.
- Highly sophisticated technology: Microsoft VP Charles Lamanna said that GPT-3’s sophistication can help people tackle complex challenges and empower those with little coding experience. GPT-3 will translate natural language into PowerFx, a fairly simple programming language similar to Excel formulas that Microsoft introduced in March.
- Artificial progression: “If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from,” said Microsoft CEO Satya Nadella in a keynote address at the company’s Build developer conference. “The code writes itself.”
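The exchange Nadella describes might look like the following sketch. The request, table name, and fields are illustrative assumptions, not taken from Microsoft’s demo; the formula uses standard PowerFx functions (Filter, StartsWith, FirstN):

```
// User's natural-language request (hypothetical):
//   "Show the first 10 customers whose names start with S"
// One PowerFx formula GPT-3 might suggest in response:
FirstN(Filter(Customers, StartsWith(Name, "S")), 10)
```

In Power Apps, a maker would pick a suggested formula from the generated list and bind it to a control, rather than writing it by hand.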
- Expansion of language models: Microsoft’s new feature is based on a neural network architecture known as the Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to build large language models trained on text scraped from the web. These language models continue to grow larger. The largest version of Google’s BERT, a language model released in 2018, had 340 million parameters, the adjustable values that serve as the building blocks of neural networks. GPT-3, released one year ago, has 175 billion parameters.
- Ability to accomplish the task: OpenAI currently provides private beta access to GPT-3. GPT-3 has demonstrated an ability to accomplish tasks ranging from completing SAT analogies to answering questions and generating text. But it has also generated text describing sexual acts with children and offensive text about Black people, women, and Muslims. OpenAI has shared little about the filtering methods it uses to try to address such toxicity; if OpenAI can’t figure out how to eliminate offensive or toxic output from GPT-3, that could limit its use.
- Futuristic goals: As researchers and programmers learn more about how language models can simplify coding, Dan Hendrycks, an AI researcher at UC Berkeley, believes there will be opportunities for big advances. He thinks AI that suggests the next line of code could improve the productivity of human programmers, and could potentially reduce demand for programmers or allow smaller teams to accomplish their goals.