How Was ChatGPT Developed? Let's Look at the Structure of OpenAI's ChatGPT

Babar Ali Jamali
2 min read · Jan 27, 2023


ChatGPT is a conversational language model developed by OpenAI, built on the GPT (Generative Pre-trained Transformer) family of models. It is based on the Transformer architecture, introduced in the 2017 paper "Attention Is All You Need" by Google researchers.
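
To make the architecture concrete, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer. The toy sizes and random weights are purely illustrative, not OpenAI's actual implementation:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position attends to all others."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ V                              # weighted mix of values

# Toy example: 4 tokens, 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```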

The development of ChatGPT involved several stages:

1. Data collection: A large dataset of text was collected and pre-processed for training the model. It spanned a wide range of sources, such as books, articles, and websites.

2. Pre-training: The model was pre-trained on this dataset using the Transformer architecture, learning the general patterns and structure of natural language by repeatedly predicting the next token (a tiny sketch of this objective appears after this list).

3. Fine-tuning: The pre-trained model was then fine-tuned on smaller, task-specific datasets for tasks such as language translation, question answering, and text summarization (see the fine-tuning sketch after this list).

4. Deployment: The fine-tuned models were deployed in applications such as chatbots, automated writing, and language translation (a minimal generation example follows the sketches below).
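
To make the pre-training objective concrete, here is a deliberately tiny PyTorch sketch of next-token prediction. The two-layer "model" is a stand-in: a real GPT stacks many Transformer blocks over billions of parameters, but the loss is the same:

```python
import torch
import torch.nn as nn

# Toy "language model": embedding followed by a linear projection.
vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))

tokens = torch.randint(0, vocab_size, (1, 16))   # a pretend token sequence
logits = model(tokens[:, :-1])                   # predict from all but the last token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size),              # (batch*seq, vocab)
    tokens[:, 1:].reshape(-1))                   # targets shifted by one position
loss.backward()                                  # gradients drive learning
print(loss.item())
```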
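Fine-tuning applies the same loss to a much smaller, task-specific dataset, typically with a low learning rate. ChatGPT's own weights are not public, so this sketch uses the freely available GPT-2 model from the Hugging Face transformers library as a stand-in:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optim = torch.optim.AdamW(model.parameters(), lr=5e-5)  # small LR: nudge, don't retrain

# One task-specific example; a real dataset would contain thousands.
batch = tok("Question: What is GPT?\nAnswer: A Transformer language model.",
            return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])  # labels trigger the built-in LM loss
out.loss.backward()
optim.step()
```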
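Deployment then amounts to generating text from the fine-tuned model, for example behind a chatbot interface. Again using GPT-2 as a freely available stand-in:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "User: What is machine learning?\nBot:"
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40,
                        do_sample=True, top_p=0.9,          # sample for varied replies
                        pad_token_id=tok.eos_token_id)      # GPT-2 has no pad token
print(tok.decode(output[0], skip_special_tokens=True))
```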

Development of the GPT family is ongoing: new versions (GPT-2, GPT-3) have been released as hardware has advanced and more training data has become available. GPT-3, the series from which ChatGPT was derived, is the most advanced of these, trained on a much larger dataset with far more compute than its predecessors.


Babar Ali Jamali

I am an IT professional: a Google Cyber Security certificate holder, an IBM Certified Cyber Security Analyst, a web developer, a vulnerability/malware analyst, and a Python programmer.