GPT-4

The GPT-4 Technical Report by OpenAI details the development of GPT-4, a multimodal model that accepts both text and image inputs and produces text outputs. GPT-4 exhibits human-level performance on a range of professional and academic benchmarks, including scoring among the top 10% of test takers on a simulated bar exam. The model was trained on a diverse and extensive dataset that included both correct and incorrect solutions and a variety of reasoning styles. Post-training alignment via reinforcement learning from human feedback (RLHF) was used to improve factual accuracy and adherence to user intent. GPT-4 also incorporates stronger safety measures: it is 82% less likely than GPT-3.5 to respond to requests for disallowed content, and it behaves more reliably in high-stakes contexts.