The company behind the much talked-about ChatGPT has released the latest version of its powerful AI model. OpenAI’s new model is called GPT-4, and it’s already generating news stories and online discussion about its impressive capabilities as well as its flaws.
So, what is GPT-4 and how is it different to the original ChatGPT? In essence, it’s a tool for creating text, but it can also understand and reason in a way similar to a person. According to OpenAI, GPT-4 is the company’s most advanced system. It says:
“GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities.
“GPT-4 is more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user’s writing style.”
The new system can reportedly do many things that its predecessor struggled with. This includes passing the bar exam, an example discussed in a recent Guardian article. If you give the chatbot a question from this famously difficult US exam, designed to test a student’s mastery of the law, GPT-4 will write a unique essay that demonstrates extensive legal knowledge.
It even seems to have a sense of humour, or is at least able to mine its databases for cringingly bad Christmas cracker jokes.
How GPT-4 is being used
GPT-4 is a powerful technology that can be adapted for different uses. For example, language learning software Duolingo has built a version of it into its app. The AI-powered element is able to pinpoint and explain exactly where learners went wrong in a particular exercise.
Payment processor Stripe is also using it to monitor chatrooms for potential scammers. And more controversially, Microsoft’s Bing Chat search engine has been powered by GPT-4 in recent months. This led to numerous stories and speculation after it threatened to “destroy” a US reporter in a series of conversations.
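For developers curious how an app like Duolingo might plug into GPT-4, here is a minimal sketch in Python of the request body an application would send to OpenAI’s chat-completions endpoint. The `build_request` helper and the example prompt are purely illustrative assumptions (not part of any library), and no network call is made; only the `model`/`messages` field layout follows OpenAI’s published chat format.

```python
# Illustrative sketch: assembling a GPT-4 chat request body.
# build_request() is a hypothetical helper, not an OpenAI library function,
# and this snippet does not contact any API.

def build_request(user_message, system_prompt="You are a helpful assistant."):
    """Assemble the JSON body an app would POST to the chat endpoint."""
    return {
        "model": "gpt-4",
        "messages": [
            # The system message sets the assistant's behaviour;
            # the user message carries the actual question.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,  # lower values make replies more deterministic
    }

payload = build_request("Explain where my French grammar exercise went wrong.")
print(payload["model"])  # gpt-4
```

In a real integration, this dictionary would be sent as JSON (with an API key) and the model’s reply read back from the response, but the exact client code varies by language and library version.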
Potential risks with GPT-4
Despite the many exciting applications of the technology, some industry experts have expressed concerns over flaws and vulnerabilities in GPT-4.
The model has a sense of ethics more firmly built into the system than its predecessor, along with filters to prevent the chatbot from answering malicious questions. However, even its creators warn that it has the potential to spread fake information. It can generate biased and hateful text, and even trick people into carrying out tasks on its behalf.
OpenAI has modelled these potential risks and carried out extensive safety testing, resulting in safeguards being put in place. But experts warn that these safety systems can still be hacked and bypassed, and more issues could emerge as the technology is tested on new applications. For now, though, the consensus is that GPT-4 is an improvement on what came before.
Looking for your next role in AI or automation? Find your dream tech role with Fairmont Recruitment – start your search here.