GPT-4 has arrived. It is the successor to GPT-3.x, the technology underlying the wildly popular ChatGPT. In a word: wow!
If you have a paid subscription to ChatGPT, you can try out the text-based version of chat powered by GPT-4. It’s not too exciting, though, unless you ask it a calculus question, pose a coding challenge, or give it some other task you know GPT-3.5 wasn’t great at.
The “magic” of GPT-4 is that the model is multimodal, which is a fancy way of saying it can “see.” If you upload an image, it can describe and respond to the content. OpenAI’s announcement video is show-stopping. From drawing a wireframe of a website homepage – and having GPT-4 code it – to the model identifying why something was “funny” in a still image, the demonstrations are so amazing that they border on disturbing.
To say I’m in love with this technology would be to understate my passion. It also scares the hell out of me, the way being in love often does. Yet my emotions run deeper still. I have profound respect for the power of GPT-4. (You should, too.) I also fear how it will be used in the wrong hands. Perhaps my greatest fear is that I don’t know how to define “wrong hands.”
That said, technology is neither good nor bad; that distinction belongs to human beings. With the advent of Google’s new AI features for Google Workspace (rolling out now), the countless apps about to launch powered by GPT-4, and whatever Meta is going to bring to market, we are living in a new and very exciting time.
Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.