On Generative AI and LLMs
OpenAI's ChatGPT, changed the landscape of AI for good. Not that we did not have AI applications until then, and not that we did not have Large Language Models. NVidia was there 6 years ago!
I was quite impressed when I first stumbled upon StyleGAN from NVidia, a face generator. I even played a bit trying to generate fake social media accounts - just to figure out which social media platforms were ready for that, and which ones were not.
So although Generative AI is not new, OpenAI gave a simple interface, using large language models, so that everyone can interact with a GPT to get an output. While NVidia's approach was quite automated and one would get a random fake face easily, I do not recall to have found an easy way to ask the face to be of a particular "type". No eye-color, race, hair color or style possible.
By providing the simple interface to interact with ChatGPT, OpenAI created a new profession - that of "prompt engineers". And, as everyone could have predicted, that came with its own set of security challenges. There are so many cases that prompt engineering was used to get data out of the AI chatbot one is conversing with, that the chatbot was not considered to release, that prompt security is becoming a significant consideration for anyone who wants to implement a GPT based agent.
Although I played with multiple challenges around prompt engineering, the most fun one I found was a Prompt Injection lab from ImmersiveLabs. And one of the easiest, and more well thought of theoretical approach and explanation was Introduction to Generative AI & Prompt Engineering from Cloud Security Alliance. A training course I enjoyed, and of course completed.