Until just a few days ago, there was a feature called Sora built into the ChatGPT interface. Then, without me noticing, it disappeared. Soon after, a new version, Sora2, appeared. This is OpenAI's generative AI for video creation, but it's invitation-only: without an invite code, you can't try it.
Yesterday, while attending a seminar, I learned something surprising: you can actually access Sora2 through Genspark's AI video feature!
Trying Sora2 via Genspark
Here's how I tried it:
Go into AI Video (not Super Agent) and click the option where you normally see Gemini Veo3. To my surprise, the label "Sora2" appeared!
Of course, I had to test it.
Introducing Our "Family Members"
For context (especially for overseas readers):
Copan is a plush toy that my wife and I treat as our son. Since we don't have children, he has become like family to us. Bary-san is another plush, based on the official mascot of Imabari, Ehime (Japan). We have the small-sized version.
So, naturally, I asked Sora2 to let these two "talk" with each other. My prompt was pretty rough: something like "Have a fun conversation."
The Results
The video took quite a while to generate, but here's what I noticed:
The two characters did indeed speak, but the conversation made little sense. The voices were so fast that I couldn't follow them. I tried Genspark's transcription, but the text didn't line up at all. Honestly, YouTube's auto-captioning works better. Interestingly, the video didn't have the "Sora2" watermark. I'm not sure if that's intentional.
So, the experiment was messy, but still fascinating!
Final Thoughts
This wasn't a serious project, more of a detour. But the pace of change in generative AI is astonishing. Just when I think I've grasped one tool, another appears. Right now, I'm also struggling with Cursor, so it feels like we're always running to keep up.
Still, watching Copan and Bary-san "chat" thanks to AI was a small joy. Even if the words didn't make sense, the fact that it's possible at all is amazing.
