Session 2: Real-Time Image Generation and Interactive Practice
Time: Thursday, 12th December 2024, 6-9pm
Duration: 3 hours
What we did in the second session:
- Overview of Real-Time Image Generation Tools such as DALL-E, Stable Diffusion, and RunwayML
- Generating images based on text prompts
- Examples and exercises for creating thematic connections between visual and textual outputs
Outcome of the workshop:
I gained a practical understanding of how to navigate and use latent spaces in LLMs and real-time image generation models for creative projects. I also picked up insights into combining AI-driven text and visuals for innovative storytelling and artistic practice, using accessible, open-source models.
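As an illustration of what "navigating a latent space" can mean in practice, here is a minimal, hedged sketch of interpolating between two latent vectors, the kind of operation diffusion-based image tools use to morph smoothly between two prompts or seeds. The vectors here are random stand-ins, not real model latents, and the `slerp` helper is a common technique rather than the specific method any workshop tool used:

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent vectors."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    """Spherical interpolation, often preferred for Gaussian latents
    because it keeps intermediate points at a plausible norm."""
    z0n = z0 / (np.linalg.norm(z0) + eps)
    z1n = z1 / (np.linalg.norm(z1) + eps)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < eps:  # vectors nearly parallel: fall back to lerp
        return lerp(z0, z1, t)
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # stand-in for a latent from prompt A
z_b = rng.standard_normal(512)  # stand-in for a latent from prompt B

# Five points along a path through latent space; in a real pipeline each
# would be decoded into an image, yielding a smooth visual transition.
path = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

In a real tool such as Stable Diffusion, each point on the path would be fed through the decoder to produce one frame of a morphing sequence.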
My opinion:
I didn’t realize that real-time image generation even existed; at first I thought we would be learning more about general AI image generation.
At one point, I was fascinated by how some of the tools process visual data instantly, as it is received.
The workshop was interesting, but it wasn’t that useful, at least not for my thesis.