Adobe Introduces New Generative AI Capabilities with Firefly Updates
Adobe has officially unveiled two new versions of its text-to-image generative AI model, along with a series of enhancements to Firefly and updates to Creative Cloud applications such as Photoshop and Illustrator.
The release introduces the fourth generation of Firefly Image models and borrows a page from OpenAI and Google, which offer multiple tiers of their chatbot models: users can now choose between a model that prioritizes speed and efficiency and one tailored for more complex tasks.
According to Adobe, Firefly Image Model 4 is its “fastest, most controllable, and most realistic” model to date, generating images at resolutions of up to 2K and offering greater control over style, format sizes, and camera angles. Compared with earlier versions, the update focuses on improving image quality while keeping generation fast and efficient. For work that demands finer detail and greater realism, Adobe is also rolling out Firefly Image Model 4 Ultra, which is designed to render complex scenes with small elements more effectively.
The new Firefly image models are available now in the Firefly web app, alongside the text-to-video and text-to-vector models that were previously in public beta. Adobe is also debuting Firefly Boards, a collaborative moodboarding app reminiscent of FigJam that was first introduced as “Project Concept” at Adobe’s Max event in October. A Firefly mobile app for iOS and Android is slated for release soon.
A notable aspect of the Firefly web app is its support for third-party AI models for image and video generation. Users can opt for OpenAI’s recently released GPT image model, Google’s Imagen 3 for images, or Google’s Veo 2 model for video. According to Adobe, support for models from Luma, Pika, Runway, fal.ai, and Ideogram is coming soon.
However, Adobe advises that these third-party models are intended for experimentation rather than for producing publishable content. Its own models, by contrast, are designated as “commercially safe,” a distinction Adobe attributes to training Firefly only on public domain or licensed content, a claim that competitors like OpenAI and Google cannot make.
Source: www.theverge.com