I’ve played around with Midjourney to generate images and it was an insane amount of fun, so I thought I’d try out Adobe’s beta version of Firefly - Generative Fill - in Photoshop. According to Adobe, they “... integrated Firefly directly into Photoshop, marrying the speed and ease of generative AI with the power and precision of Photoshop. This is the start of the company’s major initiative to integrate generative AI into existing creative workflows across Creative Cloud, giving users a creative co-pilot to accelerate ideation, exploration and production.” (adobe.com)
If you haven't heard of Generative AI, check out Adobe's sizzle reel to get an idea of what can be done.
As with most new things, it takes some time playing around with the software to become familiar with its capabilities. I gave myself nearly a month to explore and learn how it works.
With my first attempt I realized I can’t click off of Photoshop and work on another monitor while it’s generating the request. After nearly six minutes the progress bar hadn’t moved past one-third completion, but when I clicked back into Photoshop, it swiftly finished generating. With my second request, I stayed in Photoshop and it created the result within seconds.
Request #1 - Add a water bottle to the image (selected the space next to the aloe)
This was pretty straightforward: it generated three varying styles of water bottles and placed them in the designated area. Though I’m not sure what it concocted for the third water bottle, which looked like it was floating. Thankfully, you can rate each result with a thumbs up or thumbs down to help it learn.
Request #2 - Add a matching white washed brick wall (adjusted the canvas size to add an empty space to the right of the photo)
This was less successful. Four out of six results were white brick, but the angles didn’t fully make sense, and they looked more painted white than whitewashed (maybe I should’ve spelled it as one word, “whitewash”). The other two weren’t even brick: one was a wood slat door and the other was a curtain.
I did learn that if you’re not happy with the first three results, you can regenerate to get three more.
Request #3 - Add a cat sitting on the table (selected a taller space next to the aloe)
Hiding the water bottle layer and selecting a larger area that included part of the aloe, I wanted to see how it would interact with overlapping objects. I was pleasantly surprised that it didn’t have an issue making it look like the inserted object was behind an existing object in the photo.
Request #4 - Add a logo to the planter
I know it’s not difficult to add a logo to a product in Photoshop, but I wanted to see how Firefly handled it. Mainly, would it recognize the curve of the object? The results weren’t great. It didn’t treat the planter as a curved object and just slapped oversized “logos” on the front.
Request #5 - Extending the image
By selecting the empty, extended canvas while slightly overlapping the existing image, I was able to create a seamless extended background. This is great if you have an image you really want to use on a flyer but it's not quite wide enough and you don't want to crop anything out from it.
Request #6 - Add a coffee shop logo
To see how our industry can really utilize this feature, I changed images and asked Firefly to add a coffee shop logo to the selected area of the t-shirt. As with other image-based AI, it has trouble with spelling and letters. That aside, the blending of the generated logos on the shirt was nice; it looks like the logo is actually printed on the shirt.
I did try to see if I could blend the stock image with a logo I added to the file, but this still has to be done “the old-fashioned way.” Instead, Firefly generated new logos that looked nothing like the original (last image in the series below).
Request #7 - Add a new background
To customize your virtuals even more, change the background to match the vertical market. I added the logo to the shirt as usual and then asked Firefly to "add a blood drive" for the background. At first I included the word "background" in my prompt, but that inserted solid colors instead of actual photos. Having the model with the promotional shirt in a more relevant setting elevates the virtual.
To wrap up my month of testing: do I think this is a cool addition to Photoshop? Yes. Is it a "co-pilot" for users? It depends on what you're doing in Photoshop. Is it a new tool to help with product virtuals? Not really, but I'm eager to see if it will be in a future iteration.