Gemini Gets A Banana-Powered Image Upgrade
Google has unveiled a powerful enhancement to its Gemini app by integrating a new AI image‐editing model, codenamed Nano Banana, officially branded as Gemini 2.5 Flash Image. The update empowers users with tools to consistently preserve likeness in multi‐step edits while opening up creative possibilities with unprecedented realism and control.
Users can now instruct Gemini via natural language to modify images, whether changing hairstyles, adding props, or altering backgrounds, without compromising the identity or visual consistency of subjects such as people, pets, or objects. This “character consistency” addresses a long-standing criticism of AI editors, where subtle details often shift between edits, producing results that feel “close but not quite the same.”
Beyond preserving likeness, the tool introduces multi-turn editing, enabling iterative, targeted adjustments: a specific part of an image can be edited while the rest remains intact. Users might start with an empty room, then sequentially paint the walls, add furniture, and fine-tune the décor without losing earlier changes. A design-mixing feature also allows aesthetic elements from one image, such as a texture or pattern, to be applied to an object in another, for instance transferring a butterfly-wing motif onto a dress or using floral textures to style rain boots.
Multi-image blending marks another leap forward. Users can now merge multiple images, such as combining a selfie with a picture of a pet, into inventive new compositions while maintaining fidelity to each subject's appearance.
In terms of accessibility, Nano Banana is available globally to both free and premium users of the Gemini app across web and mobile. Developers can also access the model via the Gemini API, Google AI Studio, and Vertex AI. Images generated or edited through Gemini carry a visible watermark as well as an invisible SynthID digital identifier to signal their AI origin.
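For developers, the sketch below illustrates what an image-editing request through the Gemini API might look like. It assumes the google-genai Python SDK and the model identifier "gemini-2.5-flash-image-preview", neither of which is confirmed in this article; treat it as an illustration and consult Google's API documentation for authoritative usage.

```python
# Hypothetical sketch: edit an existing photo with a natural-language instruction
# via the Gemini API, assuming the google-genai Python SDK and the
# "gemini-2.5-flash-image-preview" model name.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder API key

source = Image.open("portrait.png")  # the photo to edit

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[
        source,
        "Change the hairstyle to a short bob, but keep the person's face, "
        "clothing and background exactly the same.",
    ],
)

# The response can interleave text and image parts; save any returned image.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("portrait_edited.png")
    elif part.text is not None:
        print(part.text)
```

In principle, the multi-turn editing described above would follow the same pattern, with each returned image fed back in alongside the next instruction.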
Google's ambition is clear: to refine user control and visual fidelity in AI-powered imagery. According to Nicole Brichtova, a product lead at Google DeepMind, the model raises the bar for seamless editing and instruction-following, producing outputs that are “usable for whatever you want to use them for.”