InsightEdit: Towards Better Instruction Following for Image Editing

Yingjing Xu1,2,*, Jie Kong2,*,‡, Jiazhi Wang2, Xiao Pan1, Bo Lin1,‡, Qiang Liu2
1Zhejiang University, 2 01.ai
*Co-first authors. ‡Corresponding authors.

We propose InsightEdit, an end-to-end instruction-based image editing model, trained on high-quality data and designed to fully harness the capabilities of Multimodal Large Language Models (MLLMs), achieving high-quality edits with strong instruction following and background consistency.

Abstract

In this paper, we focus on the task of instruction-based image editing. Previous works such as InstructPix2Pix, InstructDiffusion, and SmartEdit have explored end-to-end editing. However, two limitations remain: first, existing datasets suffer from low resolution, poor background consistency, and overly simplistic instructions; second, current approaches mainly condition on text while the rich image information is underexplored, making them inferior at following complex instructions and at maintaining background consistency. Targeting these issues, we first curate the AdvancedEdit dataset using a novel data construction pipeline, formulating a large-scale dataset with high visual quality, complex instructions, and good background consistency. Then, to further inject the rich image information, we introduce a two-stream bridging mechanism that utilizes both the textual and visual features reasoned by powerful Multimodal Large Language Models (MLLMs) to guide the image editing process more precisely. Extensive experiments demonstrate that our approach, InsightEdit, achieves state-of-the-art performance, excelling at following complex instructions and maintaining high background consistency with the original image.


Data Construction

We propose an automated data construction pipeline focused on generating high-fidelity, fine-grained image editing pairs with detailed instructions that demonstrate advanced reasoning and understanding. We categorize image editing tasks into three types: removal, addition, and replacement. The figure below presents our data preparation workflow.

The overall data construction pipeline. (1) Captioning & Object Extraction: Utilizing a VLM to generate a global caption from the source image and to further obtain an object JSON list containing both a simple caption and a detailed caption for each object. (2) Mask Generation: Utilizing GroundedSAM to obtain the corresponding mask for each object. (3) Editing Pair Construction: Utilizing a mask-based image editing model to construct the target image and a templated instruction. (4) Instruction Recaptioning: Utilizing a VLM to rewrite the templated instruction into diverse instructions. (5) Quality Evaluation: Filtering the dataset using VIEScore.
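To make the five stages concrete, the sketch below wires them together as a single loop over source images. It is a minimal illustration, not the released pipeline: the `caption`, `segment`, `edit`, `recaption`, and `score` callables are hypothetical stand-ins for the VLM, GroundedSAM, the mask-based editor, the instruction rewriter, and VIEScore, and the filtering threshold is an assumed value.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class EditingPair:
    source: str       # path to the source image
    target: str       # path to the edited (target) image
    instruction: str  # natural-language editing instruction
    task: str         # "removal" | "addition" | "replacement"

def build_editing_pairs(
    images: Iterable[str],
    caption: Callable,      # (1) VLM: image -> (global caption, object JSON list)
    segment: Callable,      # (2) GroundedSAM: (image, simple caption) -> mask
    edit: Callable,         # (3) mask-based editor: (image, mask, object, task)
                            #     -> (target image, templated instruction)
    recaption: Callable,    # (4) VLM rewriter: templated instruction -> diverse instruction
    score: Callable,        # (5) VIEScore: (source, target, instruction) -> float
    threshold: float = 7.0, # assumed cutoff; the paper does not specify a value
) -> List[EditingPair]:
    pairs: List[EditingPair] = []
    for image in images:
        # (1) Captioning & object extraction.
        _global_caption, objects = caption(image)
        for obj in objects:  # each object carries a simple and a detailed caption
            # (2) Mask generation, grounded on the simple caption.
            mask = segment(image, obj["simple_caption"])
            for task in ("removal", "addition", "replacement"):
                # (3) Editing-pair construction plus a templated instruction.
                target, templated = edit(image, mask, obj, task)
                # (4) Instruction recaptioning for diversity.
                instruction = recaption(templated, obj["detailed_caption"])
                # (5) Quality filtering with VIEScore.
                if score(image, target, instruction) >= threshold:
                    pairs.append(EditingPair(image, target, instruction, task))
    return pairs
```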

Method

The overall architecture of InsightEdit is depicted in the figure below. It mainly consists of a comprehension module, a bridging module, and a generation module. Specifically, the comprehension module leverages an MLLM to comprehend the image editing task; the bridging module integrates both textual and visual features into the denoising process of the diffusion model; and the generation module receives editing guidance via the diffusion model to generate the target image.

The overall architecture of InsightEdit. It mainly consists of three parts: (1) Comprehension Module: leverages an MLLM to perceive and comprehend the image editing task; (2) Bridging Module: interacts with and extracts both the textual and visual features; (3) Generation Module: receives editing guidance via the diffusion model to generate the target image.
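The bridging module can be pictured as two parallel query streams over the MLLM's hidden states, whose outputs are concatenated into the cross-attention context of the denoising model. The PyTorch sketch below illustrates this two-stream idea under assumed dimensions and a Q-Former-style learnable-query design; it is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TwoStreamBridge(nn.Module):
    """Illustrative two-stream bridge: MLLM textual and visual hidden states
    are distilled by learnable queries, projected to the diffusion model's
    conditioning width, and concatenated into one conditioning sequence.
    All dimensions below are assumptions, not the paper's settings."""

    def __init__(self, mllm_dim: int = 4096, cond_dim: int = 768,
                 n_queries: int = 77, n_heads: int = 8):
        super().__init__()
        # Learnable queries compress variable-length MLLM states into a
        # fixed-length sequence (a common Q-Former-style design choice).
        self.text_queries = nn.Parameter(torch.randn(n_queries, mllm_dim))
        self.image_queries = nn.Parameter(torch.randn(n_queries, mllm_dim))
        self.text_attn = nn.MultiheadAttention(mllm_dim, n_heads, batch_first=True)
        self.image_attn = nn.MultiheadAttention(mllm_dim, n_heads, batch_first=True)
        self.text_proj = nn.Linear(mllm_dim, cond_dim)
        self.image_proj = nn.Linear(mllm_dim, cond_dim)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # text_feats / image_feats: (B, L, mllm_dim) hidden states from the MLLM.
        b = text_feats.size(0)
        tq = self.text_queries.unsqueeze(0).expand(b, -1, -1)
        iq = self.image_queries.unsqueeze(0).expand(b, -1, -1)
        text_stream, _ = self.text_attn(tq, text_feats, text_feats)
        image_stream, _ = self.image_attn(iq, image_feats, image_feats)
        # (B, 2 * n_queries, cond_dim): this concatenated sequence would serve
        # as the cross-attention context of the denoising network, in place of
        # the usual text-only embedding.
        return torch.cat([self.text_proj(text_stream),
                          self.image_proj(image_stream)], dim=1)
```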

Results

Qualitative comparison on AdvancedEdit. InsightEdit shows superior instruction-following and background-consistency capabilities.