Generate high-quality images using ComfyUI workflows on Comput3 Network’s GPU infrastructure. Simply launch a GPU instance and start creating with powerful node-based workflows.

Available Image Generation Models

Choose from a wide variety of specialized image generation models:
Generate images with exceptional multilingual text rendering and editing capabilities
  • Model: Qwen-Image’s 20B MMDiT model
  • Specialty: Multilingual text rendering, advanced editing
  • Best for: Text-heavy images, multilingual content, detailed editing
Generate high-quality images from text prompts using a unified multimodal model
  • Model: OmniGen2’s unified 7B multimodal model
  • Architecture: Dual-path architecture
  • Best for: High-quality text-to-image generation, versatile applications
Edit images with natural language instructions
  • Model: OmniGen2’s advanced image editing capabilities
  • Features: Text rendering support, natural language editing
  • Best for: Image modification, style changes, content editing
Generate physically accurate, high-fidelity images
  • Model: Cosmos-Predict2 2B T2I
  • Specialty: Physically accurate, detail-rich generation
  • Best for: Realistic images, scientific visualization, detailed artwork
Modified Flux architecture for enhanced image generation
  • Model: Chroma (modified from Flux)
  • Architecture: Enhanced Flux-based architecture
  • Best for: High-quality generation, architectural improvements
Multiple HiDream models for different use cases
  • HiDream I1 Dev: Development and testing
  • HiDream I1 Fast: Fast image generation
  • HiDream I1 Full: Full-featured generation
  • HiDream E1.1 Image Edit: Advanced image editing (better quality than E1)
  • HiDream E1 Image Edit: Image editing capabilities
Latest Stable Diffusion with advanced features
  • SD3.5 Simple: Standard text-to-image generation
  • SD3.5 Large Canny ControlNet: Edge detection guided generation
  • SD3.5 Large Depth: Depth-aware image generation
  • SD3.5 Large Blur: Blur-based reference image generation
High-quality SDXL models with various capabilities
  • SDXL Simple: High-quality standard generation
  • SDXL Refiner Prompt: Enhanced results with refiners
  • SDXL Revision Text Prompts: Reference image concept transfer
  • SDXL Revision Zero Positive: Text prompts with reference images
  • SDXL Turbo: Single-step image generation
Zero-shot monocular depth estimation
  • Model: Lotus Depth in ComfyUI
  • Specialty: Efficient depth estimation with high detail retention
  • Best for: Depth-aware applications, 3D processing

Quick Start

1. Launch GPU Instance

Launch a GPU instance with the ComfyUI template from your Comput3 dashboard.

ComfyUI Template

Pre-configured instance with ComfyUI, popular models, and workflows ready to use.

Recommended GPU

RTX 4090, L40S, or A100 for optimal performance with image generation workflows.

2. Access ComfyUI

Connect to your GPU instance and open the ComfyUI web interface.
# ComfyUI runs on port 8188 by default
http://<your-instance-ip>:8188
The ComfyUI template automatically starts the service and makes it accessible via web browser.
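Besides the web UI, ComfyUI serves a small HTTP API on the same port. A minimal sketch of queueing a workflow programmatically (the workflow graph is the API-format JSON you can export from the ComfyUI editor; the helper names here are ours):

```python
import json
import uuid
import urllib.request

COMFY_HOST = "http://localhost:8188"  # replace with your instance address

def build_queue_payload(workflow, client_id=None):
    """Wrap a workflow graph in the body ComfyUI's POST /prompt expects."""
    return {"prompt": workflow, "client_id": client_id or str(uuid.uuid4())}

def queue_workflow(workflow, host=COMFY_HOST):
    """Submit an API-format workflow graph to the /prompt endpoint."""
    body = json.dumps(build_queue_payload(workflow)).encode()
    req = urllib.request.Request(
        f"{host}/prompt", data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the queued prompt_id
```

The returned `prompt_id` can be used to look up results once the workflow finishes.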

3. Choose a Workflow

Select from pre-installed workflows or create your own:
Generate images from text descriptions
  • Stable Diffusion XL workflows
  • ControlNet integration
  • LoRA model support
  • Batch generation capabilities

4. Generate and Download

Run your workflow in ComfyUI and download the generated images.
Images are saved locally on your GPU instance and can be downloaded via the ComfyUI interface or SSH.
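For scripted downloads, ComfyUI's `/history` and `/view` endpoints can be used instead of SSH. A sketch, assuming a `prompt_id` returned when the workflow was queued (helper names are ours):

```python
import json
import urllib.parse
import urllib.request

def build_view_url(host, filename, subfolder="", folder_type="output"):
    """URL for ComfyUI's GET /view endpoint, which serves saved images."""
    query = urllib.parse.urlencode(
        {"filename": filename, "subfolder": subfolder, "type": folder_type})
    return f"{host}/view?{query}"

def download_outputs(host, prompt_id, dest_dir="."):
    """Read the image list for a finished prompt from /history/<id>
    and save each file locally."""
    with urllib.request.urlopen(f"{host}/history/{prompt_id}") as resp:
        history = json.load(resp)
    saved = []
    for node_output in history[prompt_id]["outputs"].values():
        for image in node_output.get("images", []):
            url = build_view_url(host, image["filename"],
                                 image.get("subfolder", ""),
                                 image.get("type", "output"))
            path = f"{dest_dir}/{image['filename']}"
            urllib.request.urlretrieve(url, path)
            saved.append(path)
    return saved
```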

Advanced Prompt Techniques

Prompt Structure Best Practices

Be specific about the main subject:
  • ❌ Vague: “A person” → ✅ Specific: “A middle-aged woman with curly red hair wearing a blue dress”
  • ❌ Generic: “A building” → ✅ Detailed: “A modern glass skyscraper with geometric patterns”
Describe what the subject is doing:
  • “running through a field”
  • “sitting peacefully by a window”
  • “dancing in the rain”
  • “looking directly at the camera”
  • “reaching toward the sky”
Set the scene and atmosphere:
  • “in a mystical forest with glowing mushrooms”
  • “on a busy city street at night”
  • “in a cozy library with warm lighting”
  • “against a dramatic storm sky”
  • “in a minimalist white studio”
Specify the artistic style:
Photography Styles:
  • “professional portrait photography”
  • “street photography, candid moment”
  • “macro photography, extreme close-up”
  • “aerial photography, bird’s eye view”
Art Styles:
  • “oil painting in impressionist style”
  • “digital art, concept art style”
  • “watercolor illustration, soft edges”
  • “pencil sketch, detailed line art”
Add technical quality descriptors:
  • “8k ultra high resolution”
  • “cinematic lighting, dramatic shadows”
  • “shallow depth of field, bokeh background”
  • “HDR, vibrant colors, high contrast”
  • “soft natural lighting, golden hour”

Prompt Weighting and Control

Control emphasis with parentheses and weights:
(beautiful sunset:1.3) over (mountain lake:1.1), 
(dramatic clouds:0.9), peaceful atmosphere
Weight Guidelines:
  • (keyword:1.3) - Increase emphasis by 30%
  • (keyword:0.8) - Decrease emphasis by 20%
  • ((keyword)) - Strong emphasis (equivalent to 1.21)
  • [keyword] - Slight de-emphasis (equivalent to 0.91)
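As a rough illustration of how a weighted prompt breaks down into (text, weight) pairs, here is a simplified parser handling only the `(text:weight)` form (the `((…))` and `[…]` shorthands are left out):

```python
import re

def parse_weighted_prompt(prompt):
    """Split a prompt into (text, weight) pairs; segments written as
    (text:w) get weight w, everything else defaults to 1.0."""
    pattern = re.compile(r"\(([^:()]+):([\d.]+)\)")
    parts = []
    pos = 0
    for m in pattern.finditer(prompt):
        plain = prompt[pos:m.start()].strip(" ,")
        if plain:
            parts.append((plain, 1.0))
        parts.append((m.group(1).strip(), float(m.group(2))))
        pos = m.end()
    tail = prompt[pos:].strip(" ,")
    if tail:
        parts.append((tail, 1.0))
    return parts
```

Running it on the example above yields `("beautiful sunset", 1.3)`, `("over", 1.0)`, `("mountain lake", 1.1)`, `("dramatic clouds", 0.9)`, and `("peaceful atmosphere", 1.0)`.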

Model-Specific Guidelines

Stable Diffusion XL (SDXL)

Strengths

  • Exceptional detail and resolution
  • Great with complex compositions
  • Excellent text rendering in images
  • Superior photorealism capabilities

Optimal Settings

  • Steps: 25-35 (30 recommended)
  • Guidance Scale: 6-9 (7.5 recommended)
  • Resolution: 1024x1024 or 1152x896
  • Sampler: DPM++ 2M Karras
Best Prompting Practices:
# Good SDXL prompt structure
"[Detailed subject description], [specific artistic style], 
[lighting conditions], [composition notes], [quality modifiers]"

# Example
"Close-up portrait of an elderly craftsman with weathered hands, 
Renaissance painting style, dramatic chiaroscuro lighting, 
three-quarter view composition, oil painting texture, masterpiece quality"

Stable Diffusion 2.1

Strengths

  • Fast generation times
  • Good for iteration and experimentation
  • Reliable results with simple prompts
  • Cost-effective for batch generation

Optimal Settings

  • Steps: 20-30 (25 recommended)
  • Guidance Scale: 7-12 (9 recommended)
  • Resolution: 512x512 or 768x512
  • Sampler: Euler a or DPM++ 2M
Best Prompting Practices:
# SD 2.1 responds well to clear, direct prompts
"[Subject], [style], [lighting], [quality]"

# Example  
"Mountain landscape, digital art style, golden hour lighting, highly detailed"

DALL-E Style Model

Strengths

  • Excellent instruction following
  • Great with complex scene descriptions
  • Superior text integration
  • Photorealistic human faces

Optimal Settings

  • Steps: 30-40 (35 recommended)
  • Guidance Scale: 8-12 (10 recommended)
  • Resolution: 1024x1024, 1024x1792, 1792x1024
  • Sampler: DDIM or DPM++ SDE
Best Prompting Practices:
# DALL-E style model excels with natural language descriptions
"A photograph of [detailed scene description] captured with 
[camera/lens details], [lighting conditions], [mood/atmosphere]"

# Example
"A photograph of a cozy coffee shop interior during a rainy afternoon, 
captured with a 35mm lens, warm ambient lighting filtering through 
large windows, creating a peaceful and inviting atmosphere"

Generation Parameters

Resolution and Aspect Ratios

1:1 Aspect Ratio
Resolution    Use Case                             Cost
512x512       Social media avatars, icons          1x
768x768       Instagram posts, thumbnails          1.5x
1024x1024     High-quality social media, prints    2x
1536x1536     Large prints, detailed artwork       3x
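The cost multipliers in the table can be applied programmatically when budgeting batch jobs. A sketch (the base price per image is a placeholder you would take from your plan):

```python
# Cost multipliers from the 1:1 resolution table above
COST_MULTIPLIERS = {512: 1.0, 768: 1.5, 1024: 2.0, 1536: 3.0}

def estimate_cost(resolution, base_price, num_images=1):
    """Estimated price for square images at the given edge length."""
    if resolution not in COST_MULTIPLIERS:
        raise ValueError(f"No multiplier listed for {resolution}x{resolution}")
    return base_price * COST_MULTIPLIERS[resolution] * num_images
```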

Quality vs Speed Settings

For rapid iteration and concept exploration
  • Steps: 15-20
  • Guidance Scale: 6-8
  • Generation Time: 1-2 seconds
  • Cost: Standard pricing
Best for: Brainstorming, quick concepts, batch generation
For most production use cases
  • Steps: 25-35
  • Guidance Scale: 7-9
  • Generation Time: 2-4 seconds
  • Cost: Standard pricing
Best for: Social media, web use, general content creation
For professional and print applications
  • Steps: 40-50
  • Guidance Scale: 8-12
  • Generation Time: 4-8 seconds
  • Cost: 1.5x standard pricing
Best for: Print materials, professional work, detailed artwork
For the highest quality results
  • Steps: 60-80
  • Guidance Scale: 10-15
  • Generation Time: 8-15 seconds
  • Cost: 2x standard pricing
Best for: Gallery prints, commercial use, artistic masterpieces
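The four tiers above can be captured as reusable presets. The tier names and exact values below are our own picks from the middle of each documented range, not official settings:

```python
# Step/guidance picks from the mid-points of the ranges above (assumption)
QUALITY_PRESETS = {
    "draft":    {"steps": 18, "guidance_scale": 7},
    "standard": {"steps": 30, "guidance_scale": 8},
    "high":     {"steps": 45, "guidance_scale": 10},
    "maximum":  {"steps": 70, "guidance_scale": 12},
}

def request_params(prompt, preset="standard", **overrides):
    """Build request parameters from a preset, allowing per-call overrides."""
    params = {"prompt": prompt, **QUALITY_PRESETS[preset]}
    params.update(overrides)
    return params
```

For example, `request_params("a red fox", "draft", steps=16)` keeps the draft guidance scale but lowers the step count for an even faster iteration pass.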

Batch Generation and Variations

Generating Multiple Images

import requests

def generate_batch_images(prompt, count=4):
    response = requests.post(
        "https://api.comput3.ai/v1/generate/image",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "stable-diffusion-xl",
            "prompt": prompt,
            "width": 1024,
            "height": 1024,
            "num_images": count,
            "steps": 30,
            "guidance_scale": 7.5
        }
    )
    response.raise_for_status()  # fail fast on HTTP errors

    result = response.json()
    return [img["url"] for img in result["images"]]

# Generate 4 variations
images = generate_batch_images(
    "A futuristic city skyline at night, neon lights, cyberpunk style"
)

Seed Control and Reproducibility

Control randomness for reproducible results:
{
  "model": "stable-diffusion-xl",
  "prompt": "A red sports car on a mountain road",
  "seed": 123456789,
  "width": 1024,
  "height": 1024,
  "steps": 30,
  "guidance_scale": 7.5
}
Seed Benefits:
  • Reproduce exact same image
  • Create systematic variations
  • A/B test different parameters
  • Debug generation issues
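Systematic variations follow directly from seed control: keep every parameter fixed and step the seed. A sketch that builds one request body per seed, using the field names from the JSON example above:

```python
def seed_sweep_payloads(prompt, base_seed=123456789, count=4, **params):
    """One request payload per seed, so each image is individually
    reproducible and parameters can be A/B tested against a fixed seed."""
    defaults = {"model": "stable-diffusion-xl", "width": 1024,
                "height": 1024, "steps": 30, "guidance_scale": 7.5}
    defaults.update(params)
    return [dict(defaults, prompt=prompt, seed=base_seed + i)
            for i in range(count)]
```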

Image Enhancement and Post-Processing

Upscaling and Super-Resolution

Enhance image resolution and quality
curl -X POST "https://api.comput3.ai/v1/enhance/upscale" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image_url": "https://your-image-url.com/image.jpg",
    "scale_factor": 4,
    "model": "realesrgan"
  }'
Options:
  • Scale factors: 2x, 4x, 8x
  • Models: RealESRGAN, ESRGAN, SRCNN
  • Cost: $0.05 per upscale operation
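The same request body can be built in Python, with the documented option ranges checked client-side before spending an upscale credit (a sketch mirroring the curl example above):

```python
VALID_SCALE_FACTORS = (2, 4, 8)
VALID_MODELS = ("realesrgan", "esrgan", "srcnn")

def upscale_payload(image_url, scale_factor=4, model="realesrgan"):
    """Build the upscale request body, rejecting values outside
    the documented scale factors and models."""
    if scale_factor not in VALID_SCALE_FACTORS:
        raise ValueError(f"scale_factor must be one of {VALID_SCALE_FACTORS}")
    if model not in VALID_MODELS:
        raise ValueError(f"model must be one of {VALID_MODELS}")
    return {"image_url": image_url, "scale_factor": scale_factor, "model": model}
```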
Improve facial details and quality
{
  "image_url": "https://your-image-url.com/portrait.jpg",
  "enhancement_type": "face",
  "strength": 0.8,
  "preserve_identity": true
}
Features:
  • Enhance facial features and skin texture
  • Preserve original identity and characteristics
  • Adjustable enhancement strength
  • Cost: $0.03 per enhancement
Remove or replace backgrounds
response = requests.post(
    "https://api.comput3.ai/v1/enhance/background",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "image_url": "https://your-image.jpg",
        "action": "remove",  # or "replace"
        "new_background": "solid_white"  # if replacing
    }
)

Style Transfer and Artistic Effects

Apply artistic styles to existing images
{
  "content_image": "https://your-photo.jpg",
  "style_image": "https://style-reference.jpg", 
  "strength": 0.7,
  "preserve_content": true
}

Commercial Use and Licensing

Usage Rights and Licensing

Images generated on Comput3 Network come with different licensing options depending on your plan and intended use.
Included with all plans.
Allowed:
  • Personal projects and portfolios
  • Educational and research purposes
  • Social media posts (personal accounts)
  • Non-commercial art and creativity
Not Allowed:
  • Commercial sales or licensing
  • Business marketing materials
  • Stock photo services
  • Resale or redistribution
Available with Pro and Enterprise plans.
Allowed:
  • Business marketing and advertising
  • Product packaging and branding
  • Website and app content
  • Print materials and merchandise
  • Client work and services
  • Stock photo creation
Additional Requirements:
  • Attribution may be required for certain uses
  • Some models have specific commercial restrictions
  • Enterprise plans include full commercial rights
Available as an add-on for high-volume use.
Includes:
  • Unlimited commercial usage
  • Resale and redistribution rights
  • No attribution requirements
  • White-label usage rights
  • Custom licensing terms available
Pricing: Contact sales for custom pricing

Model-Specific Considerations

Different AI models may have varying licensing restrictions. Always check the specific terms for the model you’re using.
Stable Diffusion and derivatives
  • Generally permissive licensing
  • Commercial use typically allowed
  • May require attribution in some cases
  • Check specific model cards for details

Troubleshooting and Tips

Common Issues and Solutions

Symptoms: Blurry, low-detail, or distorted images
Solutions:
  • Increase the number of steps (30-50)
  • Adjust guidance scale (7-12 for most models)
  • Use negative prompts to exclude quality issues
  • Try different sampling methods
  • Increase resolution if budget allows
Negative prompt: blurry, low quality, distorted, bad anatomy, 
deformed, ugly, pixelated, grainy, artifacts
Symptoms: Generated image doesn’t match the description
Solutions:
  • Be more specific and detailed in prompts
  • Use attention weighting: (important detail:1.3)
  • Break complex prompts into simpler parts
  • Try different guidance scale values
  • Use negative prompts to exclude unwanted elements
Example Improvement: ❌ “A person in a room” ✅ “A young woman with brown hair sitting in a modern living room, natural lighting, realistic photography style”
Symptoms: Wildly different outputs for the same prompt
Solutions:
  • Use seed values for reproducible results
  • Increase guidance scale for more prompt adherence
  • Use more specific and detailed prompts
  • Try different sampling methods
  • Generate multiple images and select best results
Symptoms: Incorrect hands, faces, or body proportions
Solutions:
  • Use negative prompts: bad anatomy, extra limbs, deformed hands
  • Try models specifically trained for human subjects
  • Use reference images or controlnets
  • Generate multiple versions and select best anatomy
  • Consider post-processing with face/hand enhancement

Optimization Tips

Cost Optimization

  • Start with lower resolution for iteration
  • Use faster models for experimentation
  • Batch similar requests together
  • Optimize prompts to reduce generation attempts

Quality Optimization

  • Spend time crafting detailed prompts
  • Use appropriate negative prompts
  • Choose the right model for your use case
  • Experiment with different parameters

Speed Optimization

  • Use lower step counts for drafts
  • Choose faster models when quality allows
  • Batch multiple images in single requests
  • Use appropriate resolution for final use

Workflow Optimization

  • Save successful prompts and settings
  • Use seeds for reproducible results
  • Create prompt templates for common use cases
  • Organize generated images with metadata
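One simple way to keep prompts, seeds, and settings organized is a JSON sidecar file per image. A minimal sketch (the directory layout and field names are our own convention):

```python
import json
import pathlib
import time

def save_generation_record(image_path, prompt, params, out_dir="runs"):
    """Write a JSON sidecar for a generated image so the prompt,
    seed, and settings that produced it can be reused later."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    record = {
        "image": str(image_path),
        "prompt": prompt,
        "params": params,  # e.g. model, seed, steps, guidance_scale
        "created": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }
    sidecar = out / (pathlib.Path(image_path).stem + ".json")
    sidecar.write_text(json.dumps(record, indent=2))
    return sidecar
```

Searching these sidecars later (e.g. with `grep` or a small script) turns a folder of outputs into a reusable prompt library.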

Next Steps