https://a.storyblok.com/f/288445874393499/1874x831/c75193560f/blog-header.png

Building a market research platform with Parallel Deep Research

Product release · 5 min read

This guide walks through building a market research platform that generates detailed industry reports using Parallel's Deep Research product. The application demonstrates how to create a production-ready system that handles real-time streaming, intelligent input validation, and email notifications.

Key Features

AI-Powered Research: Uses Parallel's Deep Research API with "ultra2x" processor for comprehensive market analysis
Real-Time Progress Streaming: Server-Sent Events (SSE) for live task progress updates with source tracking (see the streaming sketch after this list)
Email Notifications: Optional email alerts via Resend API when reports are ready
Public Report Library: Browse every report generated on the platform
Global Access: No authentication needed; anyone can generate and view reports
Interactive Dashboard: Clean, modern web interface with real-time progress visualization
Download Support: Export reports as Markdown files
Shareable URLs: Each report gets a unique URL slug for easy sharing
Input Validation: Low-latency validation of inputs via Parallel's Chat API
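
To make the streaming piece concrete, here is a minimal sketch of a Flask endpoint that pushes live progress to the browser over Server-Sent Events. The `get_task_status` helper and the route path are hypothetical stand-ins for however the application stores and exposes task state; they are not part of Parallel's API.

```python
import json
import time

from flask import Flask, Response, stream_with_context

app = Flask(__name__)


def get_task_status(slug: str) -> dict:
    """Hypothetical helper: look up the task's latest progress in the database.

    Expected to return something like:
        {"status": "running", "message": "Searching sources...", "sources": [...]}
    """
    raise NotImplementedError


@app.route("/api/reports/<slug>/stream")
def stream_progress(slug: str):
    """Stream task progress to the browser as Server-Sent Events."""

    def event_stream():
        while True:
            status = get_task_status(slug)
            # Each SSE message is a "data: <payload>" line followed by a blank line.
            yield f"data: {json.dumps(status)}\n\n"
            if status.get("status") in ("completed", "failed"):
                break
            time.sleep(2)  # poll the stored task state every couple of seconds

    return Response(
        stream_with_context(event_stream()),
        mimetype="text/event-stream",
        headers={"Cache-Control": "no-cache", "X-Accel-Buffering": "no"},
    )
```

On the client side, an `EventSource` pointed at this route receives each update as it is emitted, which is what drives the progress visualization in the dashboard.
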
Platform Architecture

This market research platform is designed for production use with several key architectural decisions:

Core Components

Flask Backend: Python web framework handling API requests and task management
PostgreSQL Database: Unified schema storing both running tasks and completed reports (see the schema sketch after this list)
Real-time Streaming: Server-Sent Events for live progress updates during research
AI Validation: Parallel's Chat API for intelligent input filtering
Email System: Resend API for user notifications when reports complete
Public Library: Persistent storage enabling report sharing and discovery
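
To illustrate the unified-schema idea, the sketch below creates a single `reports` table that carries a report through its whole lifecycle, from running task to completed, shareable report. The column names are illustrative assumptions rather than the app's exact schema.

```python
import os

import psycopg2

# One table tracks a report end to end: the row is created when the research
# run starts and updated in place when the report completes.
SCHEMA = """
CREATE TABLE IF NOT EXISTS reports (
    id            SERIAL PRIMARY KEY,
    slug          TEXT UNIQUE NOT NULL,             -- shareable URL slug
    run_id        TEXT,                             -- Parallel task run ID, for recovery
    status        TEXT NOT NULL DEFAULT 'running',  -- running | completed | failed
    market        TEXT NOT NULL,
    question      TEXT NOT NULL,
    region        TEXT,
    notify_email  TEXT,                             -- optional address for the Resend notification
    report_md     TEXT,                             -- final report as Markdown
    created_at    TIMESTAMPTZ DEFAULT now(),
    completed_at  TIMESTAMPTZ
);
"""


def init_db() -> None:
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:  # commits on success, rolls back on error
        cur.execute(SCHEMA)
    conn.close()
```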

Design Patterns

The platform implements two key production patterns that ensure reliability. First, multi-layer task completion: every research run is monitored by a background thread, and its run ID is stored so that state can be recovered after a disconnect or failure. This lets the longer-running ultra processors complete gracefully and allows multiple reports to be tracked and kicked off concurrently.
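
A rough sketch of that pattern follows. It assumes a hypothetical `fetch_run_result` wrapper around Parallel's Task API and a hypothetical `save_report` database helper (neither is a real Parallel or library call), and the email step follows the shape of Resend's Python SDK; check the current Resend docs before relying on it.

```python
import threading

import resend  # pip install resend

resend.api_key = "re_..."  # your Resend API key


def fetch_run_result(run_id: str) -> dict:
    """Hypothetical wrapper: block until the Parallel task run finishes and
    return {"markdown": ...} (or raise on failure)."""
    raise NotImplementedError


def save_report(run_id: str, markdown: str) -> None:
    """Hypothetical helper: mark the row for run_id as completed and store the report."""
    raise NotImplementedError


def monitor_run(run_id: str, notify_email: str | None = None) -> None:
    """Background worker: wait for the run, persist the report, then notify."""
    result = fetch_run_result(run_id)
    save_report(run_id, result["markdown"])

    if notify_email:
        # Assumed to follow Resend's Python SDK; adjust to the current API.
        resend.Emails.send({
            "from": "reports@example.com",
            "to": [notify_email],
            "subject": "Your market research report is ready",
            "html": "<p>Your report has finished. Open the app to view it.</p>",
        })


def start_monitor(run_id: str, notify_email: str | None = None) -> None:
    # Daemon thread: the web request returns immediately while research continues.
    thread = threading.Thread(target=monitor_run, args=(run_id, notify_email), daemon=True)
    thread.start()
```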

Second, intelligent input validation: because this is a public application, it is important that the data surfaced to end users is high quality. The Chat API powers a low-latency verification step that checks whether the inputs (market name, question, region) fit the focus of the app, protecting the public library from unrelated data. This two-step process, pairing low-latency validation with high-latency deep research, is a useful framework that can meaningfully improve the user experience in other applications.
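
Assuming Parallel's Chat API can be called through an OpenAI-compatible client, the sketch below uses the standard `openai` SDK pointed at an assumed base URL and an assumed low-latency model name; verify both against Parallel's current documentation before use.

```python
import os

from openai import OpenAI  # pip install openai

# Assumption: Parallel's Chat API is OpenAI-compatible at this base URL and
# exposes a low-latency model named "speed". Confirm both in Parallel's docs.
client = OpenAI(
    api_key=os.environ["PARALLEL_API_KEY"],
    base_url="https://api.parallel.ai",
)

VALIDATION_PROMPT = (
    "You are a gatekeeper for a market research tool. Given a market name, a "
    "research question, and a region, reply with exactly VALID if the inputs "
    "describe a legitimate market research request, or INVALID otherwise."
)


def validate_inputs(market: str, question: str, region: str) -> bool:
    """Cheap pre-flight check before launching an expensive deep research run."""
    response = client.chat.completions.create(
        model="speed",
        messages=[
            {"role": "system", "content": VALIDATION_PROMPT},
            {"role": "user", "content": f"Market: {market}\nQuestion: {question}\nRegion: {region}"},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer == "VALID"
```

Only when this check passes does the app create the expensive Deep Research run, which keeps the public library focused on genuine market research.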

Author: Andrea Giuffrida