Debug: Campaign Processing Steps

Campaign ID: 202

Step 1: Initialize Scraping Process

API Endpoint: api/start_background_scraping.php (POST)

Database Table: campaigns_impact (SELECT)

| Field | Value |
| --- | --- |
| ID | 202 |
| Title | Test Twitter Campaign - 2026-05-01 07:49:27 |
| Keywords | "Bombay Sweet Shop" OR #bombaysweetshop OR @bombaysweetshop OR bombaysweetshop.com |
| Platforms | ["twitter"] |
| Status | active |
| Date From | 2026-04-24 |
| Date To | 2026-05-01 |
| Client Keywords | BSS, Bombay Sweets |
| Monitor Links | N/A |
| Relevancy Threshold | 0.70 |

â„šī¸ This prepares the campaign for data collection. This step must be completed before Step 2.

Step 2: Collect Data from Platforms

API Endpoint: api/collect_data_step2.php (POST)

Database Tables: twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw (INSERT, SELECT COUNT)

Database Table: background_jobs (SELECT)

â„šī¸ What this means: Background jobs track long-running scraping processes. If no jobs are found, it means either:
â€ĸ Scraping completed and jobs were cleaned up
â€ĸ Scraping happened directly without creating job records (this is normal)
â€ĸ No background scraping has been started yet
✅ Check Step 3 (Data Collection Status) to see if data was actually collected - that's what matters!
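The decision rule in the note above can be written out explicitly. The status strings here are illustrative labels, not values from the real system:

```python
def collection_outlook(job_statuses, total_records):
    """Interpret background_jobs rows the way the note above describes:
    actual record counts outrank job bookkeeping."""
    if total_records > 0:
        return "data collected"  # jobs are irrelevant once rows exist
    if not job_statuses:
        return "not started, or jobs cleaned up"
    if "failed" in job_statuses:
        return "scraping failed, no data"
    return "scraping still running"

# Campaign 202's situation: one failed job, zero raw records.
print(collection_outlook(["failed"], 0))  # scraping failed, no data
```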

| Job ID | Status | Created At | Started At |
| --- | --- | --- | --- |
| 53ee47e1-c5af-4299-ab37-780ee428ae87 | failed | 2026-05-01 07:49:27 | 2026-05-01 09:57:31 |

â„šī¸ This collects 100 results from each selected platform. This may take 10-20 minutes.

Data Collection Status by Platform

â„šī¸ Limits: Loaded from platform_limits table (managed via Settings > Platform Limit)

| Platform | Table Name | Records Count | Limit | Status | Action |
| --- | --- | --- | --- | --- | --- |
| Twitter | twitter_raw | 0 | 1000 | ⚠ No Data | |
| Youtube | youtube_raw | 0 | 100 | ⚠ No Data | |
| Instagram | instagram_raw | 0 | 100 | ⚠ No Data | |
| Facebook | facebook_raw | 0 | 100 | ⚠ No Data | |
| News | news_raw | 0 | 1000 | ⚠ No Data | |
| Blogs | blogs_raw | 0 | 1000 | ⚠ No Data | |
| Total Records | | 0 | - | ⚠ No Data Collected | - |
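The per-platform counts in the table above reduce to one COUNT query per raw table. This is a sketch: the campaign_id filter column and the non-empty status label are assumptions, since only the zero-row case appears in this run.

```python
RAW_TABLES = {
    "Twitter": "twitter_raw",
    "Youtube": "youtube_raw",
    "Instagram": "instagram_raw",
    "Facebook": "facebook_raw",
    "News": "news_raw",
    "Blogs": "blogs_raw",
}

def count_query(platform: str) -> str:
    # A campaign_id filter column is assumed; the real schema may differ.
    return f"SELECT COUNT(*) FROM {RAW_TABLES[platform]} WHERE campaign_id = %s"

def row_status(records: int) -> str:
    # Mirrors the Status column; the "Collected" label is illustrative.
    return "⚠ No Data" if records == 0 else "✓ Collected"

print(count_query("Twitter"))
print(row_status(0))  # ⚠ No Data
```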

Step 3: AI Relevancy Analysis

API Endpoint: api/check_raw_mentions.php (POST)

OpenAI key: loaded from api_keys_db.api_keys, falling back to the OPENAI_API_KEY environment variable and then the config file. Model: gpt-4o-mini.

Database Tables: twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw (SELECT), ai_relevancy_results (INSERT)

â„šī¸ Run sends every raw row to the API with force_reprocess (full pass, re-scores by AI). Re-Run does the same. Batches of 50; large campaigns can take 15+ minutes.

AI Relevancy Results

Database Table: ai_jobs (SELECT)

No AI jobs found for this campaign. Click the button above to create AI jobs from raw data.

Total AI Relevancy Results: 0

No AI relevancy results found. AI processing may not have completed yet.

Step 4: Complete Processing & Save Results

API Endpoint: api/sync_campaign_articles.php (POST)

Database Table: campaign_articles (INSERT/UPDATE, SELECT COUNT, SELECT)

â„šī¸ This finalizes all collected data and syncs relevant items to campaign_articles.

Campaign Articles (Final Processed Results)

Total Campaign Articles: 0

â„šī¸ To create campaign articles:
1. Process AI jobs (Step 3) to analyze relevancy
2. Items with relevancy score â‰Ĩ 70% (campaign threshold: 0.7) are considered relevant
3. Use the sync button above if relevant items exist in ai_relevancy_results
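The relevancy cut in step 2 of the list above is a plain threshold comparison, using this campaign's configured 0.70:

```python
def is_relevant(score: float, threshold: float = 0.70) -> bool:
    """Items at or above the campaign threshold count as relevant."""
    return score >= threshold

print(is_relevant(0.70))  # True  (exactly at the threshold qualifies)
print(is_relevant(0.69))  # False
```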

No campaign articles found. Final processing may not have completed yet.

Additional: Background Jobs Status

Database Table: background_jobs (SELECT)

â„šī¸ Background jobs track long-running scraping processes. This is optional tracking.

| Job ID | Status | Created At | Started At |
| --- | --- | --- | --- |
| 53ee47e1-c5af-4299-ab37-780ee428ae87 | failed | 2026-05-01 07:49:27 | 2026-05-01 09:57:31 |

Additional: AI Processing Jobs Status

Database Table: ai_jobs (SELECT)

No AI jobs found for this campaign

📋 Complete Summary Report

📊 Processing Status Summary

| Metric | Value |
| --- | --- |
| Campaign ID | 202 |
| Campaign Status | active |
| Relevancy Threshold | 0.70 |
| Total Raw Records Collected | 0 |
| AI Relevancy Results | 0 |
| Campaign Articles (Final) | 0 |
| Background Jobs | 1 |
| AI Jobs | 0 |

🌐 External APIs Used

| API Service | Provider | Usage |
| --- | --- | --- |
| apidojo~tweet-scraper | Apify | Twitter data collection (Step 2) |
| streamers~youtube-scraper | Apify | YouTube data collection (Step 2) |
| apify~instagram-post-scraper | Apify | Instagram data collection (Step 2) |
| ScrapingDog API | ScrapingDog | News & Blogs data collection (Step 2) |
| OpenAI API | OpenAI | AI relevancy analysis (Step 3) |
| Gemini API | Google | AI relevancy analysis (Step 3, alternative) |

📊 Database Tables Used

| Step | API Endpoint | Table Name | Operation |
| --- | --- | --- | --- |
| Step 1 | api/start_background_scraping.php | campaigns_impact (relevancy_threshold) | SELECT |
| Step 2 | api/collect_data_step2.php | twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw | INSERT, SELECT COUNT |
| Step 3 | api/check_raw_mentions.php | twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw, ai_relevancy_results (platform_post_id, content_hash, processing_state, raw_item_id, raw_table_name) | SELECT, INSERT |
| Step 4 | api/sync_campaign_articles.php | campaign_articles (platform_post_id, matched_terms, ai_reasoning, relevancy_label, sentiment_score, raw_item_id, raw_table_name) | INSERT/UPDATE, SELECT COUNT, SELECT |

🔄 Processing Flow Summary

| Step | Description | API Endpoint | Duration |
| --- | --- | --- | --- |
| Step 1 | Initialize scraping process | api/start_background_scraping.php | 10-20 seconds |
| Step 2 | Collect data from platforms (100 results per platform) | api/collect_data_step2.php | 10-20 minutes |
| Step 3 | AI relevancy analysis (50 mentions per batch) | api/check_raw_mentions.php | 5-7 minutes |
| Step 4 | Complete processing & save results | api/sync_campaign_articles.php | 1-2 minutes |
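Taken together, the four steps form a strictly sequential pipeline. This sketch drives the endpoints in order through a caller-supplied `post` function; the endpoint paths come from the tables above, but the payload shape is an assumption.

```python
PIPELINE = [
    "api/start_background_scraping.php",
    "api/collect_data_step2.php",
    "api/check_raw_mentions.php",
    "api/sync_campaign_articles.php",
]

def run_pipeline(post, campaign_id: int):
    """Call each step's endpoint in order.

    `post(endpoint, payload)` is supplied by the caller, e.g. a thin
    wrapper around requests.post against the app server.
    """
    results = {}
    for endpoint in PIPELINE:
        results[endpoint] = post(endpoint, {"campaign_id": campaign_id})
    return results

# Dry run with a fake transport that just echoes the campaign id:
calls = run_pipeline(lambda ep, body: body["campaign_id"], 202)
print(len(calls))  # 4
```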

✓ Complete!

All steps debugged successfully!

Campaign ID: 202 | Raw Records: 0 | AI Results: 0 | Articles: 0