Debug: Campaign Processing Steps

Campaign ID: 1


Step 1: Initialize Scraping Process

API Endpoint: api/start_background_scraping.php (POST)

Database Table: campaigns_impact (SELECT)

| Field | Value |
|---|---|
| ID | 1 |
| Title | Test Campaign 1 |
| Keywords | N/A |
| Platforms | N/A |
| Status | active |
| Date From | 2026-05-01 |
| Date To | 2026-05-31 |
| Client Keywords | N/A |
| Monitor Links | N/A |
| Relevancy Threshold | 0.70 |

â„šī¸ This prepares the campaign for data collection. This step must be completed before Step 2.

Step 2: Collect Data from Platforms

API Endpoint: api/collect_data_step2.php (POST)

Database Tables: twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw (INSERT, SELECT COUNT)

Database Table: background_jobs (SELECT)

â„šī¸ What this means: Background jobs track long-running scraping processes. If no jobs are found, it means either:
â€ĸ Scraping completed and jobs were cleaned up
â€ĸ Scraping happened directly without creating job records (this is normal)
â€ĸ No background scraping has been started yet
✅ Check Step 3 (Data Collection Status) to see if data was actually collected - that's what matters!

No background jobs found for this campaign

This is normal! Background jobs are optional tracking records. What matters is whether data was collected - check the Data Collection Status table below.

â„šī¸ This collects 100 results from each selected platform. This may take 10-20 minutes.

Data Collection Status by Platform

â„šī¸ Limits: Loaded from platform_limits table (managed via Settings > Platform Limit)

| Platform | Table Name | Records Count | Limit | Status | Action |
|---|---|---|---|---|---|
| Twitter | twitter_raw | 0 | 1000 | ⚠ No Data | |
| Youtube | youtube_raw | 0 | 100 | ⚠ No Data | |
| Instagram | instagram_raw | 5 | 100 | ✓ Data Collected | |
| Facebook | facebook_raw | 3 | 100 | ✓ Data Collected | |
| News | news_raw | 0 | 1000 | ⚠ No Data | |
| Blogs | blogs_raw | 0 | 1000 | ⚠ No Data | |
| Total Records | | 8 | - | ✓ Data Available | - |
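The per-platform status shown above follows directly from the record counts. A small helper that reproduces that logic; the function name is illustrative, not from the codebase:

```python
def platform_status(count: int) -> str:
    # A platform shows "Data Collected" as soon as at least one raw record exists.
    return "✓ Data Collected" if count > 0 else "⚠ No Data"

counts = {"twitter_raw": 0, "youtube_raw": 0, "instagram_raw": 5,
          "facebook_raw": 3, "news_raw": 0, "blogs_raw": 0}
total = sum(counts.values())
statuses = {table: platform_status(n) for table, n in counts.items()}
print(total)  # 8, matching the Total Records row
```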

Step 3: AI Relevancy Analysis

API Endpoint: api/check_raw_mentions.php (POST)

OpenAI key: loaded from api_keys_db.api_keys (then OPENAI_API_KEY env, then config). Model: gpt-4o-mini.
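The key lookup order described above (database table, then OPENAI_API_KEY environment variable, then config) is a simple fallback chain. A sketch under that assumption; the helper name and config shape are hypothetical:

```python
import os

def resolve_openai_key(db_key, config_key):
    # Order matches the description: api_keys_db.api_keys first,
    # then the OPENAI_API_KEY environment variable, then config.
    return db_key or os.environ.get("OPENAI_API_KEY") or config_key

# A database-stored key takes precedence over everything else.
key = resolve_openai_key("sk-from-db", "sk-from-config")  # -> "sk-from-db"
```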

Database Tables: twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw (SELECT), ai_relevancy_results (INSERT)

â„šī¸ Run sends every raw row to the API with force_reprocess (full pass, re-scores by AI). Re-Run does the same. Batches of 50; large campaigns can take 15+ minutes.

AI Relevancy Results

Database Table: ai_jobs (SELECT)

No AI jobs found for this campaign. Click the button above to create AI jobs from raw data.

Total AI Relevancy Results: 0

No AI relevancy results found. AI processing may not have completed yet.

Step 4: Complete Processing & Save Results

API Endpoint: api/sync_campaign_articles.php (POST)

Database Table: campaign_articles (INSERT/UPDATE, SELECT COUNT, SELECT)

â„šī¸ This finalizes all collected data and syncs relevant items to campaign_articles.

Campaign Articles (Final Processed Results)

Total Campaign Articles: 2

Breakdown by Platform:

| Platform | Articles Count |
|---|---|
| instagram | 1 |
| twitter | 1 |

Sample Articles (Latest 5):

| ID | Platform | Platform Post ID | Title/Content | Relevancy Score | Relevancy Label | Sentiment | Sentiment Score | Raw Item ID | Raw Table | Created At |
|---|---|---|---|---|---|---|---|---|---|---|
| 49410 | twitter | N/A | Breaking: Our new feature is now live! 🚀 This is g... | 0.8800 | N/A | positive | 0.7800 | N/A | N/A | 2026-04-22 13:29:23 |
| 49409 | instagram | N/A | Check out our latest product launch! Amazing featu... | 0.9000 | N/A | positive | 0.8500 | N/A | N/A | 2026-04-22 13:29:23 |

Additional: Background Jobs Status

Database Table: background_jobs (SELECT)

â„šī¸ Background jobs track long-running scraping processes. This is optional tracking.

No background jobs found for this campaign

Additional: AI Processing Jobs Status

Database Table: ai_jobs (SELECT)

No AI jobs found for this campaign

📋 Complete Summary Report

📊 Processing Status Summary

| Metric | Value |
|---|---|
| Campaign ID | 1 |
| Campaign Status | active |
| Relevancy Threshold | 0.70 |
| Total Raw Records Collected | 8 |
| AI Relevancy Results | 0 |
| Campaign Articles (Final) | 2 |
| Background Jobs | 0 |
| AI Jobs | 0 |

🌐 External APIs Used

| API Service | Provider | Usage |
|---|---|---|
| apidojo~tweet-scraper | Apify | Twitter data collection (Step 2) |
| streamers~youtube-scraper | Apify | YouTube data collection (Step 2) |
| apify~instagram-post-scraper | Apify | Instagram data collection (Step 2) |
| ScrapingDog API | ScrapingDog | News & Blogs data collection (Step 2) |
| OpenAI API | OpenAI | AI relevancy analysis (Step 3) |
| Gemini API | Google | AI relevancy analysis (Step 3, alternative) |

📊 Database Tables Used

| Step | API Endpoint | Table Name | Operation |
|---|---|---|---|
| Step 1 | api/start_background_scraping.php | campaigns_impact (relevancy_threshold) | SELECT |
| Step 2 | api/collect_data_step2.php | twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw | INSERT, SELECT COUNT |
| Step 3 | api/check_raw_mentions.php | twitter_raw, youtube_raw, instagram_raw, news_raw, blogs_raw, facebook_raw, ai_relevancy_results (platform_post_id, content_hash, processing_state, raw_item_id, raw_table_name) | SELECT, INSERT |
| Step 4 | api/sync_campaign_articles.php | campaign_articles (platform_post_id, matched_terms, ai_reasoning, relevancy_label, sentiment_score, raw_item_id, raw_table_name) | INSERT/UPDATE, SELECT COUNT, SELECT |

🔄 Processing Flow Summary

| Step | Description | API Endpoint | Duration |
|---|---|---|---|
| Step 1 | Initialize scraping process | api/start_background_scraping.php | 10-20 seconds |
| Step 2 | Collect data from platforms (up to the per-platform limit) | api/collect_data_step2.php | 10-20 minutes |
| Step 3 | AI relevancy analysis (50 mentions per batch) | api/check_raw_mentions.php | 5-7 minutes |
| Step 4 | Complete processing & save results | api/sync_campaign_articles.php | 1-2 minutes |
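The four endpoints must run strictly in order. A dry-run sketch of the orchestration; the client code is hypothetical (a real client would POST to each endpoint with the campaign ID and wait for completion before moving on):

```python
PIPELINE = [
    ("Step 1", "api/start_background_scraping.php"),
    ("Step 2", "api/collect_data_step2.php"),
    ("Step 3", "api/check_raw_mentions.php"),
    ("Step 4", "api/sync_campaign_articles.php"),
]

def pipeline_order():
    # Dry run: the POST targets in the order they must execute.
    return [endpoint for _, endpoint in PIPELINE]

for endpoint in pipeline_order():
    print("POST", endpoint)
```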

✓ Complete!

All steps debugged successfully!

Campaign ID: 1 | Raw Records: 8 | AI Results: 0 | Articles: 2