
CertiSight AI Reports
Decision‑ready image authenticity reports. We blend multiple AI detectors with traditional forensics and human analysis to give a clear verdict, confidence score, and the evidence behind it.

1. Purpose
Synthetic images and subtle manipulations are increasing quickly. Traditional photo forensics alone is not enough for AI imagery. CertiSight AI combines multiple specialist detectors, classic forensic tests, reverse search, and human review to produce a single, decision-ready conclusion. Every report documents the exact evidence behind the call, with reproducible steps and timestamps.
2. Report tiers
Choose the depth that matches your risk, deadline, and budget.
2.1 Quick Screen
Best for newsroom triage, social teams, trust and safety queues.
- Turnaround: typically under 2 hours for single images.
- Output: short PDF with verdict, confidence score, key detector results, essential reverse search hits, and a basic manipulation check.
- When to use: rapid publish decisions, social virals, low legal exposure.
2.2 Forensic Report
Deep analysis suitable for high‑visibility stories and internal investigations.
- Turnaround: 4 to 24 hours depending on queue and image count.
- Output: full PDF with methodology, multi-model detection, manipulation suite, reverse search timeline, metadata analysis, artifact maps, and detailed notes. Includes a review by a senior analyst.
- When to use: broadcast segments, front-page stories, crisis comms, due diligence, NGO fact-checking.
2.3 Enterprise or Legal‑Grade Report
Highest level for compliance, disputes, and regulator submissions.
- Turnaround: 24 to 72 hours, rush available.
- Output: expanded PDF with chain of custody, cryptographic hashes of evidence files, sworn analyst declaration on request, reproducibility appendix, and retention schedule. Optional live briefing.
- When to use: legal preparation, fraud disputes, brand protection, platform escalations, law-enforcement referrals.
3. What is inside a report
Each report contains the sections below. Depth varies by tier.
- Case summary
  - Who requested, date received, timezone, requested deadline.
  - Source description and claimed context of the image.
  - Report ID, image hashes (SHA-256), and file inventory.
- Verdict and score
  - Primary conclusion and confidence score.
  - A short justification listing the strongest signals.
- Generation detection
  - Results from multiple AI-image detectors, including third-party models and our proprietary classifier trained on hard real-world cases that fool common tools. Scores are reported individually, followed by a calibrated ensemble score.
- Manipulation detection
  - Tests for splices, content-aware fills, upscaling, and local edits on camera images. Includes error level analysis, noise and CFA analysis when applicable, lighting and shadow consistency checks, and face region scrutiny.
- Reverse image and web trace
  - Reverse search hits across open platforms. First-seen timestamps where available. Variants, crops, upscaled versions, and cross-angle comparisons. We note if the earliest appearance was on an AI gallery or generator feed.
- Metadata and file forensics
  - EXIF and XMP fields, camera and lens tags if present, software history, ICC profiles, quantization tables, compression signatures, and anomalies. Flags common to generators, editors, or messaging apps.
- Artifact maps and visuals
  - Heatmaps for detector attention, residual and noise maps, ELA layers, PRNU correlation status where applicable, and crop overlays that show exactly where signals were found.
- Contextual checks
  - Geographic plausibility, weather or celestial cross-checks when relevant, uniform and insignia reference lookups, and depth or reflection consistency for high-stakes cases.
- Limitations and residual risk
  - A clear statement of what the tests cannot guarantee and what further evidence would reduce risk.
- Chain of custody
  - How we received the file, storage location, hash values, analyst actions with timestamps, and export hashes for delivered artifacts.
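The hashing step in the chain of custody can be sketched in a few lines. The example below is illustrative only, using Python's standard `hashlib`, not CertiSight's internal tooling; the chunk size and file path are placeholders.

```python
# Illustrative sketch of evidence hashing: compute a SHA-256 digest in
# fixed-size chunks so large evidence files are never loaded fully into
# memory. Not CertiSight's internal tooling.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of the file at `path`."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording this digest on receipt and again at export lets a reviewer confirm that a delivered artifact matches the evidence on file.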
4. Verdict scale and how to read it
We express conclusions on a five-point scale. Each level also lists typical next steps.
- Authentic
  - Evidence strongly favors a camera-captured image with no material edits. Multiple detectors indicate non-AI. Metadata and artifacts match camera output. Reverse search shows a plausible capture timeline.
  - Next steps: publish with standard caution. Keep the report on file.
- Likely authentic
  - A majority of the evidence points to an authentic capture. Minor artifacts or editing traces are consistent with routine processing such as cropping or contrast. Detectors either agree or are weakly conflicting.
  - Next steps: publish if risk is acceptable. Consider asking the source for an original file or burst sequence.
- Inconclusive
  - Signals conflict or the file quality is too low. Messaging platforms often strip metadata and add compression that obscures traces.
  - Next steps: request the original file and the capture device details. Ask for adjacent frames or live photos. Run a follow-up analysis.
- Likely AI-generated or materially manipulated
  - Strong but not absolute signals of generation or heavy edits. Ensemble score above our action threshold. Reverse search may show similar synthetic variants.
  - Next steps: avoid publication as real. Consider framing as unverified or suspected synthetic. Seek admission or logs from the creator if possible.
- AI-generated or materially manipulated
  - Consistent strong signals across detectors and forensics. Context checks fail or show physical impossibilities. Reverse search finds a generator or prompt trail.
  - Next steps: treat as synthetic. Use the report for takedowns, corrections, or legal steps.
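As an illustration of how a calibrated ensemble score might map onto this five-point scale, the sketch below uses hypothetical thresholds. The real action thresholds are tuned per model family and are not published here.

```python
# Hypothetical mapping from a 0-100 ensemble score (higher = more likely
# synthetic) onto the five-point verdict scale. The band boundaries are
# illustrative placeholders, not CertiSight's actual action thresholds.
VERDICT_BANDS = [
    (10, "Authentic"),
    (35, "Likely authentic"),
    (65, "Inconclusive"),
    (90, "Likely AI-generated or materially manipulated"),
]

def verdict(score: float) -> str:
    """Return the verdict label whose band contains `score`."""
    for upper, label in VERDICT_BANDS:
        if score < upper:
            return label
    return "AI-generated or materially manipulated"
```

A human analyst still makes the final call; the mapping only frames the conversation.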
5. Our detection stack
We use a layered approach. Specific tools evolve as the field moves. At the time of writing our stack includes:
- Third-party generation detectors with proven benchmarks.
- A proprietary classifier trained on hard cases that bypass common detectors, with explainable outputs including region heatmaps.
- Manipulation analysis using industry techniques such as noise and CFA analysis, ELA, resampling and JPEG ghost detection, and face region scrutiny.
- Reverse image search across multiple engines with timeline building.
- LLM-assisted reasoning that aggregates all signals and explains the decision in plain language. Human analysts make the final call.
We continuously retrain on fresh data and incorporate new model families when they prove reliable. Version numbers and model names are listed in each report for transparency.
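As one concrete example from the manipulation suite, error level analysis (ELA) can be sketched in a few lines with the Pillow library (assumed installed). Real ELA workflows also amplify the difference image and interpret it alongside noise and CFA results rather than in isolation.

```python
# Minimal error level analysis (ELA) sketch, assuming Pillow is
# available. Re-save the image as JPEG at a known quality and diff it
# against the original; regions edited after the last save often
# recompress at a different error level and stand out in the diff.
from io import BytesIO
from PIL import Image, ImageChops

def ela_map(image: Image.Image, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between `image` and a JPEG
    re-save of it at the given quality."""
    rgb = image.convert("RGB")
    buf = BytesIO()
    rgb.save(buf, "JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    return ImageChops.difference(rgb, resaved)
```

In practice the difference image is brightness-scaled before inspection, and a uniform ELA response is expected for an unedited single-save JPEG.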
6. Confidence scoring
- Each detector outputs a probability or score on its native scale.
- Scores are calibrated against our internal validation set and mapped to a common 0 to 100 scale.
- The ensemble score is a weighted combination that favors detectors with the best performance on current model families. We also discount correlated detectors to avoid double counting.
- The final confidence is adjusted only when human review finds strong contextual evidence that a model cannot see.
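The steps above can be sketched as follows. The calibration curve is simplified to a linear range map, and the weights and correlation discounts are hypothetical placeholders, not the values used in production.

```python
# Sketch of the scoring pipeline: map each detector's native range onto
# 0-100, then combine with weights that are shrunk for detectors known
# to be correlated with others. All numbers here are illustrative.
def calibrate(raw: float, lo: float, hi: float) -> float:
    """Linearly map a native score in [lo, hi] onto the 0-100 scale."""
    return max(0.0, min(100.0, 100.0 * (raw - lo) / (hi - lo)))

def ensemble(scores, weights, discounts):
    """Weighted mean of calibrated scores; each weight is multiplied by
    a 0-1 discount that reduces double counting among correlated
    detectors."""
    effective = [w * d for w, d in zip(weights, discounts)]
    total = sum(effective)
    return sum(s * w for s, w in zip(scores, effective)) / total
```

For example, two detectors scoring 80 and 60 with equal weights, where the second is discounted to 0.5 for correlation, combine to roughly 73 rather than the naive average of 70.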
7. Evidence handling and privacy
- All files are hashed on receipt. We store original evidence in encrypted storage.
- We minimize uploads to third-party tools. Where an external model must be used, we remove personal identifiers and follow data protection best practices.
- Retention: Quick Screens 90 days. Forensic and Legal reports 12 months by default. Custom retention available under contract.
- We can sign NDAs and provide on-prem or air-gapped processing for sensitive matters.
8. Turnaround times
- Quick Screen: under 2 hours for a single image.
- Forensic: 4 to 24 hours depending on volume and complexity.
- Enterprise or Legal: 24 to 72 hours with optional expedited processing. Large batches and video work are scheduled with a service-level agreement.
9. Pricing overview
Pricing varies by tier, urgency, and volume. Typical ranges:
- Quick Screen: entry-level per-image pricing with volume discounts.
- Forensic: mid-tier pricing per image or set. Batch pricing available.
- Enterprise or Legal: premium pricing reflecting analyst time, declarations, and chain of custody. Contact us for a quote or a yearly plan if you have ongoing needs.
10. What we do not claim
- No detector can guarantee 100 percent certainty. Model families change quickly and novel attacks appear. We disclose uncertainty clearly and recommend additional evidence when necessary.
- We do not attribute content to a specific individual without supporting logs or admissions. We focus on authenticity rather than creator identity.
11. How to request a report
- Email contact@certisightai.com with the image, context, and your deadline.
- If possible, provide the original file from the capture device, not a screenshot or a social media download.
- Tell us how you plan to use the result. We will match the tier and turnaround.
- You receive a secure download link to the PDF report and evidence bundle. We can also brief your team live.