1) Purpose and Safety Commitments
- Protect users, especially minors, from harm and exploitation.
- Uphold free expression while preventing illegal or abusive content.
- Provide predictable, consistent enforcement and clear avenues for reporting and appeal.
- Meet or exceed applicable legal and platform obligations worldwide.
- Continuously improve through data, audits, and community feedback.
2) Definitions
- Content: Any user-generated audio, video, text, images, overlays, gifts, commerce listings, and links, including off-platform content directly connected to a stream.
- Live content: Real-time broadcast plus co-streams, guest appearances, and screen shares.
- Replay/VOD: Archived live streams and highlights.
- Chat: Real-time comments, stickers, gifts, and reactions.
- Prohibited content: Not allowed under any circumstances.
- Restricted content: Allowed only with conditions (e.g., age-gating, labels, contextual limits).
- Minors: Individuals under 18, unless local law defines otherwise.
- Severe harm (S0/S1): Imminent risk to life or illegal content (e.g., CSAM, active threats).
3) Legal and Standards Alignment
- Child safety: COPPA, UK Online Safety Act, EU DSA, and mandatory reporting to NCMEC or local equivalent.
- Privacy and data: GDPR, CCPA/CPRA, PIPEDA, LGPD, ePrivacy, and telecom/broadcasting rules where applicable.
- IP and takedowns: DMCA/EU Copyright directives; repeat infringer policies.
- Counter-terrorism: GIFCT standards and hash-sharing as applicable.
- Consumer and advertising: Local consumer protection, gambling, and alcohol marketing laws.
- Platform distribution: App Store/Google Play developer policies.
- Where requirements conflict, the stricter standard applies.
4) How Monitoring Works (Multi-Layer Safety)
A. Pre-Live
- Account risk checks: KYC/age verification for monetization; sanctions screening; device and behavior risk scores.
- Creator onboarding: Safety training, policy acknowledgement, and feature-gating for new accounts.
- Stream setup scans: Title, description, thumbnail, and scheduled guests scanned for risk signals.
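To make the pre-live layer concrete, the sketch below shows one way these signals could be combined into a single gating decision. All class names, weights, and thresholds are illustrative assumptions, not Coco's actual implementation.

```python
# Illustrative sketch only: signal names and thresholds are assumptions,
# not the platform's real pre-live implementation.
from dataclasses import dataclass

@dataclass
class PreLiveSignals:
    age_verified: bool      # KYC/age verification passed (required for monetization)
    sanctions_hit: bool     # sanctions screening flagged the account
    device_risk: float      # 0.0 (clean) .. 1.0 (high-risk device/behavior score)
    metadata_risk: float    # 0.0 .. 1.0 risk from title/description/thumbnail scans
    new_account: bool       # account younger than the feature-gating window

def pre_live_decision(s: PreLiveSignals) -> str:
    """Return a coarse gate decision: 'allow', 'restrict', or 'block'."""
    if s.sanctions_hit:
        return "block"      # hard legal stop, no override
    if s.metadata_risk >= 0.9:
        return "block"      # clearly violating setup (e.g., explicit thumbnail)
    if s.new_account or s.device_risk >= 0.6 or s.metadata_risk >= 0.5:
        return "restrict"   # feature-gate: no monetization, reduced reach
    return "allow"

# Example: a new account with a borderline thumbnail is feature-gated, not blocked.
print(pre_live_decision(PreLiveSignals(True, False, 0.2, 0.55, True)))  # -> "restrict"
```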
B. During Live
- Safety buffer: A short latency buffer (e.g., 3–10 seconds) to enable real-time intervention.
- Automated signals: ASR (speech-to-text), CV (nudity/violence), OCR (on-screen text), link scanning, spam/bot detection, and keyword filters with context-aware models.
- Human-in-the-loop: Real-time moderator dashboards with instant mute/blur/end-stream controls, escalation paths, and priority queues for severe harm.
- User tools: One-click report, block, mute, hide chat, and opt-in content controls (e.g., profanity filter).
- Creator tools: Slow mode, blocked words list, chat-only for followers/subscribers, guest approval, co-host management, and screen share restrictions.
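The sketch below illustrates how an automated detection raised during the safety buffer might be routed either to an immediate, reversible action or to the human priority queues described above. Every name, severity code, and threshold is an assumption for illustration only.

```python
# Hypothetical sketch of the live-signal routing loop; model calls are stubbed
# and all thresholds are assumptions, not production values.
from dataclasses import dataclass

@dataclass
class Detection:
    category: str      # e.g., "csam", "nudity", "hate", "spam"
    severity: str      # "S0", "S1", "S2", "S3"
    confidence: float  # 0.0 .. 1.0

def route_detection(d: Detection) -> str:
    """Decide what happens within the 3-10 second safety buffer."""
    if d.severity == "S0" and d.confidence >= 0.8:
        return "halt_stream_and_page_moderator"  # automated halt + human within 1 minute
    if d.severity in ("S0", "S1"):
        return "priority_human_queue"            # severe-harm queue; blur/mute available
    if d.severity == "S2" and d.confidence >= 0.9:
        return "auto_mute_or_blur"               # reversible action, logged for review
    return "standard_review_queue"

print(route_detection(Detection("nudity", "S1", 0.72)))  # -> "priority_human_queue"
```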
C. Post-Live
- VOD scan: Re-analysis of the full stream and chat for issues missed in real time.
- Retrospective enforcement: Takedowns, age-gating, demonetization, and education for borderline violations.
- Evidence preservation: Secure retention for investigations, law enforcement requests, and appeals.
D. Triage and SLAs (targets)
- S0 imminent harm/CSAM/violent threats: automated halt + moderator within 1 minute; law enforcement escalation as required.
- S1 illegal content/hate/sexual exploitation/terror propaganda: action within 5 minutes during live.
- S2 abusive, dangerous, or fraudulent content: within 30 minutes during live; 24 hours for VOD.
- S3 policy/quality issues: within 72 hours.
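A minimal sketch of the triage targets above, assuming a simple severity-to-SLA lookup. The live targets mirror the policy text; the VOD targets for S0, S1, and S3 are assumptions, since the policy only specifies a VOD target for S2.

```python
# Illustrative mapping of the triage targets; values are targets, not guarantees.
from datetime import timedelta

TRIAGE_SLA = {
    # severity: (live-response target, VOD-response target)
    "S0": (timedelta(minutes=1),  timedelta(minutes=1)),   # VOD value assumed
    "S1": (timedelta(minutes=5),  timedelta(hours=24)),    # VOD value assumed
    "S2": (timedelta(minutes=30), timedelta(hours=24)),
    "S3": (timedelta(hours=72),   timedelta(hours=72)),
}

def sla_breached(severity: str, elapsed: timedelta, live: bool = True) -> bool:
    target = TRIAGE_SLA[severity][0 if live else 1]
    return elapsed > target

print(sla_breached("S1", timedelta(minutes=7)))  # -> True (missed the 5-minute live target)
```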
5) Content Standards
5.1 Prohibited Content (no exceptions)
- Child sexual exploitation and abuse (CSAM); sexualization of minors; grooming; nudity or suggestive posing of minors; sexual comments toward minors; facilitating meetings with minors.
- Sexual content involving explicit nudity or sex acts; pornography; sexual services; sexual extortion and non-consensual intimate imagery.
- Incitement or credible threats of violence; glorification of violent acts; instructions to commit violence.
- Terrorism/violent extremism: praise, support, recruitment, fundraising, or symbols used to promote such groups (contextual news or academic discussion only with careful, non-promotional framing).
- Human trafficking and exploitation; doxxing, extortion, blackmail.
- Illegal or regulated goods/services: sale or solicitation of illegal drugs, prescription drugs, counterfeit goods, hacked data, explosives; instructions to create illegal weapons or evade law enforcement.
- Severe harassment and hate: dehumanization, slurs, or segregation against protected classes (e.g., race, ethnicity, nationality, religion, gender, gender identity, sexual orientation, disability, serious disease); denial or praise of genocides/atrocities.
- Graphic violence and gore meant to shock or disgust; live execution or torture imagery.
- Self-harm and suicide content that depicts, encourages, or instructs self-harm; “suicide games” or challenges.
- Non-consensual behavior: sexual content without consent, hidden cameras, deepfakes of real persons without consent.
- Fraud, scams, and deceptive practices: phishing, impersonation of staff or creators, pyramid schemes, romance scams.
- Invasion of privacy: sharing personal data (doxxing), medical or financial records, home addresses or precise geolocation without consent.
- Evasion: ban evasion, platform manipulation, coordinated inauthentic behavior, misuse of reporting systems.
5.2 Restricted Content (conditions apply; may be age-gated, limited, labeled, or demonetized)
- Adult themes without explicit nudity (e.g., dating talk, non-explicit art/education): must avoid sexualization of minors and fetish content; apply age-gating and appropriate labels.
- Non-graphic depictions or discussions of violence in a news/educational context: contextual framing required; no praise or instructions.
- Weapons handling in educational/sporting context: comply with laws; no minors handling; no instructions to bypass safety or laws; safety gear required.
- Alcohol, tobacco, and vaping: no minors; no binge-drinking games; local law compliance; demonetization likely.
- Gambling and simulated gambling: licensed operators only; strict age-gating, geo-fencing; disclosures of odds and risk; no minors.
- Dangerous activities and stunts: professional settings only with safety gear and visible precautions; no encouragement of imitation; age-gate and demonetize.
- Health and medical information: no false claims of cures; no promotion of unsafe treatments; include disclaimers; cite reputable sources when educational.
- Political content: disallow paid political ads unless approved with jurisdictional compliance; require disclosures for sponsored political content; civic misinformation is prohibited (see 5.3).
5.3 Misinformation and Integrity
- Prohibited: demonstrably false claims that risk imminent harm (e.g., dangerous medical disinformation, instructions to interfere with civic processes).
- Civic integrity: disallow content that discourages lawful voting or spreads false voting logistics; no coordinated manipulation.
- Allowed with context: good-faith debate, opinion, satire; apply labels and reduce reach for borderline cases.
5.4 Intellectual Property and Counterfeits
- Unauthorized rebroadcasts, music, or video and sales of counterfeit goods are prohibited; DMCA/EU notice-and-takedown procedures are honored.
- Repeat infringers face escalating penalties, up to account termination.
6) Minors’ Safety (heightened protections)
- Eligibility: under 13 not allowed; 13–17 limited features; parental/guardian consent where required by law.
- Appearance of minors: no sexualization; age-appropriate clothing and activities; no suggestive dances; no gifts that could be construed as sexual.
- Interactions: strict filtering of chat; default blocked-words list; follower-only chat; no encouragement of private off-platform contact.
- Monetization: limited or disabled for minors; no direct solicitation of gifts; guardian-managed payouts where permitted.
- Location and privacy: no sharing of school, address, real-time location, or schedules.
- Grooming detection: proactive signals monitored; fast escalation to Safety and, when required, to authorities.
7) Creator Responsibilities
- Acknowledge policy on first stream and after each major update.
- Use safety tools: slow mode, blocked terms, guest approval, and age-appropriateness settings.
- Manage guests and co-hosts: hosts are responsible for content on their stream, including guest behavior and on-screen overlays.
- Disclose sponsorships and paid promotions.
- Avoid encouraging risky challenges; do not offer rewards for violating policies.
- Respect privacy: obtain consent before filming others; avoid sensitive locations.
8) Chat and Community Standards
- No harassment, hate, threats, or sexual advances, especially toward minors.
- No spam, flooding, link bait, or malware links; URL shorteners may be restricted.
- No doxxing or sharing personal data.
- Moderation tools: temporary and permanent timeouts, follower/subscriber-only modes, link posting limited to trusted roles, and keyword filters.
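As one illustration of how the blocked-words and slow-mode tools above might be enforced at message time, here is a minimal sketch; the term list, cooldown, and function names are placeholders rather than the platform's real configuration.

```python
# Placeholder sketch of chat-level controls: blocked-words filter plus slow mode.
import time

BLOCKED_TERMS = {"examplebadword", "anotherbadword"}  # creator/moderator maintained
SLOW_MODE_SECONDS = 10                                # per-user cooldown when enabled
_last_message_at: dict[str, float] = {}               # user_id -> last post timestamp

def allow_chat_message(user_id: str, text: str, slow_mode: bool = True) -> bool:
    """Return True if the message may be posted, False if filtered or rate-limited."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return False
    if slow_mode:
        now = time.monotonic()
        if now - _last_message_at.get(user_id, 0.0) < SLOW_MODE_SECONDS:
            return False
        _last_message_at[user_id] = now
    return True
```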
9) Off-Platform Links and Behavior
- Disallow links to illegal content, piracy, or adult services; block known bad domains.
- Off-platform behavior that poses clear risk (e.g., organizing violence, exploitation) may result in on-platform enforcement where supported by evidence.
10) Commerce, Gifts, and Anti-Fraud
- Prohibit sale of illegal goods; permit regulated goods only where compliant and pre-approved.
- Anti-money-laundering: KYC for payouts, sanctions screening, unusual activity detection.
- Gifts: no coercive solicitation; prevent “pay-to-humiliate” or unsafe acts in exchange for money; stricter rules for minors.
- Transparent fees and refund policies; investigator review of chargebacks and scams.
11) Enforcement Actions (graduated and proportional)
Possible actions (applied to content, stream, account, and devices):
- Stream-level: blur, mute, remove chat, disable screen sharing, end stream.
- Content-level: removal, age-gating, labeling, reduced discoverability, demonetization.
- Account-level: warnings, feature limits, temporary suspensions, permanent bans, monetization removal, payout holds, device bans for severe cases.
- Network-level: IP/range limits, link/domain blocks, hash-matching for previously removed content.
Strikes and repeat-offender policy:
- Non-severe violations: 1st warning, 2nd temporary suspension (e.g., 1–7 days), 3rd longer suspension (e.g., 14–30 days), 4th permanent ban. Strikes may expire after a set period if no new violations.
- Severe violations (e.g., CSAM, terror propaganda, credible violent threats, sexualization of minors): immediate permanent ban; report to authorities as required.
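The strike ladder above can be summarized in code. The sketch below follows the durations in the policy text; the 90-day expiry window and the data model are assumptions.

```python
# Sketch of the graduated strike ladder; expiry window is an assumed value.
from datetime import datetime, timedelta

STRIKE_EXPIRY = timedelta(days=90)  # assumed; policy says strikes "may expire after a set period"

def active_strikes(strike_dates: list[datetime], now: datetime) -> int:
    return sum(1 for d in strike_dates if now - d < STRIKE_EXPIRY)

def enforcement_for(strike_dates: list[datetime], severe: bool, now: datetime) -> str:
    if severe:
        return "permanent_ban"                     # severe violations skip the ladder
    count = active_strikes(strike_dates, now) + 1  # include the current violation
    if count == 1:
        return "warning"
    if count == 2:
        return "suspension_1_to_7_days"
    if count == 3:
        return "suspension_14_to_30_days"
    return "permanent_ban"
```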
12) Reporting, Appeals, and User Controls
- Reporting: in-stream and profile-level reporting with clear categories and space for evidence; a streamlined report option for minors’ safety.
- User controls: block, mute, hide content, topic and keyword filters, sensitive content opt-in.
- Acknowledgement: users receive confirmation and a reference ID.
- Appeals: available for most enforcement actions except where legally restricted; decisions targeted within 7–14 days; multi-level review for permanent bans.
- Notice to users: explain what was removed, why, and how to appeal; include snippet or policy reference where feasible.
- Trusted flaggers: expedited channels for vetted partners (NGOs, regulators) with high-accuracy expectations and audits.
13) Crisis and Emergency Protocols
- Imminent self-harm or threats to others: immediate stream halt, wellness resources surfaced, specialized moderators engaged, and emergency escalation according to local law and platform capabilities.
- Child safety: immediate removal, account termination, evidence preservation, mandatory reporting (e.g., NCMEC in the U.S.).
- Real-world emergencies: if a user seeks urgent help on-stream, prompt them to contact local emergency services; provide in-product resources when available.
- Documentation: maintain secure incident logs and decisions for audit.
14) Moderator Operations and Wellbeing
- Training: legal basics (CSAM, privacy, IP), cultural context, bias mitigation, mental health first aid, and policy calibration with examples.
- Wellbeing: rotations, content blurring by default, counseling access, mandatory breaks, and limits on exposure to graphic material.
- Quality assurance: sampling, double-blind reviews, calibration sessions, measurable accuracy targets.
- Vendor management: confidentiality, data security, performance SLAs, and audit rights for third-party moderation partners.
- Conflict of interest: moderators may not review accounts where a conflict exists.
15) AI and Automation Governance
- Human-in-the-loop: automated systems do not make irreversible decisions on borderline cases.
- Transparency: disclose the use of automated moderation in the policy center.
- Bias and performance: measure false positive/negative rates across languages and demographics; run periodic fairness audits.
- Safety thresholds: err on the side of caution for S0/S1 categories; ensure quick human override.
- Model updates: staged rollouts, backtesting, and post-release monitoring.
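One way to express the human-in-the-loop rule above is as an explicit gate that automation must pass before acting. The thresholds and the definition of the "borderline" band below are illustrative assumptions.

```python
# Hedged sketch: automation may take reversible actions at high confidence, but
# irreversible or borderline decisions always route to a person. Values are assumptions.
AUTO_ACTION_CONFIDENCE = 0.95
BORDERLINE_BAND = (0.40, 0.95)  # scores in this band count as "borderline" by assumption

def automation_may_act(severity: str, confidence: float, irreversible: bool) -> bool:
    if irreversible:
        return False                 # e.g., permanent bans always need a human
    if severity in ("S0", "S1"):
        return confidence >= 0.80    # err on the side of caution, with fast human override
    return confidence >= AUTO_ACTION_CONFIDENCE

def needs_human_review(confidence: float) -> bool:
    low, high = BORDERLINE_BAND
    return low <= confidence < high
```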
16) Data Governance and Privacy
- Data minimization: collect only what is necessary for safety and operations.
- Security: encryption in transit and at rest; strict access controls; audit logs.
- Retention: store removed content and logs only as long as needed for appeals, legal compliance, and investigations (e.g., 90 days standard; longer for severe cases or legal holds).
- Anonymization: use aggregated or anonymized data for transparency reports where possible.
- User rights: enable access, deletion, and correction requests in compliance with local laws.
- Law enforcement: clear process for emergency requests and legal orders; require appropriate legal process; transparency reporting unless law prohibits notice.
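A hedged sketch of how the retention bullets above might be encoded as configuration; the 90-day standard comes from the policy text, while the other periods are placeholder assumptions to be set with Legal.

```python
# Illustrative retention schedule; only the 90-day standard is taken from the policy.
from datetime import timedelta

RETENTION = {
    "removed_content_standard": timedelta(days=90),
    "severe_harm_evidence":     timedelta(days=365),  # assumed longer hold for S0/S1 cases
    "legal_hold":               None,                 # retained until the hold is lifted
    "moderation_audit_logs":    timedelta(days=180),  # assumed value
}

def may_purge(record_type: str, age: timedelta) -> bool:
    limit = RETENTION[record_type]
    return limit is not None and age > limit
```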
17) Accessibility and Localization
- Provide policy and reporting in supported languages with accurate translations.
- Offer accessible reporting options (e.g., screen reader support, clear contrast, captions).
- Adjust enforcement and content labels to regional legal requirements while upholding global safety baselines.
18) Transparency and Accountability
- Publish regular transparency reports: volumes of removals, categories, appeals and outcomes, government requests, error rates where feasible.
- Provide a policy change log and anticipated timelines for major updates.
- Engage with civil society and expert advisory groups; integrate feedback.
19) Policy Governance and Updates
- Ownership: Trust & Safety leads; Legal approves; cross-functional input from Product, Engineering, Support, and Communications.
- Review cadence: at least every 6 months or after significant regulatory changes or incidents.
- Change management: train moderators and creators on updates; in-product notifications; update help center and onboarding.
20) Glossary and Additional Guidance
- Protected characteristics: race, ethnicity, nationality, religion, caste, gender, gender identity, sexual orientation, disability, serious disease, and other traits protected by law.
- Contextual exceptions: news, documentary, academic, and artistic contexts may be allowed if non-promotional, necessary to the narrative, and appropriately labeled/age-gated.
- Edge cases: when in doubt, limit exposure (age-gate, reduce reach), add context labels, and escalate for human review.
Operational Playbooks (summarized)
- S0/S1 playbook: automated halt, immediate human review, secure evidence capture, mandatory reporting where applicable, legal consultation.
- Child safety playbook: triage within 1 minute; block re-uploads via hashing; notify trust partners as appropriate; preserve chain-of-custody.
- IP playbook: standardized notice-and-takedown; counter-notice; repeat infringer tracking; geo-restrictions if required by rights holder.
- Election integrity playbook: fast-track queues near elections; curated authoritative info panels; stricter misinformation thresholds during sensitive windows.
- Health misinformation playbook: partner with recognized health authorities; replace harmful claims with authoritative resources; demonetize borderline content.
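For the re-upload blocking step in the child-safety playbook, a simplified sketch follows. Production systems typically rely on perceptual hashes shared through industry programs rather than the exact SHA-256 match shown here; the exact-match approach is an assumption made to keep the example self-contained.

```python
# Simplified sketch of hash-based re-upload blocking; real deployments generally
# use perceptual hashing and shared industry hash lists.
import hashlib

KNOWN_VIOLATING_HASHES: set[str] = set()  # populated from prior removals / hash-sharing

def media_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_removed_media(data: bytes) -> None:
    KNOWN_VIOLATING_HASHES.add(media_hash(data))

def is_known_violating(data: bytes) -> bool:
    return media_hash(data) in KNOWN_VIOLATING_HASHES
```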
Creator-Facing Summary (to surface in Help Center)
- Do: keep streams respectful, use safety tools, label sensitive content, follow local laws, and protect privacy.
- Don’t: sexualize minors, share private information, promote hate/violence, show explicit sexual content, encourage dangerous behavior, sell illegal goods, or spread harmful misinformation.
- If something goes wrong: end the segment, use the “panic” or “pause” tool, remove offending guests, and contact support.
Implementation Notes (engineering/product)
- Maintain a short live delay and instant enforcement controls.
- Provide per-language models; fall back to human escalation when confidence is low.
- Expose robust creator and viewer safety controls by default.
- Instrument metrics: detection precision/recall, enforcement latency, appeal reversal rates, and user safety satisfaction.
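The metrics named above could be instrumented roughly as follows; the field names and the single aggregate record are assumptions for illustration.

```python
# Sketch of the safety metrics listed above: detection precision/recall,
# enforcement latency, and appeal reversal rate. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class SafetyMetrics:
    true_positives: int
    false_positives: int
    false_negatives: int
    enforcement_latencies_s: list  # seconds from detection to enforcement action
    appeals_total: int
    appeals_reversed: int

    def precision(self) -> float:
        denom = self.true_positives + self.false_positives
        return self.true_positives / denom if denom else 0.0

    def recall(self) -> float:
        denom = self.true_positives + self.false_negatives
        return self.true_positives / denom if denom else 0.0

    def median_latency_s(self) -> float:
        xs = sorted(self.enforcement_latencies_s)
        return xs[len(xs) // 2] if xs else 0.0

    def appeal_reversal_rate(self) -> float:
        return self.appeals_reversed / self.appeals_total if self.appeals_total else 0.0
```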
By using Coco - Live Stream & Video Chat, users agree to follow this policy and accept enforcement outcomes for violations. The platform will apply this policy consistently, prioritize user safety, and remain transparent about its decisions and processes.