Audio attribution and payments for contributors
The royalty scheme will become active once Generative Music Features are rolled out on the ESSNCE app. Until then, royalty payouts will not be processed.
However, all data contributed by opted-in creators is being tracked from day one. If your sounds are used to train a model before the generative features launch, you will still be eligible for royalties retroactively. Any contributions already in the training data will count in full once payouts begin.
Your contributions count now. Opt in today and every sound you contribute will be attributed to you when royalties go live.
ESSNCE tracks every sound from the moment it enters the app to the moment it reaches a listener on a third-party platform. Here's the journey:
Capture audio from any source: record from the mic, import files, or use the factory sound engine. The moment audio enters ESSNCE, it's yours.
Every sample gets a unique SHA-256 fingerprint based on its raw audio data. This is your proof of creation — immutable, permanent, automatic. First uploader owns the fingerprint.
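In code, the fingerprint is just a SHA-256 digest of the raw audio bytes. This is a minimal sketch (the function name and byte layout are illustrative, not the actual ESSNCE internals):

```python
import hashlib

def fingerprint(raw_audio: bytes) -> str:
    """Return the SHA-256 hex digest of the raw audio bytes.

    The same bytes always produce the same fingerprint, so the first
    uploader of a given digest can be registered as the creator.
    """
    return hashlib.sha256(raw_audio).hexdigest()

# Identical audio -> identical fingerprint; one changed byte -> a new one.
a = fingerprint(b"\x00\x01\x02\x03")
b = fingerprint(b"\x00\x01\x02\x03")
c = fingerprint(b"\x00\x01\x02\x04")
```

Because the digest is deterministic, no central authority has to "decide" who owns what: matching bytes always resolve to the same fingerprint.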
Every time your sound gets loaded onto a pad, dropped into a kit, used in a pattern, or placed in an arrangement — we log it. Every usage event is linked back to the original creator via the fingerprint.
When a track containing your sounds gets published to a third-party platform (Audius, LANDR, YouTube) via our integrations, the royalty event triggers. Every contributor to that track gets credited proportionally.
Royalties accumulate in your account. When you hit the payout threshold, we transfer directly to you via Stripe Connect. No middlemen, no delays.
The foundation of the royalty system is the content fingerprint. Every sample that enters ESSNCE gets hashed using SHA-256 — the same cryptographic standard used by blockchains and security protocols.
There are two ways your work earns money in ESSNCE:
When a track that contains your sample gets published to a third-party platform through ESSNCE's integrations, you earn a share of the revenue proportional to your contribution to that track.
Every pad, every drum track, every arrangement block — if your sound is in the final export, you're credited.
When you use ESSNCE — building kits, finger drumming, tweaking patterns — you generate training data that improves our AI models (source separation, drum classification, smart suggestions).
Revenue from premium AI features is pooled and distributed proportionally to contributors each month.
We believe you deserve to see exactly what happens with your data. No black boxes. Here's the complete technical journey from your phone to your royalty balance — explained for humans.
You tap pads, build kits, program beats, capture audio, tweak filters. Every creative action generates a tiny data snapshot — not your audio, but the decisions you made.
Data is written to a numbered queue in your app storage. Each entry gets a unique contribution ID (UUID) stamped with your account. Nothing leaves your device yet. If you're offline for weeks, the queue just grows.
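The on-device queue can be pictured like this. It's a sketch under assumptions (directory name, file layout, and field names are hypothetical), but it shows the key property: every entry gets a UUID and nothing is uploaded at this stage:

```python
import json
import time
import uuid
from pathlib import Path

QUEUE_DIR = Path("queue")  # numbered queue in local app storage (hypothetical path)

def enqueue(user_id: str, payload: dict) -> str:
    """Stamp a creative action with a unique contribution ID and
    append it to the on-device queue. Nothing is uploaded here."""
    contribution_id = str(uuid.uuid4())
    entry = {
        "contribution_id": contribution_id,
        "cloud_user_id": user_id,
        "captured_at": time.time(),
        "payload": payload,
    }
    QUEUE_DIR.mkdir(exist_ok=True)
    seq = len(list(QUEUE_DIR.glob("*.json")))  # next queue number
    (QUEUE_DIR / f"{seq:08d}.json").write_text(json.dumps(entry))
    return contribution_id

cid1 = enqueue("user_12345", {"event": "pad_hit"})
cid2 = enqueue("user_12345", {"event": "pad_hit"})
```

Offline, the queue simply keeps growing; the flush to the server happens later, in a separate step.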
When your phone connects to WiFi, the queue flushes to our self-hosted database (not a third-party cloud). We send audio embeddings — 128 numbers that mathematically describe the character of your sound — not the actual audio. Think of it as a fingerprint of the vibe, not a recording.
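The real embedding comes from a trained model, but the idea can be shown with a crude stand-in: boil a waveform down to 128 fixed-size numbers that describe its character but can't be played back. This sketch just averages the absolute level per slice, which is an assumption for illustration only:

```python
def crude_embedding(samples: list[float], dims: int = 128) -> list[float]:
    """Summarise a waveform as `dims` numbers (mean absolute level per slice).

    A stand-in for the model-derived embedding: lossy, fixed-size,
    and not reversible into the original audio.
    """
    step = max(1, len(samples) // dims)
    emb = []
    for i in range(dims):
        chunk = samples[i * step:(i + 1) * step] or [0.0]
        emb.append(sum(abs(s) for s in chunk) / len(chunk))
    return emb

emb = crude_embedding([0.5] * 1024)  # always 128 numbers, never the recording
```

Whatever the input length, the output is the same small, fixed shape: that's why the upload describes the vibe without containing the sound.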
Your data lands in one of 10 specialised tables — kit snapshots, drum patterns, performances, classifications, mix stems, factory feedback, sample lifecycles, pad layers, capture isolations, and usage events. Every row has your cloud_user_id so it's always traceable to you.
Each training cycle pools data from all contributors. The pipeline knows exactly which rows came from which user. It trains 5 models: drum classifier (kick vs snare), source separator (split stems), pattern generator (suggest beats), kit recommender (sounds that fit), and sample generator (synthesise new sounds).
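Because every row carries a user ID, the pipeline's per-user accounting is a straightforward tally. A minimal sketch, using the worked numbers from this page:

```python
from collections import Counter

def contribution_weights(row_user_ids: list[str]) -> dict[str, float]:
    """Weight per contributor = their rows / all rows in the training cycle."""
    counts = Counter(row_user_ids)
    total = sum(counts.values())
    return {uid: n / total for uid, n in counts.items()}

# 52,340 rows in the cycle, 847 of them yours.
rows = ["you"] * 847 + ["everyone_else"] * (52340 - 847)
weights = contribution_weights(rows)
# round(weights["you"] * 100, 2) -> 1.62
```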
Simple division. If a training cycle used 52,340 data points and 847 were yours, your weight is 1.62%. This gets recorded permanently in the contributions ledger — a receipt that can't be changed after the fact.
Revenue from premium AI features (source separation, auto-classification, smart pattern suggestions) accumulates in a monthly pool. At the end of each period, the pool is split across all contributors by weight.
Your weight (1.62%) × the pool ($820) = $13.28 credited to your royalty balance. When you pass the payout threshold, it transfers to your bank via Stripe Connect.
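The payout arithmetic above, spelled out (assuming, as the worked figures imply, that the weight is rounded to two decimal places as a percentage before multiplying):

```python
def monthly_credit(your_rows: int, total_rows: int, pool: float) -> float:
    """Credit = contribution weight (as a rounded percent) x monthly pool."""
    weight_pct = round(your_rows / total_rows * 100, 2)  # 847/52340 -> 1.62
    return round(weight_pct / 100 * pool, 2)

credit = monthly_credit(847, 52340, 820.00)  # 1.62% of $820 -> $13.28
```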
Here's a real example of what one kit snapshot looks like when it leaves your device. No audio — just metadata:
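A sketch of what such a snapshot could contain. The field names and values below are illustrative assumptions, not the actual ESSNCE schema; the point is what's present (IDs, fingerprints, settings, embeddings) and what's absent (audio):

```python
# Hypothetical kit-snapshot payload -- field names are illustrative.
# Note: fingerprints, settings, and embeddings, but no audio data.
kit_snapshot = {
    "contribution_id": "7f3a2c1e-0000-0000-0000-000000000000",
    "cloud_user_id": "user_12345",
    "captured_at": "2024-01-15T18:42:07Z",
    "pads": [
        {
            "pad_index": 0,
            "sample_fingerprint": "sha256:9b2f",  # truncated for display
            "embedding": [0.12, -0.48, 0.03],     # 128 values in practice
            "gain_db": -3.0,
            "filter_cutoff_hz": 8000,
        },
    ],
}
```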
Every action in the app that involves your audio is logged to your usage ledger:
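Conceptually, the ledger is an append-only list of events keyed by fingerprint. A minimal sketch (function and event names are assumptions for illustration):

```python
import time

usage_ledger: list[dict] = []  # append-only in the real system

def log_usage(fingerprint: str, event: str, creator_id: str) -> None:
    """Record one usage event, linked back to the creator via the fingerprint."""
    usage_ledger.append({
        "fingerprint": fingerprint,
        "event": event,  # e.g. "pad_load", "kit_add", "arrangement_place"
        "creator_id": creator_id,
        "logged_at": time.time(),
    })

log_usage("sha256-demo", "pad_load", "user_12345")
log_usage("sha256-demo", "kit_add", "user_12345")
```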
Royalties are triggered when work is published to a third-party platform through ESSNCE's integrations. Creating and exporting locally is always free — the royalty event only fires on platform publish.
Track everything in real time. Here's what's live today and what's coming next:
Audio fingerprinting, asset registry, usage ledger, and AI training data collection are all running. Every sound you create is being tracked and attributed to your account right now.
Audius direct upload, LANDR mastering, YouTube export. When these go live, every publish triggers a royalty event for all contributors to the track.
In-app dashboard showing your earnings, usage stats, contribution history, and pending payouts. Real-time visibility into how your sounds are performing.
Connect your bank account via Stripe. When your balance hits the payout threshold, earnings transfer directly to you. No invoicing, no waiting.
Yes. You own everything you create, capture, or import. ESSNCE registers your ownership via fingerprint — we never claim rights to your audio. The royalty system is about paying you when your work generates revenue, not about ownership transfer.
The fingerprint system detects duplicate audio automatically. The first person to upload a unique piece of audio is registered as the creator. Later uploads of identical audio are linked back to the original.
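First-uploader attribution falls out of the fingerprint naturally: the registry only accepts the first claim on a digest. A minimal sketch:

```python
registry: dict[str, str] = {}  # fingerprint -> first uploader

def register(fingerprint: str, uploader: str) -> str:
    """Register audio by fingerprint; later duplicates resolve to the original creator."""
    return registry.setdefault(fingerprint, uploader)

first = register("abc123", "ava")
second = register("abc123", "ben")  # identical audio, uploaded later
# Both calls resolve to "ava".
```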
No. Local exports are always free and unlimited. Royalty events only trigger when a track is published to a third-party platform through ESSNCE's built-in integrations (Audius, LANDR, YouTube).
Proportionally. If your sample makes up 1 of 8 sounds in a published track, you get 1/8 of the usage royalty pool for that publish event. AI credits are based on your contribution weight relative to all contributors in a training cycle.
Yes. You can disable ML data collection in settings. But opting out means you won't earn AI training credits. Your direct usage royalties are unaffected either way.
Yes — attribution tracking and AI data collection are live (Phase 1). When payouts launch, your historical contributions will count. The earlier you start creating, the more you accumulate.
TBD — we'll announce the threshold when Stripe Connect integration launches. It will be kept low to ensure creators get paid quickly.