This Data Security Policy describes the technical and organisational measures Adoptic employs to protect the confidentiality, integrity, and availability of data processed through the platform. It supplements our Privacy Policy and applies to all staff, contractors, and service providers.
2. Scope
This policy covers:
All Personal Information and Client Data processed through adoptic.online
All application, assessment, and report data uploaded to or generated by the platform
All Derived Data produced by analytical processes
All uploaded documents and files
All infrastructure, systems, and services
All personnel with access to production systems or client data
All development, testing, and staging environments
3. Data Classification
Classification | Description | Handling
Confidential | Client data, Personal Information, Derived Data, uploaded documents, credentials | Encrypted at rest & in transit; access restricted; logged
Internal | System configurations, internal notes, analytics, error logs | Adoptic personnel only; not shared externally
Public | Anonymised reports, marketing materials, this policy | No restrictions
All data is Internal by default. Client Data and uploads are always Confidential.
4. Infrastructure Security
4.1 Hosting Environment
Component | Provider | Location
Application servers | AWS / Railway (transitioning to AWS) | [REGION] / US
Database | PostgreSQL on [AWS RDS / Railway] | [REGION]
File storage | [AWS S3] | [REGION]
4.2 Network Security
TLS 1.2+ enforced on all external traffic; HTTP redirected to HTTPS
Database connections use encrypted transport
Administrative access restricted by SSH key and IP allowlisting
No direct public access to databases or internal services
4.3 Server Hardening
OS and dependencies kept up to date with security patches
Unnecessary services and ports disabled
Production credentials in environment variables, never in source code
4.4 Environment Separation
Environment | Purpose | Data
Production | Live platform | Real client data (Confidential)
Staging | Pre-release testing | Synthetic / anonymised only
Development | Local development | Synthetic / anonymised only
Real Client Data is never used in non-production environments without explicit authorisation.
5. Application Security
5.1 Authentication
Passwords hashed with PBKDF2-SHA256 — never stored, logged, or transmitted in plain text
Secure, HTTP-only session cookies with CSRF protection
Invite-only account creation via time-limited, usage-limited tokens
Session tokens regenerated on auth state changes
[PLANNED] Multi-factor authentication
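As an illustration of the salted, one-way hashing scheme described above, the sketch below uses Python's standard-library PBKDF2-SHA256. Parameter choices (iteration count, salt length, storage format) are illustrative, not Adoptic's production values.

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> str:
    """Derive a salted PBKDF2-SHA256 digest; only this string is ever stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${digest.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    _, iterations, salt_hex, digest_hex = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt_hex), int(iterations)
    )
    return hmac.compare_digest(candidate, bytes.fromhex(digest_hex))
```

Because verification recomputes the digest from the stored salt and iteration count, the plain-text password never needs to be stored or logged.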
5.2 Authorisation
Role-based access control — admin, staff, viewer roles
Organisation-level isolation — queries scoped to user's memberships
Least privilege — minimum access for each role
Invite management — admins control who joins and at what level
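Organisation-level isolation can be pictured as below: every data query is filtered by the authenticated user's memberships, so rows belonging to other organisations are never reachable. All names here are hypothetical; the real platform applies the equivalent filter at the database-query layer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    id: int
    org_ids: frozenset  # organisations this user is a member of

# Toy in-memory dataset standing in for organisation-owned records.
REPORTS = [
    {"id": 1, "org_id": 10, "title": "Q1 assessment"},
    {"id": 2, "org_id": 20, "title": "Q1 assessment"},
]

def reports_for(user: User) -> list:
    """Return only the reports owned by the user's organisations."""
    return [r for r in REPORTS if r["org_id"] in user.org_ids]
```

Scoping every query through a helper like this, rather than trusting each call site, is what makes the isolation an application-level guarantee.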
5.3 Input Validation
Parameterised queries prevent SQL injection
Jinja2 auto-escaping prevents XSS
CSRF tokens on all state-changing requests
File upload validation for type and size
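The parameterised-query control above can be sketched with Python's built-in sqlite3 driver (the platform uses PostgreSQL, but the placeholder mechanism is the same idea): user input is bound as data, never spliced into SQL text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO clients (name) VALUES (?)", ("Acme",))

def find_client(name: str) -> list:
    # The ? placeholder binds the value as data, so input such as
    # "x' OR '1'='1" is treated as a literal name, not as SQL.
    return conn.execute(
        "SELECT id, name FROM clients WHERE name = ?", (name,)
    ).fetchall()
```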
5.4 Secure Development
Version-controlled source code with code review
Secrets never committed to repositories
Dependencies reviewed and updated regularly
[PLANNED] Automated vulnerability scanning and periodic penetration testing
6. Data Protection
6.1 Encryption
State | Method
In transit | TLS 1.2+ (HTTPS)
At rest | AES-256 for database and storage volumes
Passwords | PBKDF2-SHA256 one-way hashing
Backups | Encrypted by hosting provider
6.2 Backups
Database backed up [FREQUENCY] by hosting provider
Encrypted, stored in a geographically separate location
Restoration tested [FREQUENCY]
Retained for [PERIOD] before automatic deletion
6.3 Data Isolation
Client data logically separated via foreign keys and application-level controls
All queries scoped to the authenticated user's organisation memberships
Anonymised demonstration data stored separately
6.4 Document & File Security
Uploaded documents in encrypted storage
Access requires authentication and organisation-level authorisation
[PLANNED] Malware scanning and file type restrictions
7. Data Science and Analytical Processing Security
7.1 Processing Controls
All analysis runs within the same secured infrastructure, with the exception of AI/LLM processing, which is performed via Amazon Bedrock (see Section 7.5)
Data transmitted to Amazon Bedrock is sent over encrypted channels (TLS) to the Bedrock API within ap-southeast-2 (Sydney)
Algorithms maintained in version-controlled source code with review before deployment
Derived Data subject to the same access controls as input Client Data
7.2 Model & Algorithm Security
Proprietary algorithms in private, access-controlled repositories
Parameters and configurations classified as Internal
[PLANNED] Version tracking for auditability; bias and fairness monitoring
7.3 De-identification Standards
All direct identifiers removed; indirect identifiers assessed and suppressed
Minimum threshold of [X] records before publishing aggregates
De-identified datasets reviewed before external use
Re-identification prohibited and contractually enforced
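The minimum-threshold control above (small-cell suppression) can be sketched as follows. The policy leaves the actual threshold [X] unspecified, so it is a parameter here; the value 5 in the usage example is purely illustrative.

```python
def suppress_small_cells(counts: dict, threshold: int) -> dict:
    """Suppress any aggregate count below the minimum cell size before
    publication, replacing it with None rather than the true value."""
    return {k: (v if v >= threshold else None) for k, v in counts.items()}
```

Suppressing small cells prevents aggregates from revealing information about individuals in sparsely populated groups.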
7.4 Training Data Governance
Records maintained of training datasets; stored separately from production
Clients may opt out of de-identified data use for model improvement
7.5 AI/LLM Processing Security
Adoptic's analytical pipeline includes AI processing via Amazon Bedrock, a managed AI service by AWS.
Provider security:
Amazon Bedrock is SOC 2 Type II and ISO/IEC 27001 certified
Data encrypted in transit (TLS) and at rest; AWS does not store or retain inputs/outputs for model training
Input and output data is not shared with model providers (Anthropic) or across AWS customers — fully isolated per account
Authentication & access:
Bedrock API access authenticated via AWS IAM credentials (least-privilege policies)
Credentials stored in environment variables, never in source code
Data residency:
All AI processing confined to ap-southeast-2 (Sydney, Australia)
No Client Data transferred outside Australia for AI processing
Data handling:
No persistent storage of inputs or outputs by the LLM provider
Ephemeral prompt caching for system prompts only (within-session, not persisted)
Each LLM invocation is independent — no cross-client data leakage
Operational controls:
Rate limiting on LLM API calls; retry logic (3 retries, 30s delay) for transient failures
LLM outputs validated and structured before incorporation into results
Invocations logged for audit (metadata only, not input/output content)
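The retry behaviour described above (3 retries with a 30-second delay for transient failures) can be sketched generically. The `invoke` callable and `TransientError` class stand in for the real Bedrock SDK call and its throttling/timeout exceptions; they are assumptions for illustration, not Adoptic's actual code.

```python
import time

class TransientError(Exception):
    """Stand-in for throttling or timeout errors raised by the provider SDK."""

def invoke_with_retries(invoke, payload, retries=3, delay=30.0, sleep=time.sleep):
    """Call the model API, retrying transient failures up to `retries` times
    with a fixed delay between attempts. Non-transient errors propagate."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return invoke(payload)
        except TransientError as exc:
            last_error = exc
            if attempt < retries:
                sleep(delay)
    raise last_error
```

Injecting the `sleep` function keeps the wrapper testable without real 30-second waits.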
8. Access Control
Admin access: Named individuals only, strong passwords + MFA, reviewed [FREQUENCY], all actions logged
Platform access: Client admins control membership; time-bound invitations; sessions expire after [DURATION]