The Imitation Game

Native Engine vs Wire-Protocol Emulation

A deep technical analysis of MongoDB Atlas, Azure Cosmos DB (vCore & RU), and Amazon DocumentDB — from architecture internals to real-world operational gotchas.

100% — MongoDB feature parity (Atlas)
10 — physical shard ceiling (Cosmos vCore)
20 GB — partition size limit (Cosmos RU)
3 — cloud providers (Atlas multi-cloud)

MongoDB Atlas — WiredTiger engine · B+ tree · MVCC · AWS / Azure / GCP

Cosmos DB for MongoDB (vCore) — Azure · custom engine · wire protocol adapter · managed disk

Cosmos DB for MongoDB (RU) — Azure serverless · ARS engine · multi-model · Request Units

Amazon DocumentDB — AWS · Aurora-style engine · log-structured storage · compute–storage split

Architecture & Storage Engine

The engine underneath determines everything — concurrency, compression, feature parity, and upgrade path.

Storage Engine
- Atlas: WiredTiger — B+ tree, MVCC, document-level locking
- Cosmos vCore: Custom engine behind a MongoDB wire protocol adapter
- Cosmos RU: Atom-Record-Sequence (ARS) — multi-model engine shared across the SQL, Cassandra, and Gremlin APIs
- DocumentDB: Aurora-style log-structured distributed storage; the compute layer emulates the wire protocol

JSON/BSON Handling
- Atlas: Native BSON throughout — storage, wire protocol, indexes. Full type fidelity (128-bit Decimal128, BinData, etc.)
- Cosmos vCore: Wire protocol translates BSON to an internal format. Most BSON types supported; some edge cases differ
- Cosmos RU: Stores data in ARS format. BSON types mapped internally; some type coercions (e.g., Decimal128 precision differences reported)
- DocumentDB: Internal JSON-like representation. Most BSON types supported, but behavior around NaN, Infinity, and rare types may differ

Concurrency
- Atlas: Document-level (WiredTiger tickets, read/write intent locks). True multi-granularity locking
- Cosmos vCore: Managed by the internal engine; less granular control exposed. No equivalent of WiredTiger tickets or serverStatus concurrency metrics
- Cosmos RU: Request-unit scheduler. No concept of locks — throughput governed by provisioned RUs per partition
- DocumentDB: Instance-level compute resources. No document-level locking semantics exposed; relies on compute–storage separation

Compression
- Atlas: Snappy (default), zstd, zlib — configurable per collection and index. Typically 3–5x reduction
- Cosmos vCore: Managed internally; no user control over the compression algorithm
- Cosmos RU: Automatic — not user-configurable
- DocumentDB: Zstandard dictionary-based compression (v8.0). Up to 5x ratio reported

Replication
- Atlas: Replica set with oplog-based replication. Configurable write concern (w:1, majority, etc.) and read preference (primary, secondary, nearest)
- Cosmos vCore: HA replicas managed by Azure. No direct oplog access. No read preference control
- Cosmos RU: Multi-region replication with 5 consistency levels. Change feed (not change streams). No oplog
- DocumentDB: 6 storage copies across 3 AZs. Up to 15 read replicas. Change streams supported (with behavioral differences)

Upgrade Path
- Atlas: In-place major version upgrades (7.0 → 8.0). Rolling upgrades with zero downtime on Atlas. Full access to every new MongoDB release
- Cosmos vCore: Compatibility version bumps managed by Azure. Feature availability lags MongoDB releases by months to years
- Cosmos RU: Compatibility versions set per account. Significant feature gaps vs native MongoDB at the same version
- DocumentDB: Engine versions (5.0, 8.0) are DocumentDB releases, not actual MongoDB binaries. Features are re-implemented, so "8.0 compat" ≠ MongoDB 8.0

The wire protocol is the binary format MongoDB drivers use to communicate with a MongoDB server over TCP. When your app calls db.orders.find({status: "active"}), the driver serializes it into an OP_MSG binary message, sends it over the wire, and expects a binary response in the same format.
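To make that concrete, here is a minimal sketch of what actually travels over the socket — hand-encoding an OP_MSG frame carrying the BSON document {ping: 1}. The field layout (16-byte header, flagBits, kind-0 body section, opcode 2013 for OP_MSG) follows the MongoDB wire protocol specification; in practice the driver does all of this for you.

```javascript
// Hand-encode a minimal OP_MSG frame carrying the BSON document {ping: 1}.
// Layout per the MongoDB wire protocol: 16-byte header (messageLength,
// requestID, responseTo, opCode), 4-byte flagBits, then a kind-0 body section.

function encodePingOpMsg(requestID) {
  // BSON for {ping: 1}: int32 docLength, element (0x10 "ping\0" int32 1), 0x00
  const doc = Buffer.alloc(15);
  doc.writeInt32LE(15, 0);          // document length
  doc.writeUInt8(0x10, 4);          // element type: int32
  doc.write("ping\0", 5, "ascii");  // element name, NUL-terminated
  doc.writeInt32LE(1, 10);          // element value: 1
  doc.writeUInt8(0x00, 14);         // end of document

  const msg = Buffer.alloc(16 + 4 + 1 + doc.length);
  msg.writeInt32LE(msg.length, 0);  // messageLength
  msg.writeInt32LE(requestID, 4);   // requestID
  msg.writeInt32LE(0, 8);           // responseTo (0 for a request)
  msg.writeInt32LE(2013, 12);       // opCode: OP_MSG
  msg.writeInt32LE(0, 16);          // flagBits
  msg.writeUInt8(0, 20);            // section kind 0: single body document
  doc.copy(msg, 21);
  return msg;
}

const frame = encodePingOpMsg(1);
```

An emulation layer only has to parse and produce frames like this one — nothing about the format forces it to execute the command the way MongoDB's engine would.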

MongoDB Atlas runs the actual MongoDB server binary. The driver talks to the real engine. Every command, operator, and edge case works because the server defined the protocol.

Cosmos DB and DocumentDB built entirely different database engines and then wrote a translation layer that accepts MongoDB wire protocol messages and returns responses in the expected format. It's putting a MongoDB-shaped mask on a fundamentally different database.

The consequences are real:

1. Feature gaps are inevitable — every operator and aggregation stage must be individually reimplemented. If they haven't built it, you get a runtime error.

2. Behavioral differences are silent — for "supported" operations, results can differ subtly: sort order for mixed types, null handling, Decimal128 precision. Tests pass; production reveals the divergence.

3. Explain plans diverge — explain() output looks similar, but the underlying optimizer is completely different. An index that performs well on MongoDB might be ignored entirely.

4. Version numbers mislead — "DocumentDB 8.0 compatible" means their layer accepts connections from MongoDB 8.0 drivers. It does not mean they support MongoDB 8.0 features.

Analogy: It's like Google Docs claiming "Microsoft Word compatible" because it opens .docx files. Basic documents render fine. Complex macros, tracked changes, and advanced formatting break silently. The file format is the same; the engine is not.
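The mixed-type sort divergence in point 2 is concrete: MongoDB compares values across BSON types in a documented bracket order (null before numbers, numbers before strings, booleans after objects, and so on), while an engine with a different internal format may bracket them differently. A simplified sketch of the native ordering — the type ranks below are a subset of MongoDB's full comparison order, for illustration:

```javascript
// Simplified sketch of MongoDB's cross-type sort order: values are ranked by
// BSON type bracket first, then compared within the bracket. Real servers
// have more brackets (MinKey, binary, dates, regex, MaxKey, ...).
const typeRank = (v) => {
  if (v === null || v === undefined) return 0; // null sorts first
  if (typeof v === "number") return 1;
  if (typeof v === "string") return 2;
  if (typeof v === "object") return 3;
  if (typeof v === "boolean") return 4;
  return 5;
};

function bsonCompare(a, b) {
  const ra = typeRank(a), rb = typeRank(b);
  if (ra !== rb) return ra - rb;           // different brackets: bracket wins
  if (ra === 1) return a - b;              // numbers: numeric compare
  if (ra === 2) return a < b ? -1 : a > b ? 1 : 0; // strings: lexicographic
  return 0;                                // object/bool comparison elided
}

const mixed = ["banana", 7, null, 2, "apple", true];
const sorted = [...mixed].sort(bsonCompare);
// Groups by type bracket: null, then numbers, then strings, then booleans
```

An engine that sorts strings before numbers, or interleaves types, returns a different document order for the exact same query and index — and no error is raised.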

Why the engine matters

Cosmos DB and DocumentDB do not run the MongoDB server. They implement the wire protocol on top of fundamentally different storage engines. This means every MongoDB feature must be explicitly re-implemented — and many aren't. When your application hits an unsupported operator, the error surfaces at runtime, not at deployment time.
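Since the gaps surface only at runtime, one defensive pattern is a deploy-time pre-flight that diffs the aggregation stages your code uses against a per-platform blocklist. The sketch below is illustrative — the blocklists are drawn from the gaps listed later in this article and should be verified against each vendor's current documentation:

```javascript
// Deploy-time pre-flight: scan aggregation pipelines for stages a target
// platform is known not to support, so failures are caught before runtime.
// Blocklists are illustrative; verify against each vendor's current docs.
const UNSUPPORTED = {
  cosmosRU: new Set(["$facet", "$graphLookup", "$merge", "$out",
                     "$unionWith", "$setWindowFields"]),
  documentDB: new Set(["$facet", "$graphLookup", "$bucketAuto",
                       "$sortByCount", "$setWindowFields", "$unionWith"]),
};

function unsupportedStages(pipeline, platform) {
  const blocked = UNSUPPORTED[platform] ?? new Set();
  return pipeline
    .map((stage) => Object.keys(stage)[0]) // each stage doc has one key
    .filter((name) => blocked.has(name));
}

const pipeline = [
  { $match: { status: "active" } },
  { $graphLookup: { from: "employees", startWith: "$managerId",
                    connectFromField: "managerId", connectToField: "_id",
                    as: "reportingChain" } },
];
const problems = unsupportedStages(pipeline, "documentDB");
```

Run this in CI against every pipeline your application ships; it cannot catch silent behavioral differences, but it turns hard feature gaps into build failures instead of production errors.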

Deployment Topology — How They Actually Work

Replica-set deployments:

MongoDB Atlas — Application → MongoDB driver (native BSON wire protocol) → replica set: primary plus two secondaries, all running WiredTiger. Oplog replication, document-level locks. Runs on AWS, Azure, or GCP.

Cosmos DB vCore — Application → wire protocol adapter (translates to the internal engine) → vCore cluster: primary (custom engine) plus a managed HA standby on Azure managed disk. No oplog, no read preference. Azure only.

Cosmos DB RU — Application → gateway/REST (RU-metered access) → ARS storage engine, split into logical partitions of ≤20 GB each. Partition key is immutable. Azure only.

Amazon DocumentDB — Application → wire protocol emulation (Aurora-style compute layer) → one writer instance plus reader replicas over shared distributed storage (6 copies across 3 AZs, log-structured). AWS only.
Sharded deployments:

MongoDB Atlas — Application → mongos router (query routing and merge) → shards, each a WiredTiger replica set (primary plus two secondaries), with config servers managed automatically on Atlas. No shard limit; online resharding.

Cosmos DB vCore — Application → Azure-managed router → up to 10 physical shards, each a custom-engine vCore node with an HA standby on Azure managed disk. Shard key is immutable at creation.

Cosmos DB RU — Application → gateway/REST (RU-metered; fan-out queries) → ARS engine, auto-partitioned into physical partitions that hold logical partitions of ≤20 GB each. No manual shard control; cross-partition queries fan out; the partition key is chosen once — forever.

DocumentDB Elastic — Application → elastic cluster router (shard-aware routing) → shards, each with its own compute and storage (6 copies across 3 AZs). Shard key is immutable, and Elastic Clusters are a separate tier from standard DocumentDB.

Scalability — Horizontal & Vertical

How each platform grows with your workload, and where the ceilings are.

Horizontal Scaling
- Atlas: Native sharding — hash, range, zone-based. Add shards on the fly. Auto-balancer redistributes chunks. No hard shard limit
- Cosmos vCore: Sharding supported (M60+ tier). Max 10 physical shards. Automatic rebalancing. Shard key selected at creation
- Cosmos RU: Automatic partitioning by partition key. Transparent split when a partition exceeds the threshold. No shard management — but no control either
- DocumentDB: Elastic Clusters for sharding (separate tier). Standard instances scale vertically only — no native sharding

Vertical Scaling
- Atlas: M10 → M700 (768 GB RAM). Scale with zero downtime via a rolling process
- Cosmos vCore: Up to 128 vCores / 2 TB RAM. Burstable and general-purpose tiers. Scaling may require downtime
- Cosmos RU: N/A — throughput governed by RUs, not instance size. Autoscale between 10% and 100% of the provisioned max
- DocumentDB: Instance classes from db.t3.medium to db.r6g.16xlarge. Scaling requires a reboot

Storage Ceiling
- Atlas: No hard cap (sharded clusters scale to petabytes). Per-replica up to 4 TB by default, extendable
- Cosmos vCore: Up to 32 TB across the cluster (configurable). 2 TB per shard on certain tiers
- Cosmos RU: 20 GB per logical partition. Unlimited storage overall, but data must be modeled around this constraint
- DocumentDB: 128 TB per cluster. Auto-grows in 10 GB increments

Geo-Distribution
- Atlas: Global Clusters with zone sharding. Pin data to specific regions. Workload isolation across clouds (AWS, Azure, GCP simultaneously)
- Cosmos vCore: Azure-region replicas for HA. No multi-region writes. No cross-cloud
- Cosmos RU: Multi-region writes built in. 5 consistency levels for global reads. Azure-only
- DocumentDB: Global Clusters for cross-region replication. Single-region writes. AWS-only

Auto-Scaling
- Atlas: Auto-scaling adjusts the cluster tier based on CPU/memory utilization. Configurable thresholds and limits
- Cosmos vCore: Burstable tier available. No auto-scaling of the compute tier
- Cosmos RU: RU autoscale between 10–100% of the provisioned max. Serverless mode for intermittent workloads
- DocumentDB: No native auto-scaling of instances. Pair with AWS Auto Scaling for read replicas

Online Resharding
- Atlas: Live resharding (MongoDB 5.0+). Change the shard key without downtime. Automatic chunk migration
- Cosmos vCore: Shard key changes require re-creating the collection
- Cosmos RU: Partition key is immutable after creation. Changing it requires a data migration
- DocumentDB: Elastic Cluster shard key immutable after creation

The 20 GB Trap (Cosmos DB RU)

Each logical partition in Cosmos DB RU mode caps at 20 GB. If your data model concentrates writes on a single partition key value beyond 20 GB, you hit a hard wall. Rearchitecting an immutable partition key in production is one of the most painful migrations in cloud databases — it typically requires a parallel deployment and full data copy.
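The ceiling is at least easy to forecast. A back-of-envelope sketch — the inputs below (current size, write rate, document size) are illustrative, not drawn from any real workload:

```javascript
// Back-of-envelope: days until a hot logical partition hits the 20 GB cap.
// All inputs are illustrative — plug in your own write rate and doc size.
const PARTITION_CAP_BYTES = 20 * 1024 ** 3; // 20 GB per logical partition

function daysUntilCap(currentBytes, docsPerDay, avgDocBytes) {
  const growthPerDay = docsPerDay * avgDocBytes;
  return Math.floor((PARTITION_CAP_BYTES - currentBytes) / growthPerDay);
}

// A hot tenant already at 12 GB, ingesting 200k documents/day at ~2 KB each:
const runway = daysUntilCap(12 * 1024 ** 3, 200_000, 2048);
```

Running this against your largest partition key values turns the hard wall into a known date — and tells you how long you have to plan the rearchitecture.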

Platform Features

Beyond basic CRUD — the integrated capabilities that determine whether you need additional infrastructure.

Full-Text Search
- Atlas: ✓ Atlas Search — Lucene-based. Fuzzy, autocomplete, facets, scoring, highlights, compound queries. Integrated in the aggregation pipeline via $search
- Cosmos vCore: ◉ Basic — simple text indexes. No fuzzy matching, no facets, no scoring functions
- Cosmos RU: No native text search. Requires an external service (Azure AI Search)
- DocumentDB: ◉ Text Index v2 (8.0). Basic search. No fuzzy matching, no scoring, no facets

Vector Search
- Atlas: ✓ Atlas Vector Search — HNSW + IVF indexes, $vectorSearch stage, pre-filter support, up to 4096 dims, integrated with the search pipeline
- Cosmos vCore: ◉ HNSW and IVF indexes, up to 16,000 dims. $search with vector queries
- Cosmos RU: ◉ Vector search via DiskANN-based indexes. Integrated with the Cosmos DB API
- DocumentDB: ◉ Vector search (8.0) — $vectorSearch stage. HNSW indexes. 30x faster parallel index builds claimed

Embedding & Reranking Models
- Atlas: Hosts embedding models (e.g., Nomic) and reranking models (e.g., Cohere) directly — vectorize at query/index time without external calls
- Cosmos vCore: External Azure OpenAI / AI service required
- Cosmos RU: External Azure OpenAI / AI service required
- DocumentDB: External Bedrock / SageMaker required

Time Series Collections
- Atlas: Native time-series engine with automatic bucketing, columnar compression, and windowed inserts/reads. Purpose-built storage
- Cosmos vCore / Cosmos RU / DocumentDB: Not supported

Change Streams
- Atlas: Full change streams — watch a database, collection, or deployment. Resume tokens. Pre- and post-images. Filters with match expressions
- Cosmos vCore: ◉ Preview — change streams with wallTime support (gated preview, Sep 2024). Not GA
- Cosmos RU: ◉ Change Feed — a different API from MongoDB change streams. No resume-token interop. Pull model only
- DocumentDB: ◉ Supported with differences — 24-hour retention by default, no pre-image support, limited filtering

Aggregation Pipeline
- Atlas: ✓ Full — all stages including $graphLookup, $facet, $bucket, $setWindowFields, $densify, $fill, $merge, $unionWith
- Cosmos vCore: ◉ Most stages — adds $setWindowFields, $merge, $fill in recent updates. No $accumulator, $function
- Cosmos RU: ◉ Partial — many stages missing. Cross-partition $lookup limited. No $out/$merge across partitions
- DocumentDB: ◉ Partial — missing $facet, $bucketAuto, $setWindowFields, $graphLookup, $unionWith, $sortByCount. $merge added in 8.0

Queryable Encryption
- Atlas: Client-Side Field Level Encryption (CSFLE) + Queryable Encryption. Encrypted fields remain queryable (equality, range) without server-side decryption
- Cosmos vCore: Azure-level encryption (TDE, CMK). No CSFLE. No encrypted query support
- Cosmos RU: Azure-level encryption. No CSFLE
- DocumentDB: AWS-level encryption (KMS, TDE). No CSFLE. No queryable encryption

Atlas Stream Processing
- Atlas: Continuous processing of streaming data using MongoDB aggregation syntax. Connects Kafka, Atlas triggers, and other sources
- Cosmos vCore / Cosmos RU / DocumentDB: No integrated equivalent

Charts & Visualization
- Atlas: ✓ Atlas Charts — native, embedded dashboards directly on your data. No ETL required
- Cosmos vCore / Cosmos RU: Use Power BI or third-party tools
- DocumentDB: Use QuickSight or third-party tools

Online Archive
- Atlas: Automated tiering of cold data to cheaper storage. Queryable via federated queries alongside hot data
- Cosmos vCore / Cosmos RU / DocumentDB: No integrated equivalent

The Platform Gap

Atlas Search, Vector Search, embedded ML models, time series, stream processing, queryable encryption, and online archive are integrated features that run co-located with your data. With Cosmos DB or DocumentDB, each of these requires stitching in a separate service — Azure AI Search + Azure OpenAI + Event Hubs, or OpenSearch + Bedrock + Kinesis — adding latency, operational overhead, and data synchronization complexity.

API Compatibility & Wire Protocol

Claiming "MongoDB compatible" and being MongoDB are fundamentally different things.

Server Version
- Atlas: Actual MongoDB 7.0 / 8.0 binary (current release)
- Cosmos vCore: Wire protocol compat: 5.0, 6.0, 7.0. Not the MongoDB binary — features reimplemented
- Cosmos RU: Wire protocol compat: 3.6, 4.0, 4.2, 6.0. Major feature gaps at every level
- DocumentDB: "Compatibility version" 5.0, 8.0. Not MongoDB — an Aurora-based reimplementation

Driver Support
- Atlas: All official MongoDB drivers fully supported and tested. Community drivers work
- Cosmos vCore: Official drivers work with caveats. Some features (e.g., CSFLE, sessions) may not work
- Cosmos RU: Official drivers work. Many advanced features return errors or behave differently
- DocumentDB: Official drivers work for basic operations. See MongoDB's DocumentDB compatibility matrix

mongosh / Compass
- Atlas: Full support
- Cosmos vCore: Connects, but some commands fail
- Cosmos RU: Connects, but many admin commands are unsupported
- DocumentDB: Connects, but explain plans and the profiler differ

mongodump / mongorestore
- Atlas: Full support. Also: Cloud Backup with PIT restore
- Cosmos vCore: Works for data; metadata and options may not round-trip
- Cosmos RU: Not supported. Use Azure Data Factory or custom migration
- DocumentDB: Partially works. Some index types and options are lost

Transactions (multi-document)
- Atlas: Multi-document ACID across collections, databases, and shards. Snapshot isolation. 60s default lifetime (configurable)
- Cosmos vCore: Single non-sharded collection only. No cross-collection. Max 30s lifetime
- Cosmos RU: Supported within a single logical partition only
- DocumentDB: Supported across statements within a single cluster. 1-minute timeout

Indexes
- Atlas: Compound, multikey, text, geospatial (2d, 2dsphere), hashed, wildcard, partial, TTL, unique, hidden, clustered, columnstore
- Cosmos vCore: Compound, multikey, geospatial (GA 2024), wildcard, unique, TTL. No hashed, limited text, limited sparse
- Cosmos RU: Auto-indexed by default. Limited control. No compound index creation. Wildcard policy
- DocumentDB: Compound, multikey, TTL, unique, sparse. No partial indexes. No geospatial (2dsphere limited). No hashed. No wildcard

Index Builds
- Atlas: Background/foreground (hybrid builds in 4.2+). Online, no lock on the collection
- Cosmos vCore: Online index builds supported. Long-running builds may impact performance
- Cosmos RU: Automatic — indexes created asynchronously. May consume RUs during the build
- DocumentDB: Background builds supported. 30x faster parallel builds claimed (8.0)

Unsupported Aggregation Stages

Stages available in MongoDB 7.0+ that are absent or limited in compatibility layers:

Cosmos DB vCore — Missing

$accumulator · $function · $where · $searchMeta

Cosmos DB RU — Missing

$facet · $graphLookup · $merge · $out · $unionWith · $setWindowFields · $bucket · $densify

DocumentDB — Missing

$facet · $graphLookup · $bucketAuto · $sortByCount · $setWindowFields · $unionWith · $collStats · $planCacheStats

See the Difference: Same Query, Four Platforms

MongoDB Atlas:

    db.employees.aggregate([
      { $graphLookup: {
          from: "employees",
          startWith: "$managerId",
          connectFromField: "managerId",
          connectToField: "_id",
          as: "reportingChain"
      }}
    ])
    // ✓ Full reporting chain returned

Cosmos DB vCore:

    db.employees.aggregate([{ $graphLookup: { ... } }])
    // ✗ MongoCommandException:
    //   Unrecognized pipeline stage name: '$graphLookup'

Cosmos DB RU:

    db.employees.aggregate([{ $graphLookup: { ... } }])
    // ✗ Unsupported stage: $graphLookup

DocumentDB:

    db.employees.aggregate([{ $graphLookup: { ... } }])
    // ✗ Unrecognized expression: '$graphLookup'
MongoDB Atlas:

    db.orders.aggregate([
      { $facet: {
          byStatus:    [{ $sortByCount: "$status" }],
          byRegion:    [{ $group: { _id: "$region", n: { $sum: 1 } } }],
          priceRanges: [{ $bucket: { groupBy: "$total", boundaries: [0, 100, 500] } }]
      }}
    ])
    // ✓ All 3 facets, one round-trip

Cosmos DB vCore:

    db.orders.aggregate([{ $facet: { ... } }])
    // ✓ Works in recent versions
    // Some sub-pipeline stages may have restrictions

Cosmos DB RU:

    db.orders.aggregate([{ $facet: { ... } }])
    // ✗ Unsupported: $facet
    // Must split into 3 queries — 3x RU consumption

DocumentDB:

    db.orders.aggregate([{ $facet: { ... } }])
    // ✗ Unsupported: '$facet'
    // 3 separate queries + client-side merge
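The "client-side merge" those last two panels mention looks roughly like this: each facet becomes its own query, and the application reassembles the $facet-shaped result. The sketch below stands in three in-memory computations for what would be three server round-trips; the helper names and sample documents are illustrative:

```javascript
// Client-side replacement for a $facet with three sub-pipelines, for engines
// that reject the stage. Each key of `faceted` would be its own round-trip.
const orders = [
  { status: "paid", region: "EU", total: 80 },
  { status: "paid", region: "US", total: 250 },
  { status: "open", region: "EU", total: 40 },
];

// Roughly what $sortByCount / $group {$sum: 1} produce per distinct value.
const countBy = (docs, key) => {
  const acc = new Map();
  for (const d of docs) acc.set(d[key], (acc.get(d[key]) ?? 0) + 1);
  return [...acc].map(([_id, count]) => ({ _id, count }));
};

// Roughly what $bucket produces for half-open boundary ranges.
const bucket = (docs, field, boundaries) =>
  boundaries.slice(0, -1).map((lo, i) => ({
    _id: lo,
    count: docs.filter((d) => d[field] >= lo && d[field] < boundaries[i + 1]).length,
  }));

const faceted = {
  byStatus: countBy(orders, "status"),
  byRegion: countBy(orders, "region"),
  priceRanges: bucket(orders, "total", [0, 100, 500]),
};
```

Beyond the extra round-trips and RU cost, note what is lost: the three facets no longer read from a single consistent snapshot, so a write landing between queries can make the facets disagree.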
MongoDB Atlas:

    db.products.aggregate([
      { $vectorSearch: {
          index: "vec_idx",
          path: "embedding",
          queryVector: [0.12, -0.45, ...],
          numCandidates: 100,
          limit: 10,
          filter: { category: "electronics" }
      }},
      { $project: {
          score: { $meta: "vectorSearchScore" },
          name: 1,
          description: 1
      }}
    ])
    // ✓ Pre-filtered vector search with relevance scoring

Cosmos DB vCore:

    // Different API syntax:
    db.products.aggregate([
      { $search: {
          "cosmosSearch": {
            vector: [0.12, -0.45, ...],
            path: "embedding",
            k: 10
          }
      }}
    ])
    // ✗ Different syntax than Atlas
    // No hybrid text+vector, no pre-filter in the same stage

Cosmos DB RU:

    // DiskANN-based vector:
    db.products.aggregate([
      { $search: {
          "cosmosSearch": {
            vector: [0.12, ...],
            path: "embedding",
            k: 10
          }
      }}
    ])
    // ✗ No text search at all
    // Partition-scoped only

DocumentDB:

    db.products.aggregate([
      { $vectorSearch: {
          queryVector: [0.12, ...],
          path: "embedding",
          limit: 10
      }}
    ])
    // ✗ Basic vector only (8.0)
    // No hybrid text+vector, no pre-filter, no scoring
MongoDB Atlas:

    const session = client.startSession();
    session.startTransaction();
    await orders.insertOne({ orderId: 123, ... }, { session });
    await inventory.updateOne({ sku: "A1" }, { $inc: { qty: -1 } }, { session });
    await payments.insertOne({ orderId: 123, ... }, { session });
    await session.commitTransaction();
    // ✓ ACID across 3 collections
    // Works on sharded clusters too

Cosmos DB vCore:

    const session = client.startSession();
    session.startTransaction();
    await orders.insertOne(..., { session });
    await inventory.updateOne(..., { session });
    // ✗ Error: transactions across multiple
    //   collections are not supported.
    // Max lifetime: 30 seconds. Single collection only.

Cosmos DB RU:

    const session = client.startSession();
    session.startTransaction();
    await orders.insertOne(..., { session });
    await inventory.updateOne(..., { session });
    // ✗ Transactions limited to a single logical partition.
    // Cross-partition or cross-collection = not supported.

DocumentDB:

    const session = client.startSession();
    session.startTransaction();
    await orders.insertOne(..., { session });
    await inventory.updateOne(..., { session });
    await payments.insertOne(..., { session });
    await session.commitTransaction();
    // ✓ Cross-collection supported
    // 1-minute timeout limit
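Even where transactions are supported, clients should retry on transient failures. MongoDB servers attach the TransientTransactionError label to retryable failures, and the official drivers' session.withTransaction implements the retry loop for you. A minimal standalone sketch of that loop — shown synchronously so it stays self-contained, with a simulated write conflict standing in for a real server error:

```javascript
// Retry loop keyed on the TransientTransactionError label — the pattern the
// official drivers implement inside session.withTransaction(). Real driver
// calls are async; this sketch is synchronous for brevity.
function withTransientRetry(txnFn, maxAttempts = 3) {
  for (let attempt = 1; ; attempt++) {
    try {
      return txnFn(); // run the whole transaction body
    } catch (err) {
      const transient = err.errorLabels?.includes("TransientTransactionError");
      if (!transient || attempt >= maxAttempts) throw err;
      // otherwise: loop and retry the transaction from the top
    }
  }
}

// Simulate a commit that hits a write conflict once, then succeeds:
let attempts = 0;
const result = withTransientRetry(() => {
  if (attempts++ === 0) {
    const err = new Error("WriteConflict");
    err.errorLabels = ["TransientTransactionError"]; // label set by the server
    throw err;
  }
  return "committed";
});
```

Whether the emulation layers attach the same error labels with the same semantics is exactly the kind of behavioral detail worth verifying before relying on driver-level retry logic.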

The Fine Print — Real Limitations

The things you discover in production, not in proof-of-concept. Each of these has caused real migration decisions.

Azure Cosmos DB

Immutable Partition Key (RU)

Once you choose a partition key on a Cosmos DB RU container, it cannot be changed. If your access patterns evolve (and they will), the only fix is creating a new container and migrating all data. This is a fundamental architectural constraint, not a missing feature.


20 GB Logical Partition Ceiling (RU)

Each unique partition key value can hold a maximum of 20 GB. For workloads with hot keys — e.g., a high-volume tenant in a SaaS app — this creates a hard ceiling that can't be solved by adding throughput. Your only option is rearchitecting the partition key.


Cross-Partition Queries Are Fan-Out (RU)

Any query that doesn't specify the partition key fans out to every physical partition in parallel. A 5 RU query on 10 partitions becomes 50+ RUs. This makes analytics, reporting, and ad-hoc queries disproportionately expensive in throughput.
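The 5 RU → 50 RU figure is simple multiplication, but it compounds silently as the container grows, because splits add physical partitions without any change to your query. A sketch with illustrative numbers:

```javascript
// Cross-partition fan-out: a query that omits the partition key is charged
// on every physical partition it touches. All numbers are illustrative.
const fanOutCharge = (perPartitionRU, physicalPartitions) =>
  perPartitionRU * physicalPartitions;

// The same 5 RU query, as the container splits over time:
const today = fanOutCharge(5, 10); // 10 physical partitions
const later = fanOutCharge(5, 40); // after growth forces more splits
```

The query text never changes, yet its cost quadruples — which is why RU budgets for reporting workloads need to be modeled against the partition count you expect at scale, not the one you have today.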


No Cross-Collection Transactions (vCore)

Multi-document transactions in Cosmos DB vCore are limited to a single, non-sharded collection. If your business logic requires atomic operations across orders, inventory, and payments — a common pattern — you must redesign into a single-collection schema or accept eventual consistency.


Maximum 10 Physical Shards (vCore)

Horizontal scaling caps at 10 physical shards. For workloads that grow beyond what 10 shards can serve, there is no path forward without rearchitecting. MongoDB Atlas has no hard shard limit.


Change Streams Still in Preview (vCore)

As of late 2024, change streams in Cosmos DB vCore are in "gated preview" — not GA. Production CDC pipelines cannot rely on a preview feature with no SLA guarantee.


Amazon DocumentDB

Indexes Not Used for Common Operators

DocumentDB does not leverage indexes for queries using $ne, $nin, $nor, $not, $exists, or $distinct. These queries result in full collection scans — even with an index on the queried field. This is a fundamental engine limitation, not a configuration issue.
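One common mitigation is rewriting the negation into a positive, index-friendly predicate when the field's value domain is closed and known. The helper below is hypothetical, purely for illustration — always confirm the resulting plan with explain() on your own cluster:

```javascript
// Rewrite {field: {$ne: excluded}} into an $in over the remaining known
// values — only valid when the field's value domain is closed.
// Hypothetical helper for illustration; not a DocumentDB or driver API.
function rewriteNe(field, excluded, domain) {
  const allowed = domain.filter((v) => v !== excluded);
  return { [field]: { $in: allowed } };
}

const STATUSES = ["active", "paused", "archived", "deleted"];
// Instead of {status: {$ne: "deleted"}} (a collection scan on DocumentDB):
const filter = rewriteNe("status", "deleted", STATUSES);
```

The rewrite only works for enum-like fields; for open-ended domains ($exists, $nor, ad-hoc $ne values) there is no equivalent trick, and the scan cost has to be absorbed or the schema redesigned.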


No Client-Side Field Level Encryption

CSFLE and Queryable Encryption are MongoDB features that encrypt data before it leaves the driver. DocumentDB doesn't support them. For regulated industries (finance, healthcare) that require application-layer encryption, this is a disqualifier.


Missing Aggregation Stages

Even in version 8.0, DocumentDB lacks $facet, $graphLookup, $setWindowFields, $unionWith, $bucketAuto, and $sortByCount. Applications using these stages for analytics, graph traversal, or windowed operations cannot migrate without rewriting queries.


No Geospatial Query Support

$near, $geoWithin with $center/$polygon/$box, and $geoIntersects are unsupported. Location-based applications (delivery, ride-sharing, store locators) cannot run natively on DocumentDB.


No Time Series, No Capped Collections

Time series collections, capped collections, and GridFS are all absent. IoT ingestion pipelines, log stores, and file-serving applications need alternative architectures or external services.


"8.0 Compatible" ≠ MongoDB 8.0

DocumentDB version numbers refer to its own releases, not MongoDB releases. "8.0 compatible" means it supports some MongoDB 8.0 driver features, but many 8.0 features (compound wildcard indexes, encrypted range queries, cluster-to-cluster sync) are absent. Always test against the compatibility matrix.



Shared Limitations (Both)

Silent Behavioral Differences

Because neither runs the actual MongoDB engine, edge cases in sorting, type coercion, collation, and null handling may differ silently. Your test suite passes, but production data with unexpected types or Unicode reveals divergent behavior. There is no compatibility test suite that covers all edge cases.


No mongodump/mongorestore Round-Trip Guarantee

Backup and restore with native MongoDB tools may lose index options, validation rules, or collection-level settings. You're dependent on each vendor's proprietary backup mechanism, making cross-platform migration harder, not easier.


Vendor Lock-In

Cosmos DB runs only on Azure. DocumentDB runs only on AWS. MongoDB Atlas runs on AWS, Azure, and GCP — often simultaneously in the same cluster. If your multi-cloud or exit strategy matters, compatibility layers lock you to a single cloud provider.


Observability & Monitoring

What you can see determines what you can fix.

Real-Time Performance Panel
- Atlas: Live view of operations, latency, connections, opcounters. Drill into slow queries in real time
- Cosmos vCore: Azure Monitor metrics with lag. No live operation inspector
- Cosmos RU: RU consumption metrics in the Azure Portal. Partition-level heat maps
- DocumentDB: CloudWatch metrics. Performance Insights for top queries

Query Profiler
- Atlas: Database Profiler captures all operations above a threshold, accessible via db.system.profile. The Atlas UI auto-detects slow queries and suggests indexes
- Cosmos vCore: $explain supported. No centralized profiler or index advisor
- Cosmos RU: RU charge per operation is visible. Limited query plan detail
- DocumentDB: Profiler supported. Explain plans work but differ from MongoDB's optimizer output

Index Recommendations
- Atlas: Performance Advisor analyzes slow queries and recommends optimal indexes with one-click creation
- Cosmos vCore: Manual analysis required
- Cosmos RU: N/A — auto-indexed
- DocumentDB: Manual analysis required

Audit Logging
- Atlas: Granular audit filters: authentication, CRUD, DDL, admin operations. Output to Atlas, file, or syslog. JSON-structured
- Cosmos vCore: Azure diagnostic logging. Less granular than MongoDB audit
- Cosmos RU: Azure diagnostic logging to Log Analytics or Event Hub
- DocumentDB: CloudWatch-based audit logs. DML and DDL operations. Adds latency when enabled

Alerting
- Atlas: Built-in alert conditions: replication lag, oplog window, disk %, connections, slow queries. PagerDuty/Slack/webhook integrations
- Cosmos vCore: Azure Monitor alerts on standard metrics
- Cosmos RU: Azure Monitor alerts. RU-based threshold alerts
- DocumentDB: CloudWatch alarms. EventBridge for operational events

Third-Party Integration
- Atlas: Datadog, New Relic, Prometheus/Grafana, Splunk, PagerDuty — native integrations via the Atlas Admin API and MongoDB metrics endpoint
- Cosmos vCore: Azure ecosystem only (Monitor, Log Analytics, Grafana via Azure Managed Grafana)
- Cosmos RU: Azure ecosystem
- DocumentDB: CloudWatch → Datadog/Grafana possible but indirect

Enterprise Readiness

SLAs, compliance, high availability, and operational maturity.

SLA
- Atlas: 99.995% multi-region; 99.99% single-region multi-node
- Cosmos vCore: 99.99% with HA enabled
- Cosmos RU: 99.999% multi-region with multi-write
- DocumentDB: 99.99%

Compliance
- Atlas: SOC 2, ISO 27001, HIPAA, PCI DSS, FedRAMP, GDPR, CSA STAR. 30+ certifications
- Cosmos vCore: Inherits Azure compliance: SOC, ISO, HIPAA, FedRAMP, etc.
- Cosmos RU: Same Azure compliance portfolio
- DocumentDB: Inherits AWS compliance: SOC, ISO, HIPAA, PCI, FedRAMP

Backup & Point-in-Time Restore
- Atlas: Continuous cloud backup with PIT restore (1-second granularity). Configurable retention. Snapshot backups. Queryable backups
- Cosmos vCore: Azure Backup with configurable retention. PIT restore at hourly/daily granularity
- Cosmos RU: Continuous backup with PIT restore (configurable retention: 7–30 days). Per-account granularity
- DocumentDB: Continuous backup with PIT restore (5-minute granularity, 35-day retention). Cluster-level snapshots

HA & Failover
- Atlas: Automatic failover within the replica set (typically <10 seconds). Retryable writes/reads ensure client-side resilience. Test failover via the Atlas UI
- Cosmos vCore: HA replicas with automatic failover. Failover time varies by tier
- Cosmos RU: Automatic failover across regions. Switchover transparent with session consistency
- DocumentDB: Automatic failover across read replicas, typically 30–60 seconds. No test-failover command

Multi-Region & Active-Active
- Atlas: Global Clusters with zone-based sharding. Pin data to regions for sovereignty. Active-active writes across regions via zone sharding — each region owns a slice of the keyspace and serves local reads/writes with single-digit-ms latency. Bi-directional replication across zones. Workload isolation by region or cloud
- Cosmos vCore: Zone-redundant HA within a single Azure region. Cross-region read replicas possible. No multi-region writes. No active-active. Failover to a secondary region is manual
- Cosmos RU: Multi-region writes with 5 consistency levels. Closest to active-active, but with conflict resolution (last-writer-wins by default). No zone-based data pinning — all regions hold all data
- DocumentDB: Global Clusters for cross-region read replication. Single-region writes only. Readers in other regions serve eventually consistent reads. No active-active. Failover promotes a secondary region

Multi-Cloud
- Atlas: AWS, Azure, GCP in the same cluster. 125+ regions. Workload isolation by cloud
- Cosmos vCore / Cosmos RU: Azure-only
- DocumentDB: AWS-only

Encryption
- Atlas: At rest (AES-256), in transit (TLS), CSFLE (driver-level), Queryable Encryption, KMIP and AWS/Azure/GCP KMS integration
- Cosmos vCore: At rest (Azure SSE, CMK), in transit (TLS). No CSFLE
- Cosmos RU: At rest (Azure SSE, CMK), in transit (TLS). No CSFLE
- DocumentDB: At rest (AWS KMS), in transit (TLS). No CSFLE

Developer Experience
- Atlas: Full mongosh, Compass, VS Code extension. MongoDB University (free). 60K+ Stack Overflow questions. Native aggregation builder in the Atlas UI. Data Explorer with in-browser CRUD
- Cosmos vCore: mongosh connects with caveats. Azure Data Studio integration. Smaller community. Some admin commands return errors in Compass
- Cosmos RU: mongosh connects. Azure Portal UI for collection browsing. Limited community resources specific to RU + MongoDB API. Must learn the RU capacity model
- DocumentDB: mongosh connects. AWS Cloud9 / CLI integration. Community resources often outdated. Explain plans differ from MongoDB — debugging requires learning DocumentDB-specific behavior

Migration Tooling
- Atlas: mongodump/mongorestore, mongomirror for live migration, Atlas Live Migrate (zero-downtime, built into the Atlas UI), Cluster-to-Cluster Sync (C2C), and Relational Migrator — translates relational schemas (Oracle, SQL Server, Postgres, MySQL) to document models, generates migration code, and supports continuous sync
- Cosmos vCore: Azure Database Migration Service. mongodump/mongorestore partially works. No live-migration equivalent
- Cosmos RU: Azure Data Factory, custom scripts. No mongodump support. No live-migration tool
- DocumentDB: AWS DMS. mongodump partially works (some index options lost). No zero-downtime live migration built in

Network Isolation
- Atlas: VPC peering, Private Link (AWS/Azure/GCP), IP access list, LDAP, SCIM
- Cosmos vCore: Azure Private Link, VNet integration
- Cosmos RU: Azure Private Link, VNet integration, IP firewall
- DocumentDB: VPC only. PrivateLink supported. No public endpoint option


Key Takeaways

Atlas = The Real Engine

Atlas runs the actual MongoDB binary — every feature, every operator, every release. Compatibility layers are approximations that lag behind and fail silently at the edges.

Partition Key Lock-In

Cosmos DB RU's immutable partition key + 20 GB ceiling creates a design-time decision with irreversible production consequences. Atlas's resharding is online and non-destructive.

Integrated Search & AI

Atlas Search, Vector Search, and hosted embedding models eliminate the need for separate search infrastructure. No sync pipelines, no eventual consistency between stores.

Cloud Lock-In

Cosmos DB = Azure only. DocumentDB = AWS only. Atlas = AWS + Azure + GCP, simultaneously in a single cluster with zone-based data pinning.


Migration Risk: What If You Need to Leave?

Starting on a compatibility layer and migrating to real MongoDB later isn't free. Here's what it takes.

Cosmos DB vCore → Atlas

Medium Risk

mongodump/mongorestore works for basic data. Index options and validation rules may not transfer cleanly. Biggest risk: application code that worked around vCore limitations (single-collection transactions, missing stages) may need refactoring. Change streams that relied on preview behavior need revalidation.

Cosmos DB RU → Atlas

High Risk

No mongodump/mongorestore. Data modeled around 20 GB partition limits and RU-optimized access patterns (partition key in every query) likely doesn't match MongoDB's sharding model. Application code is deeply coupled to RU-specific patterns. Change feed ≠ change streams. Essentially a rewrite of the data layer.

DocumentDB → Atlas

Medium Risk

mongodump/mongorestore partially works. Queries that avoided unsupported operators ($facet, $graphLookup, geospatial) may have been rewritten as application logic — that code can often be simplified back to native MongoDB. Biggest win: features like Atlas Search, Vector Search, and time series become available without external services.

The asymmetry

Migrating from MongoDB Atlas to any other platform requires giving up features. Migrating to MongoDB Atlas from a compatibility layer means gaining features and removing workarounds. The migration risk is fundamentally asymmetric — starting on Atlas keeps all doors open.

The Bottom Line

One card. Four verdicts.

MongoDB Atlas

The real MongoDB. Every feature, every operator, every release. Multi-cloud, active-active, integrated search and AI. Zero compatibility risk. The platform your application will never outgrow.

Cosmos DB vCore

The closest emulation. Reasonable for Azure-locked teams with simple CRUD workloads. Falls short on transactions, search, and the aggregation long tail. 10-shard ceiling limits growth.

Cosmos DB RU

Strong for global distribution with simple key-value patterns. The 20 GB partition ceiling, immutable partition key, and vast feature gaps make it a poor fit for MongoDB workloads that use the query language.

Amazon DocumentDB

An AWS convenience play. Familiar wire protocol, but missing aggregation stages, geospatial, CSFLE, and time series. Index blind spots ($ne/$nin) create silent performance cliffs. Acceptable for basic CRUD; risky for anything beyond.

If you're building on MongoDB's query language, build on MongoDB's engine.
Everything else is an approximation.

References & Further Reading