How to Merge JSON Files: Free Online Tool + Python + jq (2026)

Paginated APIs return data across dozens of separate JSON files. Microservices generate individual configuration files per environment. Database exports are split into chunks to comply with file size limits. Eventually, all these fragments need combining into single, usable datasets.
Four distinct approaches exist for merging JSON files, each optimized for different technical environments and skill levels. Browser-based tools require zero setup and handle most routine tasks. Python scripts automate repetitive merges and implement custom business logic. JavaScript solutions integrate naturally into Node.js and web application workflows. Command-line utilities like jq execute merges in single-line commands within shell scripts.
The challenge lies not in simple concatenation, but in handling real-world complexity. Source files rarely share identical structures. Nested objects require recursive merging rather than shallow key replacement. Arrays need deduplication when multiple sources contain overlapping records. Some workflows demand key-based merging where records combine by matching ID fields.
Testing focused on scenarios where typical solutions fail: configuration files with conflicting nested keys that need intelligent merging, API exports with duplicate records requiring deduplication logic, multi-gigabyte datasets where loading everything into memory causes failures, and files with inconsistent schemas where type checking prevents runtime errors.
Each method section includes complete working code, handles edge cases explicitly, and explains performance implications at scale.
Common JSON Merge Scenarios
Data Consolidation from Multiple Sources
Exported data from multiple APIs, services, or tools often needs to be combined into a single file for analysis or import. This is common in third-party integrations where each system exports its own JSON payloads.
Reassembling Split Files
Large JSON files that were split for processing, transfer, or storage eventually need reconstruction. Any workflow with file size limits, such as exports, uploads, or backup tooling, runs into this regularly.
Preparing Data for Analysis or Migration
Most analysis tools expect a single input file, not a folder of fragments. When data is scattered across multiple JSON files, merging is the required step before running queries, generating reports, or importing into databases.
Bundling Configuration Files
Microservice architectures maintain separate configuration files for different environments or components. Merging into unified configuration reduces deployment complexity and makes changes easier to review.
Simplifying Parsing and File Management
Single file operations reduce I/O overhead, simplify error handling, and decrease script complexity. It also becomes easier to validate results because there is one output file to sanity check and archive.
Combining Test Datasets
Test fixtures across different test suites are often merged into master datasets for more complete testing. This avoids duplicated setup work and keeps test data consistent across multiple environments.
Log Aggregation
Application logs exported as JSON from different time periods or services often need to be combined for complete analysis. A single merged dataset makes it possible to search, group, and deduplicate events without hopping between files.
JSON Merge Strategies by Structure Type
The correct merge approach depends on data structure.
Array Concatenation (Most Common)
Each file contains a JSON array, and the merge simply produces one unified array. Example: file1.json has [A, B] and file2.json has [C, D], resulting in [A, B, C, D].
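In Python terms, array concatenation is plain list addition. A minimal in-memory sketch with placeholder values (as the arrays would appear after json.load on each file):

```python
# Two arrays as they would appear after json.load on each file
file1 = ["A", "B"]
file2 = ["C", "D"]

# Concatenation preserves order: all of file1, then all of file2
merged = file1 + file2
print(merged)  # ['A', 'B', 'C', 'D']
```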
Object Merging
When each file contains a JSON object, merging means combining keys into one output object. Example: file1.json has {"name": "Alice"} and file2.json has {"age": 30}, resulting in {"name": "Alice", "age": 30}. When keys overlap, decide precedence, and in most workflows later files override earlier ones.
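The precedence rule is visible in two lines of Python using dict unpacking (sample values match the example above; the conflicting third file is hypothetical):

```python
# Object merging: later sources override earlier ones for duplicate keys
file1 = {"name": "Alice"}
file2 = {"age": 30}
file3 = {"name": "Alicia"}  # Hypothetical conflicting key

merged = {**file1, **file2}         # Disjoint keys simply combine
with_conflict = {**file1, **file3}  # Last source wins for "name"

print(merged)         # {'name': 'Alice', 'age': 30}
print(with_conflict)  # {'name': 'Alicia'}
```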
Deep Merging
Deep merging recursively combines nested objects instead of overwriting the entire parent key. If both files have a settings key with different sub keys, a deep merge preserves sub keys from both sources rather than replacing the whole settings object.
Key-Based Merging
Key based merging combines arrays of objects by matching on a specific key like id. Records with identical IDs merge together, and unique records append to the result. Strategy selection determines which tool and implementation to use.
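Key-based merging gets no dedicated example later in this guide, so here is a minimal Python sketch (the sample records are hypothetical):

```python
# Key-based merge: records sharing an "id" combine into one record;
# records whose id appears in only one source are appended as-is.
batch1 = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
batch2 = [{"id": 2, "email": "bob@example.com"}, {"id": 3, "name": "Carol"}]

by_id = {}
for record in batch1 + batch2:
    # Later fields merge into any existing record with the same id
    by_id.setdefault(record["id"], {}).update(record)

merged = list(by_id.values())
print(merged)  # Record 2 now carries both "name" and "email"
```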
Method 1: Browser-Based JSON Merge Tool (No Code Required)
For quick merges without writing code, browser-based tools provide the fastest solution.
Try it here: merge-json-files.com
Workflow:
- Navigate to JSON Merger Tool
- Upload two or more JSON files
- Select a merge strategy (for example, merge arrays together or combine objects by key)
- Preview the result to verify correctness
- Download the merged JSON file
Privacy and features: The tool processes files entirely client side in the browser, so data never uploads to any server. It supports deeply nested JSON structures, flags malformed input with specific error messages, and offers optional pretty print formatting for readable output.
Use case: Quick merging of a few files without terminal access or scripting. Particularly valuable for non technical team members handling data exports.
Method 2: Merge JSON Files Using Python
Python offers the most flexibility for JSON merging when automation, custom logic, or large datasets are involved.
Multiple approaches cover different scenarios below.
Basic Array Merge
Simplest case: each file contains a JSON array, combined into one array.
import json
import glob

# Load all JSON files in the folder
json_files = glob.glob("./data/*.json")
merged_data = []

for file in json_files:
    with open(file) as f:
        data = json.load(f)
    merged_data.extend(data)  # Assumes each file contains a JSON array

# Save merged file
with open("merged.json", "w") as f:
    json.dump(merged_data, f, indent=4)

print(f"Merged {len(json_files)} files, {len(merged_data)} total records")
Technical note: Use extend for arrays (adds individual elements) rather than append (nests arrays inside arrays). Glob patterns handle multiple files automatically.
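The extend versus append distinction is easy to verify in a REPL:

```python
# append nests the whole list as a single element
a = [1, 2]
a.append([3, 4])
print(a)  # [1, 2, [3, 4]]

# extend adds the individual elements
b = [1, 2]
b.extend([3, 4])
print(b)  # [1, 2, 3, 4]
```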
Merging Object-Based JSON Files
When each file represents a JSON object rather than an array:
import json
import glob
merged = {}

for filepath in sorted(glob.glob("./configs/*.json")):
    with open(filepath) as f:
        data = json.load(f)
    merged.update(data)  # Later files override earlier ones for duplicate keys

with open("merged_config.json", "w") as f:
    json.dump(merged, f, indent=2)
Important: update performs a shallow merge. For identical keys, the last file wins. Nested objects require a deep merge implementation instead.
Deep Merge for Nested Objects
When your JSON files have nested structures and you want to preserve data at all levels:
import json
import glob
import copy
def deep_merge(base, override):
    """Recursively merge override into base."""
    result = copy.deepcopy(base)
    for key, value in override.items():
        if key in result and isinstance(result[key], dict) and isinstance(value, dict):
            result[key] = deep_merge(result[key], value)
        elif key in result and isinstance(result[key], list) and isinstance(value, list):
            result[key] = result[key] + value
        else:
            result[key] = copy.deepcopy(value)
    return result

merged = {}
for filepath in sorted(glob.glob("./data/*.json")):
    with open(filepath) as f:
        data = json.load(f)
    merged = deep_merge(merged, data)

with open("deep_merged.json", "w") as f:
    json.dump(merged, f, indent=2)
This handles cases where both files have "settings": {"theme": "dark"} and "settings": {"language": "en"}.
Instead of one overwriting the other, the result becomes "settings": {"theme": "dark", "language": "en"}.
Merging with Deduplication
Combining arrays potentially containing duplicate records:
import json
import glob
merged_data = []
seen_ids = set()
skipped = 0

for filepath in sorted(glob.glob("./exports/*.json")):
    with open(filepath) as f:
        data = json.load(f)
    for record in data:
        record_id = record.get('id')
        if record_id and record_id in seen_ids:
            skipped += 1
            continue  # Skip duplicate
        if record_id:
            seen_ids.add(record_id)
        merged_data.append(record)

with open("merged_unique.json", "w") as f:
    json.dump(merged_data, f, indent=2)

print(f"Merged to {len(merged_data)} unique records (skipped {skipped} duplicates)")
Handling Large Files Efficiently
For very large JSON files where memory consumption becomes critical:
import json
import glob
with open("merged.json", "w") as out:
    out.write("[\n")
    first = True
    for filepath in sorted(glob.glob("./data/*.json")):
        with open(filepath) as f:
            data = json.load(f)
        for record in data:
            if not first:
                out.write(",\n")
            json.dump(record, out)
            first = False
    out.write("\n]")
This writes records directly to the output file instead of building the entire merged array in memory.
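Note that the example above still parses each input file fully before writing. If even single input files are too large for memory, a common workaround is to keep data in line-delimited JSON (JSONL), where each line is one record and nothing ever needs full parsing. A sketch assuming the inputs follow the JSONL convention (the merge_jsonl helper name is hypothetical):

```python
import glob
import json

def merge_jsonl(pattern, out_path):
    """Merge JSONL inputs one line at a time; works for files of any size."""
    count = 0
    with open(out_path, "w") as out:
        for filepath in sorted(glob.glob(pattern)):
            with open(filepath) as f:
                for line in f:
                    line = line.strip()
                    if line:
                        json.loads(line)  # Validate each record before writing through
                        out.write(line + "\n")
                        count += 1
    return count
```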
Python advantages: Fully automated merging for unlimited files, support for complex logic like deduplication and validation, and graceful edge case handling with try/except blocks.
Considerations: Ensure consistent file structure or handle inconsistencies in code. Monitor memory usage with very large datasets.
Method 3: Merge JSON with JavaScript (Web and Node.js Environments)
JavaScript environments provide native JSON handling capabilities.
Browser example (handling uploaded files)
<input type="file" multiple id="jsonFiles" />
<script>
document.getElementById("jsonFiles").addEventListener("change", async (e) => {
  let files = e.target.files;
  let merged = [];
  for (let file of files) {
    let text = await file.text();
    let json = JSON.parse(text);
    merged = merged.concat(json);
  }
  console.log(JSON.stringify(merged, null, 2));
});
</script>
Node.js version
const fs = require("fs");
const path = require("path");
const dataDir = "./data";
let merged = [];

const files = fs
  .readdirSync(dataDir)
  .filter((f) => f.endsWith(".json"))
  .sort();

for (const file of files) {
  const filepath = path.join(dataDir, file);
  const data = JSON.parse(fs.readFileSync(filepath, "utf8"));
  if (Array.isArray(data)) {
    merged = merged.concat(data);
  } else {
    merged.push(data);
  }
  console.log(`Read ${file}: ${Array.isArray(data) ? data.length : 1} records`);
}

fs.writeFileSync("merged.json", JSON.stringify(merged, null, 2));
console.log(`\nMerged ${files.length} files, ${merged.length} total records`);
Deep merge in Node.js
For merging configuration objects with nested structures:
const fs = require("fs");
function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (
      source[key] &&
      typeof source[key] === "object" &&
      !Array.isArray(source[key])
    ) {
      if (!target[key]) target[key] = {};
      deepMerge(target[key], source[key]);
    } else if (Array.isArray(source[key]) && Array.isArray(target[key])) {
      target[key] = target[key].concat(source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

const files = ["base.json", "overrides.json", "local.json"];
let config = {};

for (const file of files) {
  const data = JSON.parse(fs.readFileSync(file, "utf8"));
  config = deepMerge(config, data);
}

fs.writeFileSync("merged_config.json", JSON.stringify(config, null, 2));
JavaScript works naturally in web or Node.js environments.
The browser approach suits client-side tools, while Node.js handles file system operations for server-side automation.
Method 4: Command-Line Approach with jq (Cross-Platform)
Terminal users benefit from jq for rapid JSON processing.
Install jq
sudo apt install jq # Debian/Ubuntu
brew install jq # macOS
choco install jq # Windows (via Chocolatey)
Merge array based files
jq -s add file1.json file2.json file3.json > merged.json
The -s flag slurps multiple inputs into an array, and the add function concatenates those arrays, all in a single command.
Merge all JSON files in a directory
jq -s add ./data/*.json > merged.json
Merge object based files
jq -s 'reduce .[] as $item ({}; . * $item)' file*.json > merged.json
The * operator merges objects recursively, so nested keys from both files are preserved and later files win for conflicting values. For a strictly shallow merge, where a later file's top-level keys replace earlier ones wholesale, use + instead of *.
Merge with key based deduplication
jq -s 'add | unique_by(.id)' file1.json file2.json > merged.json
Pretty print the output
jq -s add *.json > merged_pretty.json
jq pretty prints by default; pass the -c flag for compact, single line output instead.
jq delivers exceptional performance, scriptability, and integration with DevOps workflows.
The syntax requires learning, but the commands above cover most production use cases.
Handling Nested and Inconsistent JSON Structures
Real-world JSON files rarely have identical structures.
Handling them safely requires defensive programming.
Merging arrays nested inside objects
import json
import glob
merged = {"users": []}

for filepath in sorted(glob.glob("./data/*.json")):
    with open(filepath) as f:
        data = json.load(f)
    merged["users"].extend(data.get("users", []))

with open("merged_users.json", "w") as f:
    json.dump(merged, f, indent=2)
The .get("users", []) call safely handles files missing a users key, returning an empty list instead of throwing errors.
Handling files with mixed structures
import json
import glob
merged_arrays = []
merged_objects = {}

for filepath in sorted(glob.glob("./data/*.json")):
    with open(filepath) as f:
        data = json.load(f)
    if isinstance(data, list):
        merged_arrays.extend(data)
    elif isinstance(data, dict):
        merged_objects.update(data)

# Decide how to combine based on what you found
if merged_arrays and not merged_objects:
    result = merged_arrays
elif merged_objects and not merged_arrays:
    result = merged_objects
else:
    result = {"records": merged_arrays, **merged_objects}

with open("merged.json", "w") as f:
    json.dump(result, f, indent=2)
The core principle: never assume identical file structures.
Use type checking and .get() with defaults to handle variations gracefully.
Production Merge Method Comparison
| Method | Best For | Skill Level | Batch Support | Deep Merge | Deduplication |
|---|---|---|---|---|---|
| Online Tool | Quick tasks | Beginner | Yes, via upload | Basic | No |
| Python Script | Automation, large sets | Intermediate | Yes, unlimited | Yes | Yes |
| JavaScript | Web or Node.js devs | Intermediate | Yes | Yes | Yes |
| jq CLI | DevOps/Linux | Advanced | Yes, wildcard | Yes, via * | Yes |
Production Use Cases
Combine Paginated API Results
APIs returning data across multiple pages generate separate JSON files per page.
Merging produces a single dataset for import or analysis.
Merge Configuration Files for Applications
Base config, environment-specific overrides, and local developer settings deep-merge into one final configuration object.
Prepare Datasets for ML Model Training
Training data from multiple sources or annotation batches requires consolidation.
Merging into a single JSONL file or JSON array is a common preprocessing step.
Unify Split Files from Backup Systems
Data pipelines produce chunked output.
Merging the chunks serves downstream consumers that expect a single input.
For the opposite operation, check out the guide on how to split JSON files.
Aggregate Application Logs
JSON-formatted logs from different servers or time periods combine for analysis in Elasticsearch, Splunk, or command-line tools.
Consolidate Test Fixtures
Integration tests against complete datasets benefit from merged fixtures across different test suites.
Best Practices for Production JSON Merging
Validate Input Files Before Merging
A single malformed file breaks the entire merge process.
Run inputs through a JSON validator (jsonlint.com) before attempting combination.
Use Version Control for Configuration Merges
Track configuration files or time-sensitive data in Git.
This enables easy change tracking and reversion.
Back Up Source Files
Preserve original files until the merged output is verified correct.
A simple precaution that prevents hours of rework.
Ensure Consistent Structure
Decide up front how to handle structure mismatches rather than discovering issues after processing thousands of files.
Handle Duplicate Fields Deliberately
When files share keys with different values, make conscious precedence decisions rather than relying on implicit behavior.
Validate Output
Verify that the merged result is valid JSON with the expected record count.
A quick sanity check catches most issues.
Format Appropriately for Context
Use indent=2 for human-readable development files.
Compact JSON saves space and parses faster in production and storage contexts.
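The size difference is easy to measure with json.dumps; passing separators=(",", ":") removes the default spaces for the most compact output:

```python
import json

data = {"users": [{"id": 1, "name": "Alice"}]}

pretty = json.dumps(data, indent=2)                # Readable, larger
compact = json.dumps(data, separators=(",", ":"))  # Smallest valid encoding

# Both round-trip to the same data; only the byte size differs
print(len(pretty), len(compact))
```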
Choosing the Right Merge Method
Merging JSON files ranges from simple for basic cases to nuanced for real-world data.
Approach selection depends on specific requirements:
If you want quick, one off merges, the online JSON merge tool handles tasks in seconds with no code and no installation. Data processing occurs entirely in browser, and you can preview results before downloading.
If you need automation and custom logic, Python provides the most flexibility. It handles deduplication, deep merging, validation, and special requirements with relatively little code.
If you are already in a web or Node.js project, JavaScript keeps everything in one language, whether you merge files client side or server side.
If you prefer command line workflows, jq delivers speed and conciseness. A single jq -s add *.json > merged.json covers many common use cases.
Regardless of method choice, always validate input, back up originals, and verify output before deployment.
Try the JSON Merge Tool for free, instant browser-based merging.
Related tools: For different data formats, the JSON to Excel converter extends workflow capabilities.
Frequently Asked Questions
What's the difference between shallow merge and deep merge in JSON?
Shallow merge combines objects at the top level only. If both files have a settings key, the entire settings object from the second file replaces the first. Deep merge recursively combines nested objects, preserving sub-keys from both sources. Use shallow merge for simple configs, deep merge when nested structures need preservation.
How do I merge JSON files without duplicate keys?
For arrays, use deduplication logic that tracks seen IDs in a set and skips records already processed. For objects, decide precedence (last file wins is common). In Python, use seen_ids.add(record['id']) before appending. In jq, use unique_by(.id) after merging arrays.
Can I merge JSON files with different structures?
Yes, but you need defensive code. Use type checking (isinstance(data, list)) to handle mixed arrays and objects. Use .get() with defaults for missing keys. Decide upfront whether to combine into separate sections or normalize structures before merging. The Python examples above show both approaches.
What's the fastest way to merge JSON files on the command line?
Use jq: jq -s add *.json > merged.json for arrays, or jq -s 'reduce .[] as $item ({}; . * $item)' *.json > merged.json for objects. Single command execution, extremely fast, works on Linux and macOS. Install with brew install jq or apt install jq.
How do I merge more than 10 JSON files at once in Python?
Use glob.glob() to match all files: for filepath in glob.glob('./data/*.json'). This handles unlimited files automatically. For very large datasets, write records directly to output file instead of building merged array in memory. The streaming example in Method 2 shows this approach.