Merge remote-tracking branch 'origin' into helia

Hayden Young 2023-12-02 21:48:09 +00:00
commit 82591bf456
12 changed files with 633 additions and 392 deletions

.github/FUNDING.yml

@@ -9,4 +9,4 @@ community_bridge: # Replace with a single Community Bridge project-name e.g., cl
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
custom: ["https://github.com/orbitdb/funding"]


@@ -1,332 +1,5 @@
# Changelog
Note: OrbitDB follows [semver](https://semver.org/). We are currently in alpha: backwards-incompatible changes may occur in minor releases.
For now, please refer to our Git commit history for a list of changes.
## v0.29.0
### ESM
In this release we've updated OrbitDB and all of its modules to use [ESM](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules) (JavaScript Modules).
### Latest js-ipfs
This release also brings compatibility with the latest version of js-ipfs (v0.66.0).
### Level v8.0.0
In addition, we've updated the underlying data storage, [Level](https://github.com/Level), to its latest version, v8.0.0. We've dropped the automatic migration from Level < v5.0.0 stores, so if you have databases that use an old version of Level, please see Level's [Upgrading Guide](https://github.com/Level/level/blob/master/UPGRADING.md) for how to migrate manually.
### Misc
We've also fixed various bugs across the code base, updated all dependencies to their latest versions and removed the ES5 distribution builds.
Note that there are no API changes to OrbitDB in this release.
For more details, please see the relevant main PRs:
- https://github.com/orbitdb/orbit-db/pull/1044
- https://github.com/orbitdb/orbit-db/pull/1050
- https://github.com/orbitdb/orbit-db-storage-adapter/pull/24
- https://github.com/search?q=org%3Aorbitdb+esm&type=issues
## v0.24.1
### JS-IPFS 0.44 Support
Until now, the newest versions of js-ipfs were not supported, primarily because the js-ipfs API moved to async iteration starting in version 0.41. js-ipfs versions 0.41-0.44 are now supported.
Relevant PRs:
- https://github.com/orbitdb/orbit-db-store/pull/86
- https://github.com/orbitdb/orbit-db/pull/782
### Store Operation Queue
All included stores (and any store extending orbit-db-store v3.3.0+) now queue operations. Any update sent to a store is executed sequentially, and the store's close method now waits for the queue to empty.
Relevant PRs:
- https://github.com/orbitdb/orbit-db-store/pull/85
- https://github.com/orbitdb/orbit-db-store/pull/91
### Docstore putAll Operation
A new method named `putAll` was added to the docstore. It allows multiple keys to be set in the docstore in a single oplog entry, which brings significant [performance benefits](https://gist.github.com/phillmac/155ed1eb232e75fda4a793e7672460fd). Note that nodes running an older version of the docstore will ignore any changes made by `putAll` operations.
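For example, writing several documents in one oplog entry looks roughly like this (a sketch, assuming `db` is an open docstore using the default `_id` index field):
```js
// both documents are written as a single oplog entry
const hash = await db.putAll([
  { _id: 'doc1', name: 'hello' },
  { _id: 'doc2', name: 'world' }
])
```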
Relevant PRs:
- https://github.com/orbitdb/orbit-db-docstore/pull/36
### Oplog Events
It is now possible to listen for specific store operations as they are added to the store. To learn more, see the [documentation](https://github.com/orbitdb/orbit-db-store#events) for the `log.op.${operation}` event.
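For example, to react to `PUT` operations (a sketch — the callback arguments shown here are an assumption, see the linked documentation for the exact signature):
```js
// fires each time a PUT operation is added to the oplog
db.events.on('log.op.PUT', (id, address, payload) => {
  console.log('PUT:', payload)
})
```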
Relevant PRs:
- https://github.com/orbitdb/orbit-db-store/pull/87
### orbit-db tests
Tests now use the [orbit-db-test-utils](https://github.com/orbitdb/orbit-db-test-utils) package to deduplicate test utilities. It was already used in most subpackages, such as [orbit-db-store](https://github.com/orbitdb/orbit-db-store), and is now used in the [orbit-db](https://github.com/orbitdb/orbit-db) repo as well!
Relevant PRs:
- https://github.com/orbitdb/orbit-db-test-utils/pull/11
- https://github.com/orbitdb/orbit-db/pull/794
### orbit-db-store sync method
The orbit-db-store `sync` method is now async and resolves only after the oplog heads have been added.
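A minimal sketch, assuming `heads` is an array of oplog head entries received from a peer:
```js
// resolves only once the heads have been added to the oplog
await store.sync(heads)
```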
Relevant PRs:
- https://github.com/orbitdb/orbit-db-store/pull/38
- https://github.com/orbitdb/orbit-db-store/pull/84
### Electron Renderer FS Shim
Fixes a bug related to the native filesystem when OrbitDB is used in Electron.
Relevant PRs:
- https://github.com/orbitdb/orbit-db/pull/783
- https://github.com/orbitdb/orbit-db/pull/795
### Re-exports
You can now import the AccessControllers, Identities, and/or Keystore modules from OrbitDB with object destructuring like so:
`const { AccessControllers, Identities, Keystore } = require('orbit-db')`
Relevant PRs:
- https://github.com/orbitdb/orbit-db/pull/785
## v0.23.0
### Performance Improvements
Performance improvements have been made to both writing and loading.
Our benchmarks show a 2-3x increase in loading and writing speeds! :tada:
```
v0.22.0
Starting IPFS...
DB entries: 1000
Writing DB...
writing took 3586 ms
Loading DB...
load took 1777 ms
v0.23.0
Starting IPFS...
DB entries: 1000
Writing DB...
writing took 1434 ms
Loading DB...
load took 802 ms
// Writing improved: ~2.5x
// Loading improved: ~2.2x
```
The speed-up between the versions is more pronounced as the size of the database increases:
```
v0.22.0
Starting IPFS...
DB entries: 10000
Writing DB...
writing took 31292 ms
Loading DB...
load took 26812 ms
v0.23.0
Starting IPFS...
DB entries: 10000
Writing DB...
writing took 10383 ms
Loading DB...
load took 5542 ms
// Writing improved: ~3x
// Loading improved: ~4.8x
```
To try out the benchmarks for yourself, run `node benchmarks/benchmark-load.js`
### Entry references
Each entry added now contains references to previous entries in powers of 2 distance apart up to a maximum distance of `referenceCount` (default 32) from it, speeding up both writing and loading and resulting in smaller entry sizes. [#275](https://github.com/orbitdb/ipfs-log/pull/275)
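As an illustration (not the actual ipfs-log code), the back-references of entry *n* land at power-of-2 distances behind it:
```js
// which earlier entries does entry n reference?
const references = (n, referenceCount = 32) => {
  const refs = []
  for (let distance = 1; distance <= referenceCount; distance *= 2) {
    if (n - distance >= 0) refs.push(n - distance)
  }
  return refs
}
// references(40) => [ 39, 38, 36, 32, 24, 8 ]
```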
### Signature Caching
The default keystore and identity-provider now have caches to speed up verification of entry signatures and identities. See [#53](https://github.com/orbitdb/orbit-db-identity-provider/pull/53) and [#38](https://github.com/orbitdb/orbit-db-keystore/pull/38)
### Offline mode
An optional `offline` flag has been added which, when set to `true`, prevents pubsub from starting and messages from being exchanged. This is useful for speeding up testing and for when you would like to use your database locally without networking enabled.
To use offline mode, start your IPFS nodes offline (with `new IPFS({ start: false })`) and create your OrbitDB instance as follows:
```js
const orbitdb = await OrbitDB.createInstance(ipfs, { offline: true, id: 'mylocalid' })
```
Note that an `id` will need to be passed if your IPFS node is offline. If you would like to start replicating databases after starting OrbitDB in offline mode, the OrbitDB instance needs to be re-created. See [#726](https://github.com/orbitdb/orbit-db/pull/726)
### Pinning and Garbage Collection
OrbitDB does **not** automatically pin content added to IPFS. This means that if garbage collection is triggered, any unpinned content will be erased. An optional `pin` flag has been added which, when set to `true`, will pin the content to IPFS and can be set as follows:
```js
await db.put('name', 'hello', { pin: true })
```
Note that this is currently _experimental_ and will degrade performance. For more info see [this issue](https://github.com/ipfs/js-ipfs/issues/2650).
It is recommended that you collect the hashes of the entries and pin them outside of the `db.put/add` calls before triggering garbage collection.
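For example (a sketch, assuming `ipfs` is the IPFS node instance passed to OrbitDB):
```js
// collect the entry hash from the write...
const hash = await db.put('name', 'hello')
// ...and pin it explicitly before triggering garbage collection
await ipfs.pin.add(hash)
```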
## v0.22.1
- Thanks to [#712](https://github.com/orbitdb/orbit-db/pull/712) from @kolessios, as well as the efforts of @BartKnucle and @durac :heart:, OrbitDB now works on Windows :tada: We invite our Windows friends to try it out!
- Several submodules are now exposed in the OrbitDB class ([#717](https://github.com/orbitdb/orbit-db/pull/717), thanks @hazae41)
## v0.22.0
Up to 10x Performance Increase in Appends :tada:
- `sortFn` now at the top level
- `orbit-db-storage-adapter` now provides cache and keystore interop for mongo, redis, and any `abstract-leveldown`
### (semi-)Breaking Changes
To improve performance, this release changes the way caches are managed.
#### Cache Directory Locations
_Your cache directory structure will change_. There is a migration script that will run upon creating the database.
Old Structure (node.js default):
```
orbitdb/[OrbitDB ID]/keystore
orbitdb/[DB ID]/db-name/
orbitdb/[DB ID]/db-name1/
orbitdb/[DB ID]/db-name2/
```
New Structure (node.js default):
```
orbitdb/[OrbitDB ID]/keystore
orbitdb/[OrbitDB ID]/cache
```
##### `identityKeysPath` is optional, but important!
Read more about what this release includes [here](https://orbitdb.org/orbitdb-release-v0.22).
## v0.20.0
***This release contains API breaking changes!*** The release **IS** backwards compatible with respect to old OrbitDB addresses and is able to process and read old data structures. The shape of the `Entry` object has also changed: it now includes an [`identity`](https://github.com/orbitdb/orbit-db-identity-provider) field, the version `v` has been incremented to 1, and the `hash` property now holds a [CIDv1](https://github.com/multiformats/cid#cidv1) multihash string.
API breaking changes:
- ### Constructor:
The OrbitDB constructor requires an instance of `identity` to be passed as an argument:
```javascript
const orbitdb = new OrbitDB(ipfs, identity, [options])
```
- ### Creating an OrbitDB instance:
The preferred method for creating an instance of OrbitDB is the async `createInstance` method which will create an `identity` for you if one is not passed in the options.
```javascript
const orbitdb = await OrbitDB.createInstance(ipfs, [options])
```
- ### OrbitDB key
The `key` property has been removed and replaced with `identity`. You can access the public key with:
```javascript
orbitdb.identity.publicKey
```
Read further and see the [API documentation](https://github.com/orbitdb/orbit-db/blob/master/API.md), [examples](https://github.com/orbitdb/orbit-db/tree/master/examples) and [Getting Started Guide](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md) to learn more about the changes. Note that we don't use semver for the npm module, so be sure to lock your orbit-db dependency to the previous release if you don't want to upgrade.
### Improved Write Permissions
OrbitDB now supports dynamically granting write access to keys after database creation. Previous releases required the database address to change if the write-access keys were changed. We've added an [AccessController module](https://github.com/orbitdb/orbit-db-access-controllers) which allows access controllers with custom logic to be added to OrbitDB. Two examples of how to create and add new access controllers can be found in the repo: an [Ethereum-based access controller](https://github.com/orbitdb/orbit-db-access-controllers/blob/master/src/contract-access-controller.js), which uses a smart contract to determine access rights, and an [OrbitDB access controller](https://github.com/orbitdb/orbit-db-access-controllers/blob/master/src/orbitdb-access-controller.js), which uses another OrbitDB store to maintain access rights. For more information, see: [Access Control](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md#access-control).
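A sketch of using the OrbitDB access controller (`otherIdentityId` is a placeholder for the identity you want to authorize):
```js
const db = await orbitdb.keyvalue('my-db', {
  accessController: {
    type: 'orbitdb',
    write: [orbitdb.identity.id]
  }
})
// grant write access dynamically, after the database has been created
await db.access.grant('write', otherIdentityId)
```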
### Identity Support
We've added orbit-db-identity-provider which allows users to link external identities, such as an Ethereum account, with their OrbitDB identity. For more information, see [orbit-db-identity-provider](https://github.com/orbitdb/orbit-db-identity-provider).
### Keystore fix
A bug in orbit-db-keystore in which messages larger than 32 bytes signed by the same key produced the same signature has been fixed.
### js-ipfs v0.34.x support
OrbitDB now uses the ipfs-dag API and thus supports the latest js-ipfs again. :tada:
### go-ipfs support
With this release, we finally bring back support for using OrbitDB with go-ipfs (through js-ipfs-http-client). You can now use OrbitDB again with a running IPFS daemon, without starting an in-process js-ipfs node.
To make OrbitDB work again with go-ipfs, we refactored some parts of the messaging and created two new modules: ipfs-pubsub-peer-monitor and ipfs-pubsub-1on1. Both are modules on top of IPFS Pubsub and are used to handle the automatic message exchange when peers connect.
As this is the first release with support for go-ipfs in a long time, please report any problems you may experience!
### Improved test suite and documentation
We've improved the documentation by adding details, fixing errors, and clarifying explanations.
We also improved the tests a lot since the previous release. We now run the tests with js-ipfs-http-client (go-ipfs) in addition to running them with js-ipfs (Node.js). We've also cleaned up and refactored the boilerplate code for tests, improved the speed of the test run and generally polished the test suite for better readability and maintainability.
### Custom stores
OrbitDB can now use a custom store through `addDatabaseType()`; see more [here](https://github.com/orbitdb/orbit-db/blob/master/API.md#orbitdbadddatabasetypetype-store) and [here](https://github.com/orbitdb/orbit-db-store#creating-custom-data-stores).
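A minimal sketch, where `CustomStore` stands in for your own class extending orbit-db-store:
```js
// register the store type, then create a database that uses it
OrbitDB.addDatabaseType(CustomStore.type, CustomStore)
const db = await orbitdb.create('custom-db', CustomStore.type)
```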
### Important fixes
The previous release brought in LevelDB as the storage backend for Node.js and the browser, and there were some rough edges. We've fixed a bunch of problems related to using LevelDB and it should all work now.
Lastly, we further improved the replication code and its performance in places.
## v0.19.0
This release brings a bunch of fixes and updates improving performance, stability and browser compatibility. As always, we highly encourage you to update to the latest version and report any problems at https://github.com/orbitdb/orbit-db/issues.
A big thank you to all the contributors who helped and contributed to this release! <3
### Replication
The previous release included a refactored version of the replication code, and we've improved it even further in this release in terms of both performance and stability. We're now seeing *huge* improvements in replication speed, especially when replicating a database from scratch.
To observe these improvements, run the [browser examples](https://github.com/orbitdb/orbit-db/tree/master/examples/browser) in two (different) browser tabs and replicate a database with > 100 or so entries from one tab to the other.
### Browser compatibility
We had some problems in browsers due to the way we used native modules. This is now fixed and OrbitDB should work in the browser just the same as in Node.js.
### LevelDB
We've switched from using filesystem-based local cache to using LevelDB as the local storage. [Leveldown](https://www.npmjs.com/package/leveldown/) is used when run in Node.js and [level-js](https://www.npmjs.com/package/level-js) is used for browsers.
### General Updates
We put some work into the [CRDTs library](https://github.com/orbitdb/crdts) we're using and have updated OrbitDB to use the latest version. We've added more tests and improved the test suite code, and tests now run faster than they did previously.
### Performance
With all the updates and fixes, we're now seeing much better performance for replicating databases between peers as well as for write throughput. See the [benchmarks](https://github.com/orbitdb/orbit-db/tree/master/benchmarks) if you're interested in trying it out yourself.
## v0.18.0
This release is a major one as we've added new features, fixed many of the old problems, improved the performance and code base and overhauled the documentation. OrbitDB is now more robust, easier to use, faster and comes with the much-awaited write-permissions feature.
***This release contains API breaking changes with no backward compatibility!*** Read further and see the [API documentation](https://github.com/orbitdb/orbit-db/blob/master/API.md), [examples](https://github.com/orbitdb/orbit-db/tree/master/examples) and [Getting Started Guide](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md) to learn more about the changes. Note that we don't use semver for the npm module, so be sure to lock your orbit-db dependency to the previous release if you don't want to upgrade.
### Write-permissions
OrbitDB now has write-permissioned databases! \o/ This gives us verifiable, distributed databases and data structures enabling tons of new use cases and possibilities. User-owned data collections, feeds and lists, State and Payment Channels, and many more!
Permissions are defined by public keys, and databases in OrbitDB support one or multiple write keys per database. Each database update is signed with a write-access key and the signature is verified by clients against the access control information. The next step is to extend the access control functionality to include read permissions. Read more about [Access Control](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md#access-control) and [Keys](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md#keys).
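A sketch of the v0.18-era API (`friendPublicKey` is a placeholder for another peer's public key):
```js
const db = await orbitdb.keyvalue('first-database', {
  write: [
    orbitdb.key.getPublic('hex'), // our own public key
    friendPublicKey               // another key allowed to write
  ]
})
```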
### Addresses
OrbitDB databases are now addressed by name and ID through a naming scheme:
```
/orbitdb/Qmd8TmZrWASypEp4Er9tgWP4kCNQnW4ncSnvjvyHQ3EVSU/my/database/hello
```
Read more about [Addresses](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md#address).
### Replication
The previous versions of OrbitDB had a flaky replication implementation which has been completely re-written for this release. We've made performance improvements and more importantly, peers now start syncing the database automatically and reliably.
### Performance
With several performance improvements made throughout OrbitDB's code base and the latest [IPFS](https://github.com/ipfs/js-ipfs), we're seeing much better throughput numbers in benchmarks. There are still many improvements to be made!
### Documentation and Examples
We've written a brand new [Getting Started Guide](https://github.com/orbitdb/orbit-db/blob/master/GUIDE.md) to work as a tutorial and a place to understand OrbitDB's features and usage. The [API documentation](https://github.com/orbitdb/orbit-db/blob/master/API.md) was also updated to reflect latest features and changes.
All [examples](https://github.com/orbitdb/orbit-db/tree/master/examples) were updated along with an [updated UI](https://raw.githubusercontent.com/orbitdb/orbit-db/feat/write-access/screenshots/example1.png) for the [browser demo](https://ipfs.io/ipfs/QmRosp97r8GGUEdj5Wvivrn5nBkuyajhRXFUcWCp5Zubbo/). [Another small browser demo](https://ipfs.io/ipfs/QmasHFRj6unJ3nSmtPn97tWDaQWEZw3W9Eh3gUgZktuZDZ/) was added and there's a [TodoMVC with Orbitdb example](https://github.com/orbitdb/orbit-db/issues/246) in the works.
### Code Base Improvements
Due to legacy reasons, OrbitDB previously used a wrapper module for IPFS called `ipfs-daemon`. We have removed and deprecated `ipfs-daemon` and are now using [js-ipfs](https://github.com/ipfs/js-ipfs) directly!
We've switched to using *async/await* throughout the modules in the code base. This means the minimum required version of Node.js is now 8.0.0. To use OrbitDB with older versions of Node.js, we provide an [ES5-compatible build](https://github.com/orbitdb/orbit-db/tree/master/dist/es5). We've also added support for logging, which can be turned on with the `LOG=[debug|warn|error]` environment variable.
## v0.12.0
- IPFS pubsub
https://github.com/orbitdb/orbitdb/commits/v1.0.1


@@ -4,7 +4,7 @@
<img src="images/orbit_db_logo_color.png" width="256" />
</p>
[![Matrix](https://img.shields.io/matrix/orbit-db:matrix.org?label=chat%20on%20matrix)](https://app.element.io/#/room/#orbit-db:matrix.org) [![npm version](https://badge.fury.io/js/orbit-db.svg)](https://www.npmjs.com/package/@orbitdb/core) [![node](https://img.shields.io/node/v/orbit-db.svg)](https://www.npmjs.com/package/@orbitdb/core)
[![Matrix](https://img.shields.io/matrix/orbit-db%3Amatrix.org)](https://app.element.io/#/room/#orbit-db:matrix.org) [![npm (scoped)](https://img.shields.io/npm/v/%40orbitdb/core)](https://www.npmjs.com/package/@orbitdb/core) [![node-current (scoped)](https://img.shields.io/node/v/%40orbitdb/core)](https://www.npmjs.com/package/@orbitdb/core)
OrbitDB is a **serverless, distributed, peer-to-peer database**. OrbitDB uses [IPFS](https://ipfs.tech) as its data storage and [Libp2p Pubsub](https://docs.libp2p.io/concepts/pubsub/overview/) to automatically sync databases with peers. It's an eventually consistent database that uses [Merkle-CRDTs](https://arxiv.org/abs/2004.00107) for conflict-free database writes and merges, making OrbitDB an excellent choice for p2p and decentralized apps, blockchain applications and [local-first](https://www.inkandswitch.com/local-first/) web applications.


@@ -65,6 +65,8 @@ const ipfsConfig = {
const db2 = await orbitdb2.open(db1.address)
const startTime2 = new Date().getTime()
let connected = false
const onJoin = async (peerId) => (connected = true)
@@ -74,7 +76,6 @@ const ipfsConfig = {
await waitFor(() => connected, () => true)
console.log(`Iterate ${entryCount} events to replicate them`)
const startTime2 = new Date().getTime()
const all = []
for await (const { value } of db2.iterator()) {


@@ -2,7 +2,7 @@
<html lang="en">
<head>
<meta charset="utf-8">
<title>OrbitDB API - v1.0.0</title>
<title>OrbitDB API - v1.0</title>
<script src="scripts/prettify/prettify.js"> </script>
<script src="scripts/prettify/lang-css.js"> </script>


@@ -1,4 +1,4 @@
## OrbitDB API - v1.0.0
## OrbitDB API - v1.0
OrbitDB is a serverless, distributed, peer-to-peer database. OrbitDB uses IPFS
as its data storage and Libp2p Pubsub to automatically sync databases with peers. It's an eventually consistent database that uses Merkle-CRDTs for conflict-free database writes and merges, making OrbitDB an excellent choice for p2p and decentralized apps, blockchain applications and local-first web applications.


@@ -1,6 +1,6 @@
{
"name": "@orbitdb/core",
"version": "1.0.0",
"version": "1.0.1",
"description": "Distributed p2p database on IPFS",
"author": "Haad",
"license": "MIT",


@@ -79,7 +79,7 @@ const create = async (identity, id, payload, clock = null, next = [], refs = [])
entry.identity = identity.hash
entry.sig = signature
return _encodeEntry(entry)
return encode(entry)
}
/**
@@ -148,10 +148,17 @@ const isEqual = (a, b) => {
*/
const decode = async (bytes) => {
const { value } = await Block.decode({ bytes, codec, hasher })
return _encodeEntry(value)
return encode(value)
}
const _encodeEntry = async (entry) => {
/**
* Encodes an Entry and adds bytes field to it
* @param {Entry} entry
* @return {module:Log~Entry}
* @memberof module:Log~Entry
* @private
*/
const encode = async (entry) => {
const { cid, bytes } = await Block.encode({ value: entry, codec, hasher })
const hash = cid.toString(hashStringEncoding)
const clock = Clock(entry.clock.id, entry.clock.time)
@@ -167,6 +174,7 @@ export default {
create,
verify,
decode,
encode,
isEntry,
isEqual
}


@@ -35,6 +35,12 @@ const Heads = async ({ storage, heads }) => {
return newHeads
}
const remove = async (hash) => {
const currentHeads = await all()
const newHeads = currentHeads.filter(e => e.hash !== hash)
await set(newHeads)
}
const iterator = async function * () {
const it = storage.iterator()
for await (const [, bytes] of it) {
@@ -66,6 +72,7 @@ const Heads = async ({ storage, heads }) => {
put,
set,
add,
remove,
iterator,
all,
clear,


@@ -128,7 +128,6 @@ const Log = async (identity, { logId, logHeads, access, entryStorage, headsStora
const bytes = await _entries.get(hash)
if (bytes) {
const entry = await Entry.decode(bytes)
await _index.put(hash, true)
return entry
}
}
@@ -206,13 +205,13 @@ const Log = async (identity, { logId, logHeads, access, entryStorage, headsStora
if (!isLog(log)) {
throw new Error('Given argument is not an instance of Log')
}
if (_entries.merge) {
await _entries.merge(log.storage)
}
const heads = await log.heads()
for (const entry of heads) {
await joinEntry(entry)
}
if (_entries.merge) {
await _entries.merge(log.storage)
}
}
/**
@@ -222,54 +221,86 @@ const Log = async (identity, { logId, logHeads, access, entryStorage, headsStora
*
* @example
*
* await log.join(entry)
* await log.joinEntry(entry)
*
* @memberof module:Log~Log
* @instance
*/
const joinEntry = async (entry) => {
const { hash } = entry
// Check if the entry is already in the log and return early if it is
const isAlreadyInTheLog = await has(hash)
/* 1. Check if the entry is already in the log and return early if it is */
const isAlreadyInTheLog = await has(entry.hash)
if (isAlreadyInTheLog) {
return false
} else {
// Check that the entry is not an entry that hasn't been indexed
const it = traverse(await heads(), (e) => e.next.includes(hash) || entry.next.includes(e.hash))
for await (const e of it) {
if (e.next.includes(hash)) {
await _index.put(hash, true)
return false
}
}
const verifyEntry = async (entry) => {
// Check that the Entry belongs to this Log
if (entry.id !== id) {
throw new Error(`Entry's id (${entry.id}) doesn't match the log's id (${id}).`)
}
// Verify if entry is allowed to be added to the log
const canAppend = await access.canAppend(entry)
if (!canAppend) {
throw new Error(`Could not append entry:\nKey "${entry.identity}" is not allowed to write to the log`)
}
// Verify signature for the entry
const isValid = await Entry.verify(identity, entry)
if (!isValid) {
throw new Error(`Could not validate signature for entry "${entry.hash}"`)
}
}
// Check that the Entry belongs to this Log
if (entry.id !== id) {
throw new Error(`Entry's id (${entry.id}) doesn't match the log's id (${id}).`)
}
// Verify if entry is allowed to be added to the log
const canAppend = await access.canAppend(entry)
if (!canAppend) {
throw new Error(`Could not append entry:\nKey "${entry.identity}" is not allowed to write to the log`)
}
// Verify signature for the entry
const isValid = await Entry.verify(identity, entry)
if (!isValid) {
throw new Error(`Could not validate signature for entry "${hash}"`)
/* 2. Verify the entry */
await verifyEntry(entry)
/* 3. Find missing entries and connections (=path in the DAG) to the current heads */
const headsHashes = (await heads()).map(e => e.hash)
const hashesToAdd = new Set([entry.hash])
const hashesToGet = new Set([...entry.next, ...entry.refs])
const connectedHeads = new Set()
const traverseAndVerify = async () => {
const getEntries = Array.from(hashesToGet.values()).filter(has).map(get)
const entries = await Promise.all(getEntries)
for (const e of entries) {
hashesToGet.delete(e.hash)
await verifyEntry(e)
hashesToAdd.add(e.hash)
for (const hash of [...e.next, ...e.refs]) {
const isInTheLog = await has(hash)
if (!isInTheLog && !hashesToAdd.has(hash)) {
hashesToGet.add(hash)
} else if (headsHashes.includes(hash)) {
connectedHeads.add(hash)
}
}
}
if (hashesToGet.size > 0) {
await traverseAndVerify()
}
}
// Add the new entry to heads (union with current heads)
const newHeads = await _heads.add(entry)
await traverseAndVerify()
if (!newHeads) {
return false
/* 4. Add missing entries to the index (=to the log) */
for (const hash of hashesToAdd.values()) {
await _index.put(hash, true)
}
// Add the new entry to the entry storage
await _entries.put(hash, entry.bytes)
// Add the new entry to the entry index
await _index.put(hash, true)
// We've added the entry to the log
/* 5. Remove heads which the new entries are connected to */
for (const hash of connectedHeads.values()) {
await _heads.remove(hash)
}
/* 6. Add the new entry to heads (=union with current heads) */
await _heads.add(entry)
return true
}
@@ -330,7 +361,7 @@ const Log = async (identity, { logId, logHeads, access, entryStorage, headsStora
// Add the next and refs fields from the fetched entries to the next round
toFetch = nexts
.filter(e => e != null)
.filter(e => e !== null && e !== undefined)
.reduce((res, acc) => Array.from(new Set([...res, ...acc.next, ...(useRefs ? acc.refs : [])])), [])
.filter(notIndexed)
// Add the fetched entries to the stack to be processed


@@ -1,8 +1,9 @@
import { strictEqual, notStrictEqual, deepStrictEqual } from 'assert'
import { rimraf } from 'rimraf'
import { copy } from 'fs-extra'
import { Log, Identities, KeyStore } from '../../src/index.js'
import { Log, Entry, Identities, KeyStore } from '../../src/index.js'
import { Clock } from '../../src/oplog/log.js'
import { MemoryStorage } from '../../src/storage/index.js'
import testKeysPath from '../fixtures/test-keys-path.js'
const keysPath = './testkeys'
@@ -427,19 +428,549 @@ describe('Log - Join', async function () {
deepStrictEqual(values.map((e) => e.payload), expectedData)
})
it('has correct heads after joining logs', async () => {
it('doesn\'t add the given entry to the log when the given entry is already in the log', async () => {
const e1 = await log1.append('hello1')
const e2 = await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const all1 = await log2.all()
deepStrictEqual(all1.length, 3)
deepStrictEqual(all1[0], e1)
deepStrictEqual(all1[1], e2)
deepStrictEqual(all1[2], e3)
await log2.joinEntry(e1)
await log2.joinEntry(e2)
await log2.joinEntry(e3)
const all2 = await log2.all()
deepStrictEqual(all2.length, 3)
deepStrictEqual(all2[0], e1)
deepStrictEqual(all2[1], e2)
deepStrictEqual(all2[2], e3)
})
it('doesn\'t add the given entry to the heads when the given entry is already in the log', async () => {
const e1 = await log1.append('hello1')
const e2 = await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log2.heads()
deepStrictEqual(heads1, [e3])
await log2.joinEntry(e1)
await log2.joinEntry(e2)
await log2.joinEntry(e3)
const heads2 = await log2.heads()
strictEqual(heads2.length, 1)
deepStrictEqual(heads2[0].hash, e3.hash)
})
it('joinEntry returns false when the given entry is already in the log', async () => {
const e1 = await log1.append('hello1')
const e2 = await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log2.heads()
deepStrictEqual(heads1, [e3])
const r1 = await log2.joinEntry(e1)
const r2 = await log2.joinEntry(e2)
const r3 = await log2.joinEntry(e3)
deepStrictEqual([r1, r2, r3].every(e => e === false), true)
})
it('replaces the heads if the given entry is a new head and has a direct path to the old head', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log2.heads()
deepStrictEqual(heads1, [e3])
await log2.joinEntry(e1)
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e6)
const heads2 = await log2.heads()
const all = await log2.all()
strictEqual(heads2.length, 1)
deepStrictEqual(heads2[0].hash, e6.hash)
strictEqual(all.length, 6)
deepStrictEqual(all.map(e => e.payload), ['hello1', 'hello2', 'hello3', 'hello4', 'hello5', 'hello6'])
})
it('replaces a head when given entry is a new head and there are multiple current heads', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.append('helloA')
await log2.append('helloB')
const eC = await log2.append('helloC')
await log3.join(log1)
await log3.join(log2)
const heads1 = await log3.heads()
deepStrictEqual(heads1, [eC, e3])
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log1.storage.merge(log3.storage)
await log3.storage.merge(log1.storage)
await log2.storage.merge(log1.storage)
await log2.storage.merge(log3.storage)
await log3.joinEntry(e6)
const heads2 = await log3.heads()
strictEqual(heads2.length, 2)
deepStrictEqual(heads2[0].hash, e6.hash)
deepStrictEqual(heads2[1].hash, eC.hash)
})
it('replaces both heads when given entries are new heads and there are two current heads', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.append('helloA')
await log2.append('helloB')
const eC = await log2.append('helloC')
await log3.join(log1)
await log3.join(log2)
const heads1 = await log3.heads()
deepStrictEqual(heads1, [eC, e3])
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.append('helloD')
await log2.append('helloE')
const eF = await log2.append('helloF')
await log3.storage.merge(log1.storage)
await log3.storage.merge(log2.storage)
await log3.joinEntry(e6)
await log3.joinEntry(eF)
const heads2 = await log3.heads()
strictEqual(heads2.length, 2)
deepStrictEqual(heads2[0].hash, eF.hash)
deepStrictEqual(heads2[1].hash, e6.hash)
})
it('adds the given entry to the heads when forked logs have multiple heads', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log1.heads()
const heads2 = await log2.heads()
deepStrictEqual(heads1, [e3])
deepStrictEqual(heads2, [e3])
await log2.append('helloX')
const eY = await log2.append('helloY')
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e6)
const heads3 = await log2.heads()
strictEqual(heads3.length, 2)
deepStrictEqual(heads3[0].hash, e6.hash)
deepStrictEqual(heads3[1].hash, eY.hash)
})
it('replaces one head but not the other when forked logs have multiple heads', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log1.heads()
const heads2 = await log2.heads()
deepStrictEqual(heads1, [e3])
deepStrictEqual(heads2, [e3])
await log2.append('helloX')
const eY = await log2.append('helloY')
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e6)
const heads3 = await log2.heads()
strictEqual(heads3.length, 2)
deepStrictEqual(heads3[0].hash, e6.hash)
deepStrictEqual(heads3[1].hash, eY.hash)
await log1.append('hello7')
const e8 = await log1.append('hello8')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e8)
const heads4 = await log2.heads()
strictEqual(heads4.length, 2)
deepStrictEqual(heads4[0].hash, e8.hash)
deepStrictEqual(heads4[1].hash, eY.hash)
})
it('doesn\'t add the joined entry to the log when previously joined logs have forks and multiple heads', async () => {
const e1 = await log1.append('hello1')
const e2 = await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.join(log1)
const heads1 = await log1.heads()
const heads2 = await log2.heads()
deepStrictEqual(heads1, [e3])
deepStrictEqual(heads2, [e3])
await log2.append('helloX')
const eY = await log2.append('helloY')
const e4 = await log1.append('hello4')
const e5 = await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e6)
const res5 = await log2.joinEntry(e5)
const res4 = await log2.joinEntry(e4)
const res3 = await log2.joinEntry(e3)
const res2 = await log2.joinEntry(e2)
const res1 = await log2.joinEntry(e1)
strictEqual(res1, false)
strictEqual(res2, false)
strictEqual(res3, false)
strictEqual(res4, false)
strictEqual(res5, false)
const heads3 = await log2.heads()
strictEqual(heads3.length, 2)
deepStrictEqual(heads3[0].hash, e6.hash)
deepStrictEqual(heads3[1].hash, eY.hash)
})
it('replaces both heads when forked logs have multiple heads', async () => {
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.append('helloA')
await log2.append('helloB')
const eC = await log2.append('helloC')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e3)
const heads1 = await log2.heads()
strictEqual(heads1.length, 2)
deepStrictEqual(heads1[0].hash, eC.hash)
deepStrictEqual(heads1[1].hash, e3.hash)
await log1.append('hello4')
await log1.append('hello5')
const e6 = await log1.append('hello6')
await log2.append('helloD')
await log2.append('helloE')
const eF = await log2.append('helloF')
await log2.storage.merge(log1.storage)
await log2.joinEntry(e6)
const heads2 = await log2.heads()
strictEqual(heads2.length, 2)
deepStrictEqual(heads2[0].hash, eF.hash)
deepStrictEqual(heads2[1].hash, e6.hash)
})
describe('trying to join an entry with invalid preceding entries', () => {
it('throws an error if an entry belongs to another log', async () => {
const headsStorage1 = await MemoryStorage()
const log0 = await Log(testIdentity2, { logId: 'Y' })
log1 = await Log(testIdentity, { logId: 'X', headsStorage: headsStorage1 })
log2 = await Log(testIdentity2, { logId: 'X' })
const e0 = await log0.append('helloA')
await log1.storage.merge(log0.storage)
await headsStorage1.put(e0.hash, e0.bytes)
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.storage.merge(log1.storage)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Entry\'s id (Y) doesn\'t match the log\'s id (X).')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
it('throws an error if an entry doesn\'t pass access controller #1', async () => {
const canAppend = (entry) => {
if (entry.payload === 'hello1') {
return false
}
return true
}
log1 = await Log(testIdentity, { logId: 'X' })
log2 = await Log(testIdentity2, { logId: 'X', access: { canAppend } })
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.storage.merge(log1.storage)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Could not append entry:\nKey "zdpuAvqN22Rxwx5EEenq6EyeydVKPKn43MXHzauuicjLEp8jP" is not allowed to write to the log')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
it('throws an error if an entry doesn\'t pass access controller #2', async () => {
const canAppend = (entry) => {
if (entry.payload === 'hello2') {
return false
}
return true
}
log1 = await Log(testIdentity, { logId: 'X' })
log2 = await Log(testIdentity2, { logId: 'X' })
log3 = await Log(testIdentity3, { logId: 'X', access: { canAppend } })
await log1.append('hello1')
await log1.append('hello2')
const e3 = await log1.append('hello3')
await log2.append('helloA')
await log2.append('helloB')
const eC = await log2.append('helloC')
await log3.storage.merge(log1.storage)
await log3.storage.merge(log2.storage)
await log3.joinEntry(eC)
await log2.storage.merge(log1.storage)
await log2.joinEntry(e3)
await log2.append('helloD')
await log2.append('helloE')
const eF = await log2.append('helloF')
await log3.storage.merge(log1.storage)
await log3.storage.merge(log2.storage)
let err
try {
await log3.joinEntry(eF)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Could not append entry:\nKey "zdpuAvqN22Rxwx5EEenq6EyeydVKPKn43MXHzauuicjLEp8jP" is not allowed to write to the log')
deepStrictEqual((await log3.all()).map(e => e.payload), ['helloA', 'helloB', 'helloC'])
deepStrictEqual((await log3.heads()).map(e => e.payload), ['helloC'])
})
})
describe('throws an error if verification of an entry in given entry\'s history fails', async () => {
let e1, e3
let headsStorage1, headsStorage2
before(async () => {
headsStorage1 = await MemoryStorage()
headsStorage2 = await MemoryStorage()
log1 = await Log(testIdentity, { logId: 'X', entryStorage: headsStorage1 })
log2 = await Log(testIdentity2, { logId: 'X', entryStorage: headsStorage2 })
e1 = await log1.append('hello1')
await log1.append('hello2')
e3 = await log1.append('hello3')
})
it('throws an error if an entry doesn\'t have a payload field', async () => {
const e = Object.assign({}, e1)
delete e.payload
delete e.bytes
delete e.hash
const ee = await Entry.encode(e)
await headsStorage1.put(e1.hash, ee.bytes)
await log2.storage.merge(headsStorage1)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Invalid Log entry')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
it('throws an error if an entry doesn\'t have a key field', async () => {
const e = Object.assign({}, e1)
delete e.key
delete e.bytes
delete e.hash
const ee = await Entry.encode(e)
await headsStorage1.put(e1.hash, ee.bytes)
await log2.storage.merge(headsStorage1)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Entry doesn\'t have a key')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
it('throws an error if an entry doesn\'t have a signature field', async () => {
const e = Object.assign({}, e1)
delete e.sig
delete e.bytes
delete e.hash
const ee = await Entry.encode(e)
await headsStorage1.put(e1.hash, ee.bytes)
await log2.storage.merge(headsStorage1)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Entry doesn\'t have a signature')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
it('throws an error if an entry signature doesn\'t verify', async () => {
const e = Object.assign({}, e1)
e.sig = '1234567890'
delete e.bytes
delete e.hash
const ee = await Entry.encode(e)
await headsStorage1.put(e1.hash, ee.bytes)
await log2.storage.merge(headsStorage1)
let err
try {
await log2.joinEntry(e3)
} catch (e) {
err = e
}
notStrictEqual(err, undefined)
strictEqual(err.message, 'Could not validate signature for entry "zdpuAvkAJ8C46cnGdtFpcBratA5MqK7CcjqCJjjmuKuFvZir3"')
deepStrictEqual(await log2.all(), [])
deepStrictEqual(await log2.heads(), [])
})
})
})


@@ -6,7 +6,7 @@ import waitFor from './utils/wait-for.js'
import createHelia from './utils/create-helia.js'
describe('Replicating databases', function () {
this.timeout(30000)
this.timeout(10000)
let ipfs1, ipfs2
let orbitdb1, orbitdb2
@@ -61,16 +61,7 @@
let replicated = false
const onJoin = async (peerId, heads) => {
const head = (await db2.log.heads())[0]
if (head && head.clock.time === amount) {
replicated = true
}
}
const onUpdated = (entry) => {
if (entry.clock.time === amount) {
replicated = true
}
replicated = true
}
const onError = (err) => {
@@ -80,7 +71,6 @@
db2 = await orbitdb2.open(db1.address)
db2.events.on('join', onJoin)
db2.events.on('update', onUpdated)
db2.events.on('error', onError)
db1.events.on('error', onError)