From 12e9256884c458c0234b22d6f597c3039485bf05 Mon Sep 17 00:00:00 2001 From: ci-bot Date: Thu, 25 Aug 2022 09:51:45 +0000 Subject: [PATCH] Deployed 9a5fc67 to 5.x with MkDocs 1.3.1 and mike 1.1.2 --- .../dependency-injection/index.html | 21 +-- 5.x/architecture/features/cli/index.html | 6 +- .../features/http-handler/index.html | 8 +- .../features/initialization/index.html | 14 +- .../protocol/authorization/index.html | 15 ++- .../features/protocol/parsing/index.html | 12 +- .../protocol/resource-store/index.html | 25 ++-- 5.x/architecture/overview/index.html | 32 +++-- 5.x/contributing/making-changes/index.html | 10 +- 5.x/contributing/release/index.html | 12 +- 5.x/index.html | 4 +- 5.x/search/search_index.json | 2 +- 5.x/sitemap.xml | 36 ++--- 5.x/sitemap.xml.gz | Bin 426 -> 426 bytes 5.x/usage/client-credentials/index.html | 4 +- 5.x/usage/example-requests/index.html | 126 ++++++++---------- 5.x/usage/identity-provider/index.html | 16 +-- 5.x/usage/metadata/index.html | 19 +-- 5.x/usage/seeding-pods/index.html | 16 +-- 19 files changed, 192 insertions(+), 186 deletions(-) diff --git a/5.x/architecture/dependency-injection/index.html b/5.x/architecture/dependency-injection/index.html index 852ee064c..415873f3f 100644 --- a/5.x/architecture/dependency-injection/index.html +++ b/5.x/architecture/dependency-injection/index.html @@ -952,7 +952,7 @@ to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. -The advantage of this is that changing the configuration of components does not require any changes to the code, +The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file.

More information can be found in the Components.js documentation, but a summarized overview can be found below.

@@ -967,7 +967,7 @@ or they will not get a component file.

All the community server configurations can be found in the config folder. That folder also contains information about how different pre-defined configurations can be used.

-

A single component in such a configuration file might look as follows: +

A single component in such a configuration file might look as follows:

{
   "comment": "Storage used for account management.",
   "@id": "urn:solid-server:default:AccountStorage",
@@ -976,14 +976,17 @@ That folder also contains information about how different pre-defined configurat
   "baseUrl": { "@id": "urn:solid-server:default:variable:baseUrl" },
   "container": "/.internal/accounts/"
 }
-

-

With the corresponding constructor of the JsonResourceStorage class: + +

With the corresponding constructor of the JsonResourceStorage class:

public constructor(source: ResourceStore, baseUrl: string, container: string)
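To make the mapping concrete, the snippet below sketches the hand-written TypeScript equivalent of what Components.js does with the configuration above. The @solid/community-server import and the two declared values are assumptions for illustration; they stand in for the injected urn:solid-server:default:ResourceStore instance and the baseUrl variable.

```typescript
import { JsonResourceStorage, ResourceStore } from '@solid/community-server';

declare const resourceStore: ResourceStore; // urn:solid-server:default:ResourceStore
declare const baseUrl: string;              // urn:solid-server:default:variable:baseUrl

// Components.js effectively performs this constructor call for the component above.
const accountStorage = new JsonResourceStorage(resourceStore, baseUrl, '/.internal/accounts/');
```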
-

-

The important elements here are the following: -* "comment": (optional) A description of this component. -* "@id": (optional) A unique identifier of this component, which allows it to be used as parameter values in different places. -* "@type": The class name of the component. This must be a TypeScript class name that is exported via index.ts.

+ +

The important elements here are the following:

+

As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore. baseUrl is just a string, but here we use a variable that was set before calling Components.js diff --git a/5.x/architecture/features/cli/index.html b/5.x/architecture/features/cli/index.html index 93b13391b..ca4386a83 100644 --- a/5.x/architecture/features/cli/index.html +++ b/5.x/architecture/features/cli/index.html @@ -981,9 +981,9 @@ These then get converted into Components.js variables and are used to instantiat KeyExtractor("<br>KeyExtractor") AssetPathExtractor("<br>AssetPathExtractor") end -

The CliResolver (urn:solid-server-app-setup:default:CliResolver) is simply a way -to combine both the CliExtractor (urn:solid-server-app-setup:default:CliExtractor) -and ShorthandResolver (urn:solid-server-app-setup:default:ShorthandResolver) +

The CliResolver (urn:solid-server-app-setup:default:CliResolver) is simply a way +to combine both the CliExtractor (urn:solid-server-app-setup:default:CliExtractor) +and ShorthandResolver (urn:solid-server-app-setup:default:ShorthandResolver) into a single object and has no other function.
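To make the variable-resolving step concrete, the sketch below shows roughly what a single shorthand extractor contributes: it maps parsed arguments onto a Components.js variable. The function shape and the port 3000 fallback are illustrative assumptions.

```typescript
// Hypothetical shorthand extractor: derive the baseUrl variable from CLI arguments.
function resolveBaseUrl(args: { baseUrl?: string; port?: number }): Record<string, string> {
  const port = args.port ?? 3000; // assumed default port
  const baseUrl = args.baseUrl ?? `http://localhost:${port}/`;
  return { 'urn:solid-server:default:variable:baseUrl': baseUrl };
}
```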

Which arguments are supported and which Components.js variables are generated can depend on the configuration that is being used. diff --git a/5.x/architecture/features/http-handler/index.html b/5.x/architecture/features/http-handler/index.html index e0990e1c4..d777895af 100644 --- a/5.x/architecture/features/http-handler/index.html +++ b/5.x/architecture/features/http-handler/index.html @@ -1004,7 +1004,7 @@

Handling HTTP requests

-

The direction of the arrows was changed slightly here to make the graph readable. +

The direction of the arrows was changed slightly here to make the graph readable.

flowchart LR
   HttpHandler("<strong>HttpHandler</strong><br>SequenceHandler")
   HttpHandler --> HttpHandlerArgs
@@ -1032,7 +1032,7 @@
   SetupHandler --> OidcHandler
   OidcHandler --> AuthResourceHttpHandler
   AuthResourceHttpHandler --> IdentityProviderHttpHandler
-  IdentityProviderHttpHandler --> LdpHandler

+ IdentityProviderHttpHandler --> LdpHandler

The HttpHandler is responsible for handling an incoming HTTP request. The request will always first go through the Middleware, where certain required headers, such as CORS headers, will be added.

@@ -1041,11 +1041,11 @@ to find the first handler that understands the request, with the LdpHandler at the bottom being the catch-all default.
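The "first handler that understands the request" behaviour comes from the AsyncHandler pattern used throughout the server. A minimal sketch of that pattern, not the actual source, looks as follows:

```typescript
// Every handler can report whether it understands an input (canHandle) and execute it (handle).
abstract class AsyncHandler<TIn, TOut> {
  public abstract canHandle(input: TIn): Promise<void>; // throws if unsupported
  public abstract handle(input: TIn): Promise<TOut>;
}

// A WaterfallHandler delegates to the first member that accepts the input.
class WaterfallHandler<TIn, TOut> extends AsyncHandler<TIn, TOut> {
  public constructor(private readonly handlers: AsyncHandler<TIn, TOut>[]) {
    super();
  }

  public async canHandle(input: TIn): Promise<void> {
    await this.findHandler(input);
  }

  public async handle(input: TIn): Promise<TOut> {
    const handler = await this.findHandler(input);
    return handler.handle(input);
  }

  private async findHandler(input: TIn): Promise<AsyncHandler<TIn, TOut>> {
    for (const handler of this.handlers) {
      try {
        await handler.canHandle(input);
        return handler;
      } catch {
        // This handler does not understand the input; try the next one.
      }
    }
    throw new Error('No handler supports the given input.');
  }
}
```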

StaticAssetHandler

The urn:solid-server:default:StaticAssetHandler matches exact URLs to static assets which require no further logic. -An example of this is the favicon, where the /favicon.ico URL +An example of this is the favicon, where the /favicon.ico URL is directed to the favicon file at /templates/images/favicon.ico. It can also map entire folders to a specific path, such as /.well-known/css/styles/ which contains all stylesheets.

SetupHandler

-

The urn:solid-server:default:SetupHandler is responsible +

The urn:solid-server:default:SetupHandler is responsible for redirecting all requests to /setup until setup is finished, thereby ensuring that setup needs to be finished before anything else can be done on the server, and handling the actual setup request that is sent to /setup. diff --git a/5.x/architecture/features/initialization/index.html index 9da278286..bdad20586 100644 --- a/5.x/architecture/features/initialization/index.html +++ b/5.x/architecture/features/initialization/index.html @@ -1040,7 +1040,7 @@ while the WorkerInitializer will trigger for every worker thread. If your server setup is single-threaded, which is the default, there is no relevant difference between the two.

PrimaryInitializer

-

flowchart TD
+
flowchart TD
   PrimaryInitializer("<strong>PrimaryInitializer</strong><br>ProcessHandler")
   PrimaryInitializer --> PrimarySequenceInitializer("<strong>PrimarySequenceInitializer</strong><br>SequenceHandler")
   PrimarySequenceInitializer --> PrimarySequenceInitializerArgs
@@ -1054,9 +1054,9 @@ there is no relevant difference between those two.

CleanupInitializer --> PrimaryParallelInitializer PrimaryParallelInitializer --> WorkerManager
-The above is a simplification of all the initializers that are present in the PrimaryInitializer +

The above is a simplification of all the initializers that are present in the PrimaryInitializer, as there are several smaller initializers that also trigger but are less relevant here.

-

The CleanupInitializer is an initializer that cleans up anything +

The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. @@ -1065,7 +1065,7 @@ An example of this is file-based locking components which might need to remove a This makes it easier for users to add initializers by being able to append to its handlers.

The WorkerManager is responsible for setting up the worker threads, if any.
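A rough sketch of how an initializer can be restricted to either the primary or a worker process; node:cluster and the class shape are used purely for illustration and do not mirror the actual implementation.

```typescript
import cluster from 'node:cluster';

// Runs the wrapped initializer only in the matching process type.
class ProcessGatedInitializer {
  public constructor(
    private readonly primaryOnly: boolean,
    private readonly initializer: { handle: () => Promise<void> },
  ) {}

  public async handle(): Promise<void> {
    // primaryOnly = true gives PrimaryInitializer-style behaviour,
    // primaryOnly = false gives WorkerInitializer-style behaviour.
    if (cluster.isPrimary === this.primaryOnly) {
      await this.initializer.handle();
    }
  }
}
```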

WorkerInitializer

-

flowchart TD
+
flowchart TD
   WorkerInitializer("<strong>WorkerInitializer</strong><br>ProcessHandler")
   WorkerInitializer --> WorkerSequenceInitializer("<strong>WorkerSequenceInitializer</strong><br>SequenceHandler")
   WorkerSequenceInitializer --> WorkerSequenceInitializerArgs
@@ -1077,13 +1077,13 @@ This makes it easier for users to add initializers by being able to append to it
   end
 
   WorkerParallelInitializer --> ServerInitializer
-The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. +

The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer, the WorkerParallelInitializer can be used to add any custom initializers that need to run.

ServerInitializer

The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. -This is an example of a component that differs based on some of the choices made during configuration. +This is an example of a component that differs based on some of the choices made during configuration.

flowchart TD
   ServerInitializer("<strong>ServerInitializer</strong><br>ServerInitializer")
   ServerInitializer --> WebSocketServerFactory("<strong>ServerFactory</strong><br>WebSocketServerFactory")
@@ -1092,7 +1092,7 @@ This is an example of a component that differs based on some of the choices made
 
   ServerInitializer2("<strong>ServerInitializer</strong><br>ServerInitializer")
   ServerInitializer2 ---> BaseHttpServerFactory2("<strong>ServerFactory</strong><br>BaseHttpServerFactory")
-  BaseHttpServerFactory2 --> HttpHandler2("<strong>HttpHandler</strong><br><i>HttpHandler</i>")

+ BaseHttpServerFactory2 --> HttpHandler2("<strong>HttpHandler</strong><br><i>HttpHandler</i>")

Depending on whether the configurations necessary for websockets are imported, the urn:solid-server:default:ServerFactory identifier will point to a different resource. There will always be a BaseHttpServerFactory that starts the HTTP(S) server, diff --git a/5.x/architecture/features/protocol/authorization/index.html index 34b3eaab9..0f6fc76ce 100644 --- a/5.x/architecture/features/protocol/authorization/index.html +++ b/5.x/architecture/features/protocol/authorization/index.html @@ -1018,8 +1018,8 @@ making a requesting agent have multiple credentials.

direction LR DPoPWebIdExtractor("<br>DPoPWebIdExtractor") --> BearerWebIdExtractor("<br>BearerWebIdExtractor") end
-

Both of the WebID extractors make use of -the (access-token-verifier)[https://github.com/CommunitySolidServer/access-token-verifier] library +

Both of the WebID extractors make use of +the access-token-verifier library to parse incoming tokens based on the Solid-OIDC specification. Besides those, there are always the public credentials, which everyone has. All these credentials then get combined into a single union object.

@@ -1042,14 +1042,15 @@ based on the request contents.

PatchModesExtractor("<strong>PatchModesExtractor</strong><br><i>ModesExtractor</i>") --> MethodModesExtractor("<br>MethodModesExtractor") end

The IntermediateCreateExtractor is responsible for requests that try to create intermediate containers with a single request. -E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. +E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not +exist yet. This extractor makes sure that create permissions are also checked on those containers.
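For the /foo/bar/baz example, the intermediate containers can be enumerated with a small helper like the hypothetical one below (not the extractor's actual code):

```typescript
// List the intermediate containers implied by a slash-based resource path.
function intermediateContainers(path: string): string[] {
  const segments = path.split('/').filter((segment) => segment.length > 0);
  return segments.slice(0, -1)
    .map((_, index) => `/${segments.slice(0, index + 1).join('/')}/`);
}

// intermediateContainers('/foo/bar/baz') returns [ '/foo/', '/foo/bar/' ]
```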

Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example.

The only exception is PATCH requests, where the necessary modes depend on the body and the PATCH type.
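A hedged sketch of the method-to-mode mapping described above; the exact mode names and mapping used by the server may differ.

```typescript
type AccessMode = 'read' | 'append' | 'write' | 'delete';

// Derive the required access modes from the HTTP method alone.
function extractMethodModes(method: string): Set<AccessMode> {
  switch (method) {
    case 'HEAD':
    case 'GET': return new Set<AccessMode>([ 'read' ]);
    case 'POST': return new Set<AccessMode>([ 'append' ]);
    case 'PUT': return new Set<AccessMode>([ 'write' ]);
    case 'DELETE': return new Set<AccessMode>([ 'delete' ]);
    default: throw new Error(`Cannot determine modes for ${method}`);
  }
}
```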

-
flowchart TD  
+
flowchart TD
   PatchModesExtractor("<strong>PatchModesExtractor</strong><br>WaterfallHandler") --> PatchModesExtractorArgs
   subgraph PatchModesExtractorArgs[" "]
     N3PatchModesExtractor("<br>N3PatchModesExtractor")
@@ -1065,7 +1066,7 @@ Each reader returns all the information it can find based on the resources and m
 In the default configuration the following readers are combined when WebACL is enabled as authorization method.
 In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json,
 this diagram is just a class that always returns all permissions.

-
flowchart TD  
+
flowchart TD
   PermissionReader("<strong>PermissionReader</strong><br>AuxiliaryReader")
   PermissionReader --> UnionPermissionReader("<br>UnionPermissionReader")
   UnionPermissionReader --> UnionPermissionReaderArgs
@@ -1085,7 +1086,7 @@ An example of this is if the request targets the metadata of a resource.

If one reader rejects a specific mode and another allows it, the rejection takes priority.

The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server.
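The rejection-priority rule above can be sketched as a merge over partial permission maps; this is illustrative only, as the real UnionPermissionReader works on richer structures.

```typescript
// Merge partial permission results; an explicit `false` (rejection) always wins.
type Permissions = Record<string, boolean | undefined>;

function mergePermissions(results: Permissions[]): Permissions {
  const merged: Permissions = {};
  for (const result of results) {
    for (const [ mode, allowed ] of Object.entries(result)) {
      if (allowed === undefined) {
        continue; // This reader had no opinion on the mode.
      }
      merged[mode] = merged[mode] === false ? false : (merged[mode] ?? true) && allowed;
    }
  }
  return merged;
}
```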

-

The OwnerPermissionReader makes sure owners always have control access +

The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server. Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access.

@@ -1096,7 +1097,7 @@ while deleting a resource requires write permissions there.

In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor. The WebAclAuxiliaryReader makes sure this conversion happens.

-

Finally, the WebAclReader implements +

Finally, the WebAclReader implements the effective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found, this indicates a configuration error and no permissions will be granted.
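The lookup itself can be sketched as walking up the container hierarchy until an ACL resource is found; the .acl suffix and helper shape below are assumptions for illustration.

```typescript
// Walk up the container hierarchy to find the effective ACL resource.
async function findEffectiveAcl(
  path: string,
  exists: (aclPath: string) => Promise<boolean>,
): Promise<string> {
  let current = path;
  while (true) {
    const acl = `${current}.acl`; // hypothetical ACL naming convention
    if (await exists(acl)) {
      return acl;
    }
    if (current === '/') {
      // As stated above: not finding any ACL resource indicates a configuration error.
      throw new Error('No ACL resource found; this indicates a configuration error.');
    }
    // Strip the last segment to move to the parent container.
    const trimmed = current.endsWith('/') ? current.slice(0, -1) : current;
    current = trimmed.slice(0, trimmed.lastIndexOf('/') + 1);
  }
}
```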

diff --git a/5.x/architecture/features/protocol/parsing/index.html b/5.x/architecture/features/protocol/parsing/index.html index dcacabf27..1cbbc8c7d 100644 --- a/5.x/architecture/features/protocol/parsing/index.html +++ b/5.x/architecture/features/protocol/parsing/index.html @@ -1047,12 +1047,14 @@ It follows these 3 steps:

  1. Use the RequestParser to convert the incoming data into an Operation.
  2. Send the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, - or an Error in case something went wrong.
  3. + or an Error in case something went wrong.
    • In case of an error the ErrorHandler will convert the Error into a ResponseDescription.
    • +
    +
  4. Use the ResponseWriter to output the ResponseDescription as an HTTP response.
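Put together, these steps can be sketched as follows; the Handler shape is simplified for illustration and the instance names follow the diagrams in this documentation.

```typescript
type Handler = { handle: (input: any) => Promise<any> };

// Simplified flow of the steps listed above.
async function handleLdpRequest(
  request: unknown,
  response: unknown,
  { requestParser, authorizingHttpHandler, errorHandler, responseWriter }: Record<string, Handler>,
): Promise<void> {
  // 1. Parse the raw HTTP request into an Operation.
  const operation = await requestParser.handle(request);
  let result;
  try {
    // 2. Execute the Operation; success yields a Representation.
    result = await authorizingHttpHandler.handle(operation);
  } catch (error) {
    // In case of an error, convert it into a ResponseDescription, honouring the preferences.
    result = await errorHandler.handle({ error, preferences: operation.preferences });
  }
  // 3. Output the ResponseDescription as an HTTP response.
  await responseWriter.handle({ response, result });
}
```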

Parsing the request

-

flowchart TD
+
flowchart TD
   RequestParser("<strong>RequestParser</strong><br>BasicRequestParser") --> RequestParserArgs
   subgraph RequestParserArgs[" "]
     TargetExtractor("<strong>TargetExtractor</strong><br>OriginalUrlExtractor")
@@ -1063,7 +1065,7 @@ It follows these 3 steps:

end OriginalUrlExtractor --> IdentifierStrategy("<strong>IdentifierStrategy</strong><br><i>IdentifierStrategy</i>")
-The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.

+

The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.

URL

This is a single class, the OriginalUrlExtractor, but it fulfills the very important role of making sure input URLs are handled consistently.

@@ -1078,7 +1080,7 @@ This can differ depending on if the server uses subdomains or not.

The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation.

-

For example, when sending an Accept: text/turtle; q=0.9 header, +

For example, when sending an Accept: text/turtle; q=0.9 header, this will result in the preferences object { type: { 'text/turtle': 0.9 } }.
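A toy version of that conversion is sketched below; the real parser handles the full Accept grammar and all Accept-* headers.

```typescript
// Convert an Accept header value into a { type: { mediaType: weight } } preferences object.
function parseAcceptPreferences(header: string): { type: Record<string, number> } {
  const type: Record<string, number> = {};
  for (const part of header.split(',')) {
    const [ mediaType, ...params ] = part.trim().split(';');
    const q = params.map((param) => param.trim()).find((param) => param.startsWith('q='));
    type[mediaType.trim()] = q ? Number(q.slice(2)) : 1;
  }
  return { type };
}

// parseAcceptPreferences('text/turtle; q=0.9') returns { type: { 'text/turtle': 0.9 } }
```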

Headers

Several other headers can have relevant metadata, @@ -1106,7 +1108,7 @@ These will later be used to make sure the request should be aborted or not.

and if not it will throw an error.

In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription. The request preferences will be used to make sure the serialization is one that is preferred.

-

Either way we will have a ResponseDescription, +

Either way we will have a ResponseDescription, which will be sent to the BasicResponseWriter to convert into output headers, data and a status code.

To convert the metadata into headers, it uses a MetadataWriter, which functions as the reverse of the MetadataParser mentioned above: diff --git a/5.x/architecture/features/protocol/resource-store/index.html b/5.x/architecture/features/protocol/resource-store/index.html index e1129a782..203be2045 100644 --- a/5.x/architecture/features/protocol/resource-store/index.html +++ b/5.x/architecture/features/protocol/resource-store/index.html @@ -1031,15 +1031,18 @@ The default configurations come with the following stores:

and all the entries in config/storage/backend.

MonitoringStore

This store emits the events that are needed to send out notifications when resources change.

-

There are 4 different events that can be emitted: -- this.emit('changed', identifier, activity): is emitted for every resource that was changed/effected by a call to the store. - With activity being undefined or one of the available ActivityStream terms. -- this.emit(AS.Create, identifier): is emitted for every resource that was created by the call to the store. -- this.emit(AS.Update, identifier): is emitted for every resource that was updated by the call to the store. -- this.emit(AS.Delete, identifier): is emitted for every resource that was deleted by the call to the store.

+

There are 4 different events that can be emitted:

+
    +
  • this.emit('changed', identifier, activity): is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStreams terms.
  • +
  • this.emit(AS.Create, identifier): is emitted for every resource that was created by the call to the store.
  • +
  • this.emit(AS.Update, identifier): is emitted for every resource that was updated by the call to the store.
  • +
  • this.emit(AS.Delete, identifier): is emitted for every resource that was deleted by the call to the store.
  • +

A changed event will always be emitted if a resource was changed. -If the correct metadata was set by the source ResourceStore, an additional field will be sent along indicating the type of change, -and an additional corresponding event will be emitted, depending on what the change is.

+If the correct metadata was set by the source ResourceStore, an additional field will be sent along indicating the +type of change, and an additional corresponding event will be emitted, depending on what the change is.
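A usage sketch for these events is shown below; the store constant is an assumed stand-in for the actual MonitoringStore instance.

```typescript
import type { EventEmitter } from 'node:events';

declare const store: EventEmitter; // assumed MonitoringStore instance

store.on('changed', (identifier: { path: string }, activity?: string): void => {
  // `activity` is undefined or one of the ActivityStreams terms (e.g. the AS.Create value).
  console.log(`${identifier.path} changed${activity ? ` (${activity})` : ''}`);
});
```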

IndexRepresentationStore

When doing a GET request on a container /container/, the server returns the contents of /container/index.html instead, if HTML is the preferred response type. @@ -1070,15 +1073,13 @@ the SPARQL backend only accepts triples for example, that is also handled here

DataAccessorBasedStore

Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore: -POST only working on containers, -DELETE not working on non-empty containers, +POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessors to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks, as it assumes those have already been made. -This means that if a storage method needs to be supported, -only a new DataAccessor needs to be made, +This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.
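The rough shape of such a DataAccessor is sketched below; the method names are close to, but not guaranteed to match, the actual interface, and the metadata type is simplified.

```typescript
import type { Readable } from 'node:stream';

interface ResourceIdentifier { path: string }
type Metadata = Record<string, unknown>; // simplified stand-in for the metadata class

// Pure data handling; none of the Solid-specific checks live here.
interface DataAccessor {
  // Throws if this accessor cannot store the given representation
  // (e.g. non-RDF data for a SPARQL backend).
  canHandle(representation: { data: Readable }): Promise<void>;
  getData(identifier: ResourceIdentifier): Promise<Readable>;
  getMetadata(identifier: ResourceIdentifier): Promise<Metadata>;
  writeDocument(identifier: ResourceIdentifier, data: Readable, metadata: Metadata): Promise<void>;
  writeContainer(identifier: ResourceIdentifier, metadata: Metadata): Promise<void>;
  deleteResource(identifier: ResourceIdentifier): Promise<void>;
}
```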

diff --git a/5.x/architecture/overview/index.html b/5.x/architecture/overview/index.html index 34819fabe..f7fae56bb 100644 --- a/5.x/architecture/overview/index.html +++ b/5.x/architecture/overview/index.html @@ -572,6 +572,13 @@ Architecture diagrams + + +
  • + + Features + +
  • @@ -915,6 +922,13 @@ Architecture diagrams + + +
  • + + Features + +
  • @@ -932,7 +946,7 @@

    Architecture overview

    -

    The initial architecture document the project was started from can be found +

    The initial architecture document the project was started from can be found here. Many things have been added since the original inception of the project, but the core ideas within that document are still valid.

    @@ -961,17 +975,19 @@ We will use an example below to explain the formatting used throughout the archi end

    Below is a summary of how to interpret such diagrams:

      -
    • Rounded red box: component instantiated in the Components.js configuration.
    • -
    • First line:
        +
      • Rounded red box: component instantiated in the Components.js configuration.
          +
        • First line:
          • Bold text: shorthand of the instance identifier. In case the full URI is not specified, - it can usually be found by prepending urn:solid-server:default: to the shorthand identifier.
          • + it can usually be found by prepending urn:solid-server:default: to the shorthand identifier.
          • (empty): this instance has no identifier and is defined in the same place as its parent.
        • -
        • Second line:
            +
          • Second line:
            • Regular text: The class of this instance.
            • -
            • Italic text: The interface of this instance. - Will be used if the actual class is not relevant for the explanation or can differ.
            • +
            • Italic text: The interface of this instance. + Will be used if the actual class is not relevant for the explanation or can differ.
            • +
            +
        • Square grey box: the parameters of the linked instance.
        • @@ -980,7 +996,7 @@ We will use an example below to explain the formatting used throughout the archi

          For example, in the above, LdpHandler is a shorthand for the actual identifier urn:solid-server:default:LdpHandler and is an instance of ParsingHttpHandler. It has 4 parameters, one of which has no identifier but is an instance of AuthorizingHttpHandler.

          -

          Features

          +

          Features

          Below are the sections that go deeper into the features of the server and how those work.

          • How Command Line Arguments are parsed and used
          • diff --git a/5.x/contributing/making-changes/index.html b/5.x/contributing/making-changes/index.html index 595f2c29b..4c004f4cc 100644 --- a/5.x/contributing/making-changes/index.html +++ b/5.x/contributing/making-changes/index.html @@ -888,26 +888,26 @@

            Pull requests

The community server is fully written in TypeScript.

            All changes should be done through -pull requests.

            +pull requests.

            We recommend first discussing a possible solution in the relevant issue to reduce the amount of changes that will be requested.

            In case any of your changes are breaking, make sure you target the next major branch (versions/x.0.0) instead of the main branch. Breaking changes include: changing interface/class signatures, -potentially breaking external custom configurations, +potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch.

            We make use of Conventional Commits.

            -

            Don't forget to update the release notes +

            Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed.

            -

            When making changes to a pull request, +

            When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits, this way the PR can be rebased directly onto the target branch instead of needing to be squashed.

There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSDoc.

            -

            If a list of entries is alphabetically sorted, +

            If a list of entries is alphabetically sorted, such as index.ts, make sure it stays that way.

            diff --git a/5.x/contributing/release/index.html b/5.x/contributing/release/index.html index 5c527b334..b0c042f20 100644 --- a/5.x/contributing/release/index.html +++ b/5.x/contributing/release/index.html @@ -899,14 +899,18 @@
          • Verify that the RELEASE_NOTES.md are correct.
          • npm run release -- -r major
              -
            • Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0.
            • -
            • Updates the package.json, and generates the new entries in CHANGELOG.md. Commited with chore(release): Release version vx.0.0 of the npm package
            • -
            • Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md.
            • +
            • Automatically updates Components.js references to the new version. + Committed with chore(release): Update configs to vx.0.0.
            • +
            • Updates the package.json, and generates the new entries in CHANGELOG.md. + Commited with chore(release): Release version vx.0.0 of the npm package
            • +
            • Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run + and the changes to CHANGELOG.md.
          • The postrelease script will now prompt you to manually edit the CHANGELOG.md.
            • All entries are added in separate sections of the new release according to their commit prefixes.
            • -
            • Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed.
            • +
            • Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and + Documentation can be removed.
            • Press any key in your terminal when your changes are ready.
            • The postrelease script will amend the release commit, create an annotated tag and push changes to origin.
            diff --git a/5.x/index.html b/5.x/index.html index d6f6c170a..b8037731c 100644 --- a/5.x/index.html +++ b/5.x/index.html @@ -925,7 +925,7 @@ This is a good way to get started with the server and its setup.

If you want to know what is new in the latest version, you can check out the release notes for a high-level overview and information on how to migrate your configuration to the next version. -A list that includes all minor changes can be found in +A list that includes all minor changes can be found in the changelog.

            Using the server

            -

            For core developers with push access only:

            +

            For core developers with push access only:

A detailed description of what happens then can be found here.\",\"title\":\"LdpHandler\"},{\"location\":\"architecture/features/initialization/\",\"text\":\"Server initialization \u00b6 When starting the server, multiple Initializers trigger to set up everything correctly, the last one of which starts listening to the specified port. Similarly, when stopping the server, several Finalizers trigger to clean up where necessary, although the latter only happens when starting the server through code. App \u00b6 flowchart TD App(\"App
            App\") App --> AppArgs subgraph AppArgs[\" \"] Initializer(\"Initializer
            Initializer\") AppFinalizer(\"Finalizer
            Finalizer\") end App ( urn:solid-server:default:App ) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated. It's only function is to contain an Initializer and Finalizer which get called by calling start / stop respectively. Initializer \u00b6 flowchart TD Initializer(\"Initializer
            SequenceHandler\") Initializer --> InitializerArgs subgraph InitializerArgs[\" \"] direction LR LoggerInitializer(\"LoggerInitializer
            LoggerInitializer\") PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") end LoggerInitializer --> PrimaryInitializer PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger. Before this other classes will be unable to use logging. The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. Although if your server setup is single-threaded, which is the default, there is no relevant difference between those two. PrimaryInitializer \u00b6 flowchart TD PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") PrimaryInitializer --> PrimarySequenceInitializer(\"PrimarySequenceInitializer
            SequenceHandler\") PrimarySequenceInitializer --> PrimarySequenceInitializerArgs subgraph PrimarySequenceInitializerArgs[\" \"] direction LR CleanupInitializer(\"CleanupInitializer
            SequenceHandler\") PrimaryParallelInitializer(\"PrimaryParallelInitializer
            ParallelHandler\") WorkerManager(\"WorkerManager
            WorkerManager\") end CleanupInitializer --> PrimaryParallelInitializer PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here. The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files. The PrimaryParallelInitializer can be used to add any initializers to that have to happen in the primary process. This makes it easier for users to add initializers by being able to append to its handlers. The WorkerManager is responsible for setting up the worker threads, if any. WorkerInitializer \u00b6 flowchart TD WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") WorkerInitializer --> WorkerSequenceInitializer(\"WorkerSequenceInitializer
            SequenceHandler\") WorkerSequenceInitializer --> WorkerSequenceInitializerArgs subgraph WorkerSequenceInitializerArgs[\" \"] direction LR WorkerParallelInitializer(\"WorkerParallelInitializer
            ParallelHandler\") ServerInitializer(\"ServerInitializer
            ServerInitializer\") end WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer , the WorkerParallelInitializer can be used to add any custom initializers that need to run. ServerInitializer \u00b6 The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. This is an example of a component that differs based on some of the choices made during configuration. flowchart TD ServerInitializer(\"ServerInitializer
            ServerInitializer\") ServerInitializer --> WebSocketServerFactory(\"ServerFactory
            WebSocketServerFactory\") WebSocketServerFactory --> BaseHttpServerFactory(\"
            BaseHttpServerFactory\") BaseHttpServerFactory --> HttpHandler(\"HttpHandler
            HttpHandler\") ServerInitializer2(\"ServerInitializer
            ServerInitializer\") ServerInitializer2 ---> BaseHttpServerFactory2(\"ServerFactory
            BaseHttpServerFactory\") BaseHttpServerFactory2 --> HttpHandler2(\"HttpHandler
            HttpHandler\") Depending on if the configurations necessary for websockets are imported or not, the urn:solid-server:default:ServerFactory identifier will point to a different resource. There will always be a BaseHttpServerFactory that starts the HTTP(S) server, but there might also be a WebSocketServerFactory wrapped around it to handle websocket support. Although not indicated here, the parameters for initializing the BaseHttpServerFactory might also differ in case an HTTPS configuration is imported. The HttpHandler it takes as input is responsible for how HTTP requests get resolved .","title":"Server initialization"},{"location":"architecture/features/initialization/#server-initialization","text":"When starting the server, multiple Initializers trigger to set up everything correctly, the last one of which starts listening to the specified port. Similarly, when stopping the server several Finalizers trigger to clean up where necessary, although the latter only happens when starting the server through code.","title":"Server initialization"},{"location":"architecture/features/initialization/#app","text":"flowchart TD App(\"App
            App\") App --> AppArgs subgraph AppArgs[\" \"] Initializer(\"Initializer
            Initializer\") AppFinalizer(\"Finalizer
            Finalizer\") end App ( urn:solid-server:default:App ) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated. It's only function is to contain an Initializer and Finalizer which get called by calling start / stop respectively.","title":"App"},{"location":"architecture/features/initialization/#initializer","text":"flowchart TD Initializer(\"Initializer
            SequenceHandler\") Initializer --> InitializerArgs subgraph InitializerArgs[\" \"] direction LR LoggerInitializer(\"LoggerInitializer
            LoggerInitializer\") PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") end LoggerInitializer --> PrimaryInitializer PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger. Before this other classes will be unable to use logging. The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. Although if your server setup is single-threaded, which is the default, there is no relevant difference between those two.","title":"Initializer"},{"location":"architecture/features/initialization/#primaryinitializer","text":"flowchart TD PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") PrimaryInitializer --> PrimarySequenceInitializer(\"PrimarySequenceInitializer
            SequenceHandler\") PrimarySequenceInitializer --> PrimarySequenceInitializerArgs subgraph PrimarySequenceInitializerArgs[\" \"] direction LR CleanupInitializer(\"CleanupInitializer
            SequenceHandler\") PrimaryParallelInitializer(\"PrimaryParallelInitializer
            ParallelHandler\") WorkerManager(\"WorkerManager
            WorkerManager\") end CleanupInitializer --> PrimaryParallelInitializer PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here. The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files. The PrimaryParallelInitializer can be used to add any initializers to that have to happen in the primary process. This makes it easier for users to add initializers by being able to append to its handlers. The WorkerManager is responsible for setting up the worker threads, if any.","title":"PrimaryInitializer"},{"location":"architecture/features/initialization/#workerinitializer","text":"flowchart TD WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") WorkerInitializer --> WorkerSequenceInitializer(\"WorkerSequenceInitializer
            SequenceHandler\") WorkerSequenceInitializer --> WorkerSequenceInitializerArgs subgraph WorkerSequenceInitializerArgs[\" \"] direction LR WorkerParallelInitializer(\"WorkerParallelInitializer
            ParallelHandler\") ServerInitializer(\"ServerInitializer
            ServerInitializer\") end WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer , the WorkerParallelInitializer can be used to add any custom initializers that need to run.","title":"WorkerInitializer"},{"location":"architecture/features/initialization/#serverinitializer","text":"The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. This is an example of a component that differs based on some of the choices made during configuration. flowchart TD ServerInitializer(\"ServerInitializer
            ServerInitializer\") ServerInitializer --> WebSocketServerFactory(\"ServerFactory
            WebSocketServerFactory\") WebSocketServerFactory --> BaseHttpServerFactory(\"
            BaseHttpServerFactory\") BaseHttpServerFactory --> HttpHandler(\"HttpHandler
            HttpHandler\") ServerInitializer2(\"ServerInitializer
            ServerInitializer\") ServerInitializer2 ---> BaseHttpServerFactory2(\"ServerFactory
            BaseHttpServerFactory\") BaseHttpServerFactory2 --> HttpHandler2(\"HttpHandler
            HttpHandler\") Depending on if the configurations necessary for websockets are imported or not, the urn:solid-server:default:ServerFactory identifier will point to a different resource. There will always be a BaseHttpServerFactory that starts the HTTP(S) server, but there might also be a WebSocketServerFactory wrapped around it to handle websocket support. Although not indicated here, the parameters for initializing the BaseHttpServerFactory might also differ in case an HTTPS configuration is imported. The HttpHandler it takes as input is responsible for how HTTP requests get resolved .","title":"ServerInitializer"},{"location":"architecture/features/protocol/authorization/","text":"Authorization \u00b6 flowchart TD AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs subgraph AuthorizingHttpHandlerArgs[\" \"] CredentialsExtractor(\"CredentialsExtractor
            CredentialsExtractor\") ModesExtractor(\"ModesExtractor
            ModesExtractor\") PermissionReader(\"PermissionReader
            PermissionReader\") Authorizer(\"Authorizer
            PermissionBasedAuthorizer\") OperationHttpHandler(\"
            OperationHttpHandler\") end Authorization is usually handled by the AuthorizingHttpHandler , which receives a parsed HTTP request in the form of an Operation . It goes through the following steps: A CredentialsExtractor identifies the credentials of the agent making the call. A ModesExtractor finds which access modes are needed for which resources. A PermissionReader determines the permissions the agent has on the targeted resources. The above results are compared in an Authorizer . If the request is allowed, call the OperationHttpHandler , otherwise throw an error. Authentication \u00b6 There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The diagram below shows the default configuration if authentication is enabled. flowchart TD CredentialsExtractor(\"CredentialsExtractor
            UnionCredentialsExtractor\") CredentialsExtractor --> CredentialsExtractorArgs subgraph CredentialsExtractorArgs[\" \"] WaterfallHandler(\"
            WaterfallHandler\") PublicCredentialsExtractor(\"
            PublicCredentialsExtractor\") end WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction LR DPoPWebIdExtractor(\"
            DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"
            BearerWebIdExtractor\") end Both of the WebID extractors make use of the ( access-token-verifier )[https://github.com/CommunitySolidServer/access-token-verifier] library to parse incoming tokens based on the Solid-OIDC specification . Besides those there are always the public credentials, which everyone has. All these credentials then get combined into a single union object. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values. There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports. Modes extraction \u00b6 Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor determine which modes will be necessary and for which resources, based on the request contents. flowchart TD ModesExtractor(\"ModesExtractor
            IntermediateCreateExtractor\") ModesExtractor --> HttpModesExtractor(\"HttpModesExtractor
            WaterfallHandler\") HttpModesExtractor --> HttpModesExtractorArgs subgraph HttpModesExtractorArgs[\" \"] direction LR PatchModesExtractor(\"PatchModesExtractor
            ModesExtractor\") --> MethodModesExtractor(\"
            MethodModesExtractor\") end The IntermediateCreateExtractor is responsible if requests try to create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers. Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example. The only exception are PATCH requests, where the necessary modes depend on the body and the PATCH type. flowchart TD PatchModesExtractor(\"PatchModesExtractor
            WaterfallHandler\") --> PatchModesExtractorArgs subgraph PatchModesExtractorArgs[\" \"] N3PatchModesExtractor(\"
            N3PatchModesExtractor\") SparqlUpdateModesExtractor(\"
            SparqlUpdateModesExtractor\") end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires. Permission reading \u00b6 PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as optimization as we only need to know if we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In the default configuration the following readers are combined when WebACL is enabled as authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json , this diagram is just a class that always returns all permissions. flowchart TD PermissionReader(\"PermissionReader
            AuxiliaryReader\") PermissionReader --> UnionPermissionReader(\"
            UnionPermissionReader\") UnionPermissionReader --> UnionPermissionReaderArgs subgraph UnionPermissionReaderArgs[\" \"] PathBasedReader(\"PathBasedReader
            PathBasedReader\") OwnerPermissionReader(\"OwnerPermissionReader
            OwnerPermissionReader\") WrappedWebAclReader(\"WrappedWebAclReader
            ParentContainerReader\") end WrappedWebAclReader --> WebAclAuxiliaryReader(\"WebAclAuxiliaryReader
            WebAclAuxiliaryReader\") WebAclAuxiliaryReader --> WebAclReader(\"WebAclReader
            WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the requests targets the metadata of a resource. The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority. The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server. The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server . Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access. The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there. In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor . The WebAclAuxiliaryReader makes sure this conversion happens. Finally, the WebAclReader implements the efffective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found this indicates a configuration error and no permissions will be granted. Authorization \u00b6 All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on if the agent was logged in or not.","title":"Authorization"},{"location":"architecture/features/protocol/authorization/#authorization","text":"flowchart TD AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs subgraph AuthorizingHttpHandlerArgs[\" \"] CredentialsExtractor(\"CredentialsExtractor
            CredentialsExtractor\") ModesExtractor(\"ModesExtractor
            ModesExtractor\") PermissionReader(\"PermissionReader
            PermissionReader\") Authorizer(\"Authorizer
            PermissionBasedAuthorizer\") OperationHttpHandler(\"
            OperationHttpHandler\") end Authorization is usually handled by the AuthorizingHttpHandler , which receives a parsed HTTP request in the form of an Operation . It goes through the following steps: A CredentialsExtractor identifies the credentials of the agent making the call. A ModesExtractor finds which access modes are needed for which resources. A PermissionReader determines the permissions the agent has on the targeted resources. The above results are compared in an Authorizer . If the request is allowed, call the OperationHttpHandler , otherwise throw an error.","title":"Authorization"},{"location":"architecture/features/protocol/authorization/#authentication","text":"There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The diagram below shows the default configuration if authentication is enabled. flowchart TD CredentialsExtractor(\"CredentialsExtractor
            UnionCredentialsExtractor\") CredentialsExtractor --> CredentialsExtractorArgs subgraph CredentialsExtractorArgs[\" \"] WaterfallHandler(\"
            WaterfallHandler\") PublicCredentialsExtractor(\"
            PublicCredentialsExtractor\") end WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction LR DPoPWebIdExtractor(\"
            DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"
            BearerWebIdExtractor\") end Both of the WebID extractors make use of the ( access-token-verifier )[https://github.com/CommunitySolidServer/access-token-verifier] library to parse incoming tokens based on the Solid-OIDC specification . Besides those there are always the public credentials, which everyone has. All these credentials then get combined into a single union object. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values. There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports.","title":"Authentication"},{"location":"architecture/features/protocol/authorization/#modes-extraction","text":"Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor determine which modes will be necessary and for which resources, based on the request contents. flowchart TD ModesExtractor(\"ModesExtractor
            IntermediateCreateExtractor\") ModesExtractor --> HttpModesExtractor(\"HttpModesExtractor
            WaterfallHandler\") HttpModesExtractor --> HttpModesExtractorArgs subgraph HttpModesExtractorArgs[\" \"] direction LR PatchModesExtractor(\"PatchModesExtractor
            ModesExtractor\") --> MethodModesExtractor(\"
            MethodModesExtractor\") end The IntermediateCreateExtractor is responsible if requests try to create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers. Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example. The only exception are PATCH requests, where the necessary modes depend on the body and the PATCH type. flowchart TD PatchModesExtractor(\"PatchModesExtractor
            WaterfallHandler\") --> PatchModesExtractorArgs subgraph PatchModesExtractorArgs[\" \"] N3PatchModesExtractor(\"
            N3PatchModesExtractor\") SparqlUpdateModesExtractor(\"
            SparqlUpdateModesExtractor\") end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires.","title":"Modes extraction"},{"location":"architecture/features/protocol/authorization/#permission-reading","text":"PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as optimization as we only need to know if we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In the default configuration the following readers are combined when WebACL is enabled as authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json , this diagram is just a class that always returns all permissions. flowchart TD PermissionReader(\"PermissionReader
            AuxiliaryReader\") PermissionReader --> UnionPermissionReader(\"
            UnionPermissionReader\") UnionPermissionReader --> UnionPermissionReaderArgs subgraph UnionPermissionReaderArgs[\" \"] PathBasedReader(\"PathBasedReader
            PathBasedReader\") OwnerPermissionReader(\"OwnerPermissionReader
            OwnerPermissionReader\") WrappedWebAclReader(\"WrappedWebAclReader
            ParentContainerReader\") end WrappedWebAclReader --> WebAclAuxiliaryReader(\"WebAclAuxiliaryReader
            WebAclAuxiliaryReader\") WebAclAuxiliaryReader --> WebAclReader(\"WebAclReader
            WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the requests targets the metadata of a resource. The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority. The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server. The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server . Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access. The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there. In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor . The WebAclAuxiliaryReader makes sure this conversion happens. Finally, the WebAclReader implements the efffective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found this indicates a configuration error and no permissions will be granted.","title":"Permission reading"},{"location":"architecture/features/protocol/authorization/#authorization_1","text":"All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on if the agent was logged in or not.","title":"Authorization"},{"location":"architecture/features/protocol/overview/","text":"Solid protocol \u00b6 The LdpHandler , named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification , e.g. a POST request to create a new resource. Below is a simplified view of how these handlers are linked. flowchart LR LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> OperationHandler(\"OperationHandler
            OperationHandler\") OperationHandler --> ResourceStore(\"ResourceStore
            ResourceStore\") A standard request would go through the following steps: The ParsingHttphandler parses the HTTP request into a manageable format, both body and metadata such as headers. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource. The OperationHandler determines which action is required based on the HTTP method. The ResourceStore does all the relevant data work. The ParsingHttphandler eventually receives the response data, or an error, and handles the output. Below are sections that go deeper into the specific steps. How input gets parsed and output gets returned How authentication and authorization work What the ResourceStore looks like","title":"Overview"},{"location":"architecture/features/protocol/overview/#solid-protocol","text":"The LdpHandler , named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification , e.g. a POST request to create a new resource. Below is a simplified view of how these handlers are linked. flowchart LR LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> OperationHandler(\"OperationHandler
            OperationHandler\") OperationHandler --> ResourceStore(\"ResourceStore
            ResourceStore\") A standard request would go through the following steps: The ParsingHttphandler parses the HTTP request into a manageable format, both body and metadata such as headers. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource. The OperationHandler determines which action is required based on the HTTP method. The ResourceStore does all the relevant data work. The ParsingHttphandler eventually receives the response data, or an error, and handles the output. Below are sections that go deeper into the specific steps. How input gets parsed and output gets returned How authentication and authorization work What the ResourceStore looks like","title":"Solid protocol"},{"location":"architecture/features/protocol/parsing/","text":"Parsing and responding to HTTP requests \u00b6 flowchart TD ParsingHttphandler(\"
            ParsingHttphandler\") ParsingHttphandler --> ParsingHttphandlerArgs subgraph ParsingHttphandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end A ParsingHttpHandler handles both the parsing of the input data, and the serializing of the output data. It follows these 3 steps: Use the RequestParser to convert the incoming data into an Operation . Send the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, or an Error in case something went wrong. In case of an error the ErrorHandler will convert the Error into a ResponseDescription . Use the ResponseWriter to output the ResponseDescription as an HTTP response. Parsing the request \u00b6 flowchart TD RequestParser(\"RequestParser
            BasicRequestParser\") --> RequestParserArgs subgraph RequestParserArgs[\" \"] TargetExtractor(\"TargetExtractor
            OriginalUrlExtractor\") PreferenceParser(\"PreferenceParser
            AcceptPreferenceParser\") MetadataParser(\"MetadataParser
            MetadataParser\") BodyParser(\"
            Bodyparser\") Conditions(\"
            BasicConditionsParser\") end OriginalUrlExtractor --> IdentifierStrategy(\"IdentifierStrategy
            IdentifierStrategy\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part. URL \u00b6 This is a single class, the OriginalUrlExtractor , but fulfills the very important role of making sure input URLs are handled consistently. The query parameters will always be completely removed from the URL. There is also an algorithm to make sure all URLs have a \"canonical\" version as for example both & and %26 can be interpreted in the same way. Specifically all special characters will be encoded into their percent encoding. The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on if the server uses subdomains or not. The resulting identifier will be stored in the target field of an Operation object. Preferences \u00b6 The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation. For example, when sending an Accept: text/turtle; q=0.9 header, this wil result in the preferences object { type: { 'text/turtle': 0.9 } } . Headers \u00b6 Several other headers can have relevant metadata, such as the Content-Type header, or the Link: ; rel=\"type\" header which is used to indicate to the server that a request intends to create a container. Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation . The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header. Body \u00b6 In case of most requests, the input data stream is used directly in the body field of the Operation , with a few minor checks to make sure the HTTP specification is being followed. In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well. Conditions \u00b6 The BasicConditionsParser parses everything related to conditions headers, such as if-none-match or if-modified-since , and stores the relevant information in the conditions field of the Operation . These will later be used to make sure the request should be aborted or not. Sending the response \u00b6 In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription , and if not it will throw an error. In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription . The request preferences will be used to make sure the serialization is one that is preferred. Either way we will have a ResponseDescription , which will be sent to the BasicResponseWriter to convert into output headers, data and a status code. To convert the metadata into headers, it uses a MetadataWriter , which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.","title":"Parsing"},{"location":"architecture/features/protocol/parsing/#parsing-and-responding-to-http-requests","text":"flowchart TD ParsingHttphandler(\"
            ParsingHttphandler\") ParsingHttphandler --> ParsingHttphandlerArgs subgraph ParsingHttphandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end A ParsingHttpHandler handles both the parsing of the input data, and the serializing of the output data. It follows these 3 steps: Use the RequestParser to convert the incoming data into an Operation . Send the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, or an Error in case something went wrong. In case of an error the ErrorHandler will convert the Error into a ResponseDescription . Use the ResponseWriter to output the ResponseDescription as an HTTP response.","title":"Parsing and responding to HTTP requests"},{"location":"architecture/features/protocol/parsing/#parsing-the-request","text":"flowchart TD RequestParser(\"RequestParser
            BasicRequestParser\") --> RequestParserArgs subgraph RequestParserArgs[\" \"] TargetExtractor(\"TargetExtractor
            OriginalUrlExtractor\") PreferenceParser(\"PreferenceParser
            AcceptPreferenceParser\") MetadataParser(\"MetadataParser
            MetadataParser\") BodyParser(\"
            Bodyparser\") Conditions(\"
            BasicConditionsParser\") end OriginalUrlExtractor --> IdentifierStrategy(\"IdentifierStrategy
            IdentifierStrategy\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.","title":"Parsing the request"},{"location":"architecture/features/protocol/parsing/#url","text":"This is a single class, the OriginalUrlExtractor , but fulfills the very important role of making sure input URLs are handled consistently. The query parameters will always be completely removed from the URL. There is also an algorithm to make sure all URLs have a \"canonical\" version as for example both & and %26 can be interpreted in the same way. Specifically all special characters will be encoded into their percent encoding. The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on if the server uses subdomains or not. The resulting identifier will be stored in the target field of an Operation object.","title":"URL"},{"location":"architecture/features/protocol/parsing/#preferences","text":"The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation. For example, when sending an Accept: text/turtle; q=0.9 header, this wil result in the preferences object { type: { 'text/turtle': 0.9 } } .","title":"Preferences"},{"location":"architecture/features/protocol/parsing/#headers","text":"Several other headers can have relevant metadata, such as the Content-Type header, or the Link: ; rel=\"type\" header which is used to indicate to the server that a request intends to create a container. Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation . The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header.","title":"Headers"},{"location":"architecture/features/protocol/parsing/#body","text":"In case of most requests, the input data stream is used directly in the body field of the Operation , with a few minor checks to make sure the HTTP specification is being followed. In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well.","title":"Body"},{"location":"architecture/features/protocol/parsing/#conditions","text":"The BasicConditionsParser parses everything related to conditions headers, such as if-none-match or if-modified-since , and stores the relevant information in the conditions field of the Operation . These will later be used to make sure the request should be aborted or not.","title":"Conditions"},{"location":"architecture/features/protocol/parsing/#sending-the-response","text":"In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription , and if not it will throw an error. In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription . The request preferences will be used to make sure the serialization is one that is preferred. Either way we will have a ResponseDescription , which will be sent to the BasicResponseWriter to convert into output headers, data and a status code. 
To convert the metadata into headers , it uses a MetadataWriter , which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.\",\"title\":\"Sending the response\"},{\"location\":\"architecture/features/protocol/resource-store/\",\"text\":\"Resource store \u00b6 The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend . MonitoringStore \u00b6 This store emits the events that are necessary to emit notifications when resources change. There are 4 different events that can be emitted: - this.emit('changed', identifier, activity) : is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStream terms. - this.emit(AS.Create, identifier) : is emitted for every resource that was created by the call to the store. - this.emit(AS.Update, identifier) : is emitted for every resource that was updated by the call to the store. - this.emit(AS.Delete, identifier) : is emitted for every resource that was deleted by the call to the store. A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore , an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is. IndexRepresentationStore \u00b6 When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types. LockingResourceStore \u00b6 To prevent data corruption, the server locks resources when being targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress. PatchingStore \u00b6 PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since it involves both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store. 
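To make that fallback concrete, the sketch below shows the GET/transform/PUT flow in simplified TypeScript. It is only an illustration under stated assumptions: the interfaces here are hypothetical stand-ins rather than the actual CSS type definitions, and the real PatchingStore additionally deals with streams, metadata, and error handling that are omitted here.

```typescript
// Hypothetical, simplified stand-ins for the real CSS interfaces.
interface Representation {
  data: string;
  contentType: string;
}

interface Patch {
  // Applies the transformation described by the PATCH body.
  apply: (input: Representation) => Representation;
}

interface SimpleStore {
  getRepresentation(id: string): Promise<Representation>;
  setRepresentation(id: string, representation: Representation): Promise<void>;
  // Optional: only present when the backend supports PATCH natively.
  modifyResource?(id: string, patch: Patch): Promise<void>;
}

class PatchingStoreSketch {
  public constructor(private readonly source: SimpleStore) {}

  public async modifyResource(id: string, patch: Patch): Promise<void> {
    // If the next store in the chain supports PATCH, simply delegate to it.
    if (this.source.modifyResource) {
      return this.source.modifyResource(id, patch);
    }
    // Otherwise: GET the current representation, apply the transformation
    // in memory, and PUT the result back, as described above.
    const current = await this.source.getRepresentation(id);
    const patched = patch.apply(current);
    return this.source.setRepresentation(id, patched);
  }
}
```

The design benefit is that a new backend only has to implement plain reads and writes to gain PATCH support, at the cost of the extra read round-trip.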
RepresentationConvertingStore \u00b6 This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here. DataAccessorBasedStore \u00b6 Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.\",\"title\":\"Resource Store\"},{\"location\":\"architecture/features/protocol/resource-store/#resource-store\",\"text\":\"The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend .\",\"title\":\"Resource store\"},{\"location\":\"architecture/features/protocol/resource-store/#monitoringstore\",\"text\":\"This store emits the events that are necessary to emit notifications when resources change. There are 4 different events that can be emitted: - this.emit('changed', identifier, activity) : is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStream terms. - this.emit(AS.Create, identifier) : is emitted for every resource that was created by the call to the store. - this.emit(AS.Update, identifier) : is emitted for every resource that was updated by the call to the store. - this.emit(AS.Delete, identifier) : is emitted for every resource that was deleted by the call to the store. A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore , an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is.\",\"title\":\"MonitoringStore\"},{\"location\":\"architecture/features/protocol/resource-store/#indexrepresentationstore\",\"text\":\"When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. 
All these values are the defaults and can be configured for other resources and media types.\",\"title\":\"IndexRepresentationStore\"},{\"location\":\"architecture/features/protocol/resource-store/#lockingresourcestore\",\"text\":\"To prevent data corruption, the server locks resources when being targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress.\",\"title\":\"LockingResourceStore\"},{\"location\":\"architecture/features/protocol/resource-store/#patchingstore\",\"text\":\"PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since it involves both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store.\",\"title\":\"PatchingStore\"},{\"location\":\"architecture/features/protocol/resource-store/#representationconvertingstore\",\"text\":\"This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here.\",\"title\":\"RepresentationConvertingStore\"},{\"location\":\"architecture/features/protocol/resource-store/#dataaccessorbasedstore\",\"text\":\"Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.\",\"title\":\"DataAccessorBasedStore\"},{\"location\":\"contributing/making-changes/\",\"text\":\"Pull requests \u00b6 The community server is fully written in TypeScript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the amount of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. 
Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.\",\"title\":\"Pull requests\"},{\"location\":\"contributing/making-changes/#pull-requests\",\"text\":\"The community server is fully written in TypeScript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the amount of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.\",\"title\":\"Pull requests\"},{\"location\":\"contributing/release/\",\"text\":\"Releasing a new version \u00b6 This is only relevant if you are a developer with push access responsible for doing a new release. Steps to follow: Merge main into versions/x.0.0 . Verify if there are issues when upgrading an existing installation to the new version. Can the data still be accessed? Does authentication still work? Is there an issue upgrading any of the dependent repositories (see below for links)? None of the above has to be blocking per se, but should be noted in the release notes if relevant. Verify that the RELEASE_NOTES.md is correct. npm run release -- -r major Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0 . Updates the package.json , and generates the new entries in CHANGELOG.md . Committed with chore(release): Release version vx.0.0 of the npm package Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md . The postrelease script will now prompt you to manually edit the CHANGELOG.md . All entries are added in separate sections of the new release according to their commit prefixes. Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed. Press any key in your terminal when your changes are ready. 
The postrelease script will amend the release commit, create an annotated tag, and push changes to origin. Merge versions/x.0.0 into main and push. Do a GitHub release. npm publish Check if there is a next tag that needs to be replaced. Rename the versions/x.0.0 branch to the next version. Update .github/workflows/schedule.yml and .github/dependabot.yml to point at the new branch. Potentially upgrade dependent repositories: Recipes at https://github.com/CommunitySolidServer/recipes/ Tutorials at https://github.com/CommunitySolidServer/tutorials/ Changes when doing a pre-release of a major version: Version with npm run release -- -r major --prerelease alpha Do not merge versions/x.0.0 into main . Publish with npm publish --tag next . Do not update the branch or anything related.","title":"Releases"},{"location":"contributing/release/#releasing-a-new-version","text":"This is only relevant if you are a developer with push access responsible for doing a new release. Steps to follow: Merge main into versions/x.0.0 . Verify if there are issues when upgrading an existing installation to the new version. Can the data still be accessed? Does authentication still work? Is there an issue upgrading any of the dependent repositories (see below for links)? None of the above has to be blocking per se, but should be noted in the release notes if relevant. Verify that the RELEASE_NOTES.md is correct. npm run release -- -r major Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0 . Updates the package.json , and generates the new entries in CHANGELOG.md . Committed with chore(release): Release version vx.0.0 of the npm package Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md . The postrelease script will now prompt you to manually edit the CHANGELOG.md . All entries are added in separate sections of the new release according to their commit prefixes. Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed. Press any key in your terminal when your changes are ready. The postrelease script will amend the release commit, create an annotated tag, and push changes to origin. Merge versions/x.0.0 into main and push. Do a GitHub release. npm publish Check if there is a next tag that needs to be replaced. Rename the versions/x.0.0 branch to the next version. Update .github/workflows/schedule.yml and .github/dependabot.yml to point at the new branch. Potentially upgrade dependent repositories: Recipes at https://github.com/CommunitySolidServer/recipes/ Tutorials at https://github.com/CommunitySolidServer/tutorials/ Changes when doing a pre-release of a major version: Version with npm run release -- -r major --prerelease alpha Do not merge versions/x.0.0 into main . Publish with npm publish --tag next . Do not update the branch or anything related.","title":"Releasing a new version"},{"location":"usage/client-credentials/","text":"Automating authentication with Client Credentials \u00b6 One potential issue for scripts and other applications is that they require user interaction to log in and authenticate. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section , users can request a token that apps can use to authenticate without user input. 
All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function. Generating a token \u00b6 The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json (); Requesting an Access token \u00b6 The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed to sign the DPoP header. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token : accessToken } = await response . json (); Using the Access token to make an authenticated request \u00b6 Once you have an Access token, you can use it for authenticated requests until it expires. 
import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' ); Deleting a token \u00b6 You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with a JSON object containing your email and password as body. The response will be a JSON list containing all your tokens. Deleting a token also requires doing a POST to the same URL, but adding a delete key to the JSON input object whose value is the ID of the token you want to remove.","title":"Client credentials"},{"location":"usage/client-credentials/#automating-authentication-with-client-credentials","text":"One potential issue for scripts and other applications is that they require user interaction to log in and authenticate. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section , users can request a token that apps can use to authenticate without user input. All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function.","title":"Automating authentication with Client Credentials"},{"location":"usage/client-credentials/#generating-a-token","text":"The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json ();","title":"Generating a token"},{"location":"usage/client-credentials/#requesting-an-access-token","text":"The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. 
import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed to sign the DPoP header. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token : accessToken } = await response . json ();","title":"Requesting an Access token"},{"location":"usage/client-credentials/#using-the-access-token-to-make-an-authenticated-request","text":"Once you have an Access token, you can use it for authenticated requests until it expires. import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' );","title":"Using the Access token to make an authenticated request"},{"location":"usage/client-credentials/#deleting-a-token","text":"You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with a JSON object containing your email and password as body. The response will be a JSON list containing all your tokens. Deleting a token also requires doing a POST to the same URL, but adding a delete key to the JSON input object whose value is the ID of the token you want to remove.","title":"Deleting a token"},{"location":"usage/example-requests/","text":"Interacting with the server \u00b6 PUT : Creating resources for a given URL \u00b6 Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/myfile.ttl POST : Creating resources at a generated URL \u00b6 Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource. 
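The curl examples above can also be scripted. Purely as an illustrative sketch (not part of the official examples; it assumes the server runs at http://localhost:3000/ like the rest of this page, and uses node-fetch as the other snippets in this documentation do), reading the Location header after a POST could look like this:

```ts
// Sketch: create a resource via POST and read the URL the server generated.
import fetch from 'node-fetch';

const response = await fetch('http://localhost:3000/', {
  method: 'POST',
  headers: { 'content-type': 'text/plain' },
  body: 'abc',
});
// The Location header contains the URL of the newly created resource.
console.log(response.headers.get('location'));
```

As with the other snippets, the top-level await has to be wrapped in an async function on engines that do not support it.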
GET : Retrieving resources \u00b6 Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl DELETE : Deleting resources \u00b6 curl -X DELETE http://localhost:3000/myfile.txt PATCH : Modifying resources \u00b6 Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: . _:rename a solid:InsertDeletePatch; solid:inserts { . }.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { }\" \\ http://localhost:3000/myfile.ttl HEAD : Retrieve resources headers \u00b6 curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt OPTIONS : Retrieve resources communication options \u00b6 curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"Example request"},{"location":"usage/example-requests/#interacting-with-the-server","text":"","title":"Interacting with the server"},{"location":"usage/example-requests/#put-creating-resources-for-a-given-url","text":"Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/myfile.ttl","title":"PUT: Creating resources for a given URL"},{"location":"usage/example-requests/#post-creating-resources-at-a-generated-url","text":"Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource.","title":"POST: Creating resources at a generated URL"},{"location":"usage/example-requests/#get-retrieving-resources","text":"Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl","title":"GET: Retrieving resources"},{"location":"usage/example-requests/#delete-deleting-resources","text":"curl -X DELETE http://localhost:3000/myfile.txt","title":"DELETE: Deleting resources"},{"location":"usage/example-requests/#patch-modifying-resources","text":"Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: . _:rename a solid:InsertDeletePatch; solid:inserts { . 
}.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { }\" \\ http://localhost:3000/myfile.ttl","title":"PATCH: Modifying resources"},{"location":"usage/example-requests/#head-retrieve-resources-headers","text":"curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt","title":"HEAD: Retrieve resources headers"},{"location":"usage/example-requests/#options-retrieve-resources-communication-options","text":"curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"OPTIONS: Retrieve resources communication options"},{"location":"usage/identity-provider/","text":"Identity Provider \u00b6 Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ . Registering an account \u00b6 To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties 3 features together on the same page: Creating an account on the server. Creating or linking a WebID to your account. Creating a pod on the server. Account \u00b6 To create an account you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in future the plan is to also use this for account/pod management. WebID \u00b6 We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will also need to be created then, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose if you want to be able to use this server as your IDP for this WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself. Pod \u00b6 To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ . Logging in \u00b6 When using an authenticating client, you will be redirected to a login screen asking for your email and password. 
After that you will be redirected to a page showing some basic information about the client. There you need to consent that this client is allowed to identify itself using your WebID. As a result, the server will send a token back to the client that contains all the information needed to use your WebID. Forgot password \u00b6 If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case. JSON API \u00b6 All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ , but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored. IDP configuration \u00b6 The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several aspects that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options. access \u00b6 The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work when using WebACL, there needs to be a .acl resource in /idp/ so resources can be accessed as usual when the server starts up. 
Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json , the API will simply always be accessible by everyone. email \u00b6 In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import. handler \u00b6 There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look. pod \u00b6 The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts. registration \u00b6 This setting allows you to enable/disable registration on the server. Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"Identity provider"},{"location":"usage/identity-provider/#identity-provider","text":"Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ .","title":"Identity Provider"},{"location":"usage/identity-provider/#registering-an-account","text":"To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties three features together on the same page: Creating an account on the server. Creating or linking a WebID to your account. Creating a pod on the server.","title":"Registering an account"},{"location":"usage/identity-provider/#account","text":"To create an account, you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in the future the plan is to also use this for account/pod management.","title":"Account"},{"location":"usage/identity-provider/#webid","text":"We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will then also need to be created, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose whether you want to be able to use this server as your IDP for this WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. 
After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself.","title":"WebID"},{"location":"usage/identity-provider/#pod","text":"To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ .","title":"Pod"},{"location":"usage/identity-provider/#logging-in","text":"When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client. There you need to consent that this client is allowed to identify using your WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID.","title":"Logging in"},{"location":"usage/identity-provider/#forgot-password","text":"If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case.","title":"Forgot password"},{"location":"usage/identity-provider/#json-api","text":"All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . 
rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored.","title":"JSON API"},{"location":"usage/identity-provider/#idp-configuration","text":"The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options","title":"IDP configuration"},{"location":"usage/identity-provider/#access","text":"The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work there needs to be a .acl resource in /idp/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone.","title":"access"},{"location":"usage/identity-provider/#email","text":"In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import.","title":"email"},{"location":"usage/identity-provider/#handler","text":"There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look.","title":"handler"},{"location":"usage/identity-provider/#pod_1","text":"The pod options determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts.","title":"pod"},{"location":"usage/identity-provider/#registration","text":"This setting allows you to enable/disable registration on the server. Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"registration"},{"location":"usage/metadata/","text":"Editing metadata of resources \u00b6 What is a description resource \u00b6 Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource and therefore description resources can not be created or deleted directly. 
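As the next paragraph explains, a resource's description resource is discovered through the describedby Link header. Purely as an illustrative sketch (using node-fetch like the other examples in this documentation, and a naive regular expression where a real client should use a proper Link header parser), such a lookup could look like this:

```ts
// Sketch: discover the description resource of http://localhost:3000/foo/.
import fetch from 'node-fetch';

const response = await fetch('http://localhost:3000/foo/', { method: 'HEAD' });
const link = response.headers.get('link') ?? '';
// Naive extraction of the rel="describedby" target.
const match = /<([^>]+)>\s*;\s*rel="describedby"/u.exec(link);
console.log(match?.[1]); // With the default configuration: http://localhost:3000/foo/.meta
```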
Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource. Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. The default CSS configurations use as a convention that http://example.org/resource has http://example.org/resource.meta as its description resource. How to edit the metadata of a resource \u00b6 Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow ). PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server. Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type ). Instead, the lifecycle of description resources is managed by the server. Protected metadata \u00b6 Some metadata is managed by the server and cannot be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError ) when trying to change this protected metadata. Preserving metadata \u00b6 PUT requests on a resource will reset the description resource. There is, however, a way to keep the contents of the description resource prior to the PUT request: adding the HTTP Link header targeting the description resource with rel=\"preserve\" . When the resource URL is http://localhost:3000/foobar , preserving its description resource when updating its contents can be achieved as in the following example: curl -X PUT 'http://localhost:3000/foobar' \\ -H 'Content-Type: text/turtle' \\ -H 'Link: ;rel=\"preserve\"' \\ -d \" .\" Impact on creating containers \u00b6 When creating a container, the input body is ignored, and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. This is done to clearly differentiate between a container's representation and its metadata. Example of a workflow for editing a description resource \u00b6 In this example, we add an inbox description to http://localhost:3000/foo/ . This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification . We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/ . Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource. curl --head 'http://localhost:3000/foo/' which will produce a response with at least these headers: HTTP/1.1 200 OK Link: ; rel = \"describedby\" Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource. curl -X PATCH 'http://localhost:3000/foo/.meta' \\ -H 'Content-Type: text/n3' \\ --data-raw '@prefix solid: . <> a solid:InsertDeletePatch; solid:inserts { . }.' After this update, we can verify that the inbox is added by performing a GET request to the description resource: curl 'http://localhost:3000/foo/.meta' with the following body as result: @prefix dc: . @prefix ldp: . @prefix posix: . @prefix xsd: . 
a ldp : Container , ldp : BasicContainer , ldp : Resource ; dc : modified \"2022-06-09T08:17:07.000Z\" ^^ xsd : dateTime ; ldp : inbox ;. This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers. curl -v 'http://localhost:3000/foo/' HTTP/1.1 200 OK Link: ; rel = \"http://www.w3.org/ns/ldp#inbox\"","title":"Metadata"},{"location":"usage/metadata/#editing-metadata-of-resources","text":"","title":"Editing metadata of resources"},{"location":"usage/metadata/#what-is-a-description-resource","text":"Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource and therefore description resources can not be created or deleted directly. Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource. Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. The default CSS configurations use as a convention that http://example.org/resource has http://example.org/resource.meta as its description resource.","title":"What is a description resource"},{"location":"usage/metadata/#how-to-edit-the-metadata-of-a-resource","text":"Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow ). PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server. Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type ). Instead, the lifecycle of description resources is managed by the server.","title":"How to edit the metadata of a resource"},{"location":"usage/metadata/#protected-metadata","text":"Some metadata is managed by the server and can not be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError ) when trying to change this protected metadata.","title":"Protected metadata"},{"location":"usage/metadata/#preserving-metadata","text":"PUT requests on a resource will reset the description resource. There is however a way to keep the contents of description resource prior to the PUT request: adding the HTTP Link header targeting the description resource with rel=\"preserve\" . When the resource URL is http://localhost:3000/foobar , preserving its description resource when updating its contents can be achieved like in the following example: curl -X PUT 'http://localhost:3000/foobar' \\ -H 'Content-Type: text/turtle' \\ -H 'Link: ;rel=\"preserve\"' \\ -d \" .\"","title":"Preserving metadata"},{"location":"usage/metadata/#impact-on-creating-containers","text":"When creating a container the input body is ignored and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. 
This is done to clearly differentiate between a container's representation and its metadata.","title":"Impact on creating containers"},{"location":"usage/metadata/#example-of-a-workflow-for-editing-a-description-resource","text":"In this example, we add an inbox description to http://localhost:3000/foo/ . This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification . We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/ . Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource. curl --head 'http://localhost:3000/foo/' which will produce a response with at least these headers: HTTP/1.1 200 OK Link: ; rel = \"describedby\" Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource. curl -X PATCH 'http://localhost:3000/foo/.meta' \\ -H 'Content-Type: text/n3' \\ --data-raw '@prefix solid: . <> a solid:InsertDeletePatch; solid:inserts { . }.' After this update, we can verify that the inbox is added by performing a GET request to the description resource: curl 'http://localhost:3000/foo/.meta' with the following body as result: @prefix dc: . @prefix ldp: . @prefix posix: . @prefix xsd: . a ldp : Container , ldp : BasicContainer , ldp : Resource ; dc : modified \"2022-06-09T08:17:07.000Z\" ^^ xsd : dateTime ; ldp : inbox ;. This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers. curl -v 'http://localhost:3000/foo/' HTTP/1.1 200 OK Link: ; rel = \"http://www.w3.org/ns/ldp#inbox\"","title":"Example of a workflow for editing a description resource"},{"location":"usage/seeding-pods/","text":"How to seed Accounts and Pods \u00b6 If you need to seed accounts and pods, the --seededPodConfigJson command line option can be used; its value is the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least a podName , email , and password field. For example: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" } ] You may optionally specify other parameters as described in the Identity Provider documentation . 
For example, to set up a pod without registering the generated WebID with the Identity Provider: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" , \"webId\" : \"https://id.inrupt.com/example\" , \"register\" : false } ] This feature cannot be used to register pods with pre-existing WebIDs, which requires an interactive validation step.","title":"How to seed Accounts and Pods"}]} \ No newline at end of file +{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome \u00b6 Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides. The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work. An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here . This is a good way to get started with the server and its setup. If you want to know what is new in the latest version, you can check out the release notes for a high level overview and information on how to migrate your configuration to the next version. A list that includes all minor changes can be found in the changelog Using the server \u00b6 Basic example HTTP requests Editing the metadata of a resource How to use the Identity Provider How to automate authentication How to automatically seed pods on startup What the internals look like \u00b6 How the server uses dependency injection What the architecture looks like Making changes \u00b6 How to make changes to the repository For core developers with push access only: How to release a new version","title":"Welcome"},{"location":"#welcome","text":"Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides. The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work. An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here . This is a good way to get started with the server and its setup. If you want to know what is new in the latest version, you can check out the release notes for a high level overview and information on how to migrate your configuration to the next version. 
A list that includes all minor changes can be found in the changelog.","title":"Welcome"},{"location":"#using-the-server","text":"Basic example HTTP requests Editing the metadata of a resource How to use the Identity Provider How to automate authentication How to automatically seed pods on startup","title":"Using the server"},{"location":"#what-the-internals-look-like","text":"How the server uses dependency injection What the architecture looks like","title":"What the internals look like"},{"location":"#making-changes","text":"How to make changes to the repository For core developers with push access only: How to release a new version","title":"Making changes"},{"location":"architecture/core/","text":"Core building blocks \u00b6 There are several core building blocks used in many places of the server. These are described here. Handlers \u00b6 A very important building block that gets reused in many places is the AsyncHandler . The idea is that a handler has two important functions. canHandle determines if this class is capable of correctly handling the request, and throws an error if it cannot. For example, a class that converts JSON-LD to turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle , where the class executes on the input data and returns the result. If an error gets thrown here, it means there is an issue with the input. For example, if the input data claims to be JSON-LD but is actually not. The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler , which takes as input a list of handlers of the same type. The input and output of a WaterfallHandler are the same as those of its inputs, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handlers that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers. Some other utility classes are the ParallelHandler , which runs all handlers simultaneously, and the SequenceHandler , which runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output. Streams \u00b6 Almost all data is handled in a streaming fashion. This allows us to work with very large resources without having to fully load them in memory; a client could be reading data that is being returned by the server while the server is still reading the file. Internally this means we are mostly handling data as Readable objects. We actually use Guarded , which is an internal format we created to help us with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom . Similarly, we have pipeSafely to pipe streams in such a way that also helps with errors.","title":"Core"},{"location":"architecture/core/#core-building-blocks","text":"There are several core building blocks used in many places of the server. 
These are described here.","title":"Core building blocks"},{"location":"architecture/core/#handlers","text":"A very important building block that gets reused in many places is the AsyncHandler . The idea is that a handler has 2 important functions. canHandle determines if this class is capable of correctly handling the request, and throws an error if it can not. For example, a class that converts JSON-LD to turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle where the class executes on the input data and returns the result. If an error gets thrown here it means there is an issue with the input. For example, if the input data claims to be JSON-LD but is actually not. The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler , which takes as input a list of handlers of the same type. The input and output of a WaterfallHandler is the same as those of its inputs, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handler that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers. Some other utility classes are the ParallelHandler that runs all handlers simultaneously, and the SequenceHandler that runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output.","title":"Handlers"},{"location":"architecture/core/#streams","text":"Almost all data is handled in a streaming fashion. This allows us to work with very large resources without having to fully load them in memory, a client could be reading data that is being returned by the server while the server is still reading the file. Internally this means we are mostly handling data as Readable objects. We actually use Guarded which is an internal format we created to help us with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom . Similarly, we have a pipeSafely to pipe streams in such a way that also helps with errors.","title":"Streams"},{"location":"architecture/dependency-injection/","text":"Dependency injection \u00b6 The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file. More information can be found in the Components.js documentation , but a summarized overview can be found below. Component files \u00b6 Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. 
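For illustration, the sketch below shows a hypothetical class of the kind the generator processes. It is not taken from the actual codebase; it mirrors the JsonResourceStorage constructor discussed below, and it assumes the ResourceStore interface is exported from the package root, in line with the index.ts remark here. Such a class would have to be exported via index.ts, after which its constructor parameters become configurable:

```ts
// Hypothetical example; Components-Generator.js would emit a component
// file describing this class and its constructor parameters.
import type { ResourceStore } from '@solid/community-server';

export class ExampleStorage {
  public constructor(
    // Each of these can be wired up in a Components.js configuration file.
    private readonly source: ResourceStore,
    private readonly baseUrl: string,
    private readonly container: string,
  ) {}
}
```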
Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses the index.ts , so new classes always have to be added there or they will not get a component file. Configuration files \u00b6 Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder . That folder also contains information about how different pre-defined configurations can be used. A single component in such a configuration file might look as follows: { \"comment\" : \"Storage used for account management.\" , \"@id\" : \"urn:solid-server:default:AccountStorage\" , \"@type\" : \"JsonResourceStorage\" , \"source\" : { \"@id\" : \"urn:solid-server:default:ResourceStore\" }, \"baseUrl\" : { \"@id\" : \"urn:solid-server:default:variable:baseUrl\" }, \"container\" : \"/.internal/accounts/\" } With the corresponding constructor of the JsonResourceStorage class: public constructor ( source : ResourceStore , baseUrl : string , container : string ) The important elements here are the following: \"comment\" : (optional) A description of this component. \"@id\" : (optional) A unique identifier of this component, which allows it to be used as parameter values in different places. \"@type\" : The class name of the component. This must be a TypeScript class name that is exported via index.ts . As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore . baseUrl is just a string, but here we use a variable that was set before calling Components.js which is why it also references an @id . These variables are set when starting up the server, based on the command line parameters.","title":"Dependency injection"},{"location":"architecture/dependency-injection/#dependency-injection","text":"The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file. More information can be found in the Components.js documentation , but a summarized overview can be found below.","title":"Dependency injection"},{"location":"architecture/dependency-injection/#component-files","text":"Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses the index.ts , so new classes always have to be added there or they will not get a component file.","title":"Component files"},{"location":"architecture/dependency-injection/#configuration-files","text":"Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder . That folder also contains information about how different pre-defined configurations can be used. 
A single component in such a configuration file might look as follows: { \"comment\" : \"Storage used for account management.\" , \"@id\" : \"urn:solid-server:default:AccountStorage\" , \"@type\" : \"JsonResourceStorage\" , \"source\" : { \"@id\" : \"urn:solid-server:default:ResourceStore\" }, \"baseUrl\" : { \"@id\" : \"urn:solid-server:default:variable:baseUrl\" }, \"container\" : \"/.internal/accounts/\" } With the corresponding constructor of the JsonResourceStorage class: public constructor ( source : ResourceStore , baseUrl : string , container : string ) The important elements here are the following: \"comment\" : (optional) A description of this component. \"@id\" : (optional) A unique identifier of this component, which allows it to be used as a parameter value in different places. \"@type\" : The class name of the component. This must be a TypeScript class name that is exported via index.ts . As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore . baseUrl is just a string, but here we use a variable that was set before calling Components.js , which is why it also references an @id . These variables are set when starting up the server, based on the command line parameters.","title":"Configuration files"},{"location":"architecture/overview/","text":"Architecture overview \u00b6 The initial architecture document the project was started from can be found here . Many things have been added since the original inception of the project, but the core ideas within that document are still valid. As can be seen from the architecture, an important idea is the modularity of all components. No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with enormous flexibility: they can all be replaced by a different implementation, without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready solutions for all possible combinations. Architecture diagrams \u00b6 Having a modular architecture makes it more difficult to give a complete architecture overview. We will limit ourselves to the more commonly used default configurations we provide, and in certain cases we might give examples of what differences there are based on what configurations are being imported. To do this we will make use of architecture diagrams. We will use an example below to explain the formatting used throughout the architecture documentation: flowchart TD LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> LdpHandlerArgs subgraph LdpHandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") Auth(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end Below is a summary of how to interpret such diagrams: Rounded red box: component instantiated in the Components.js configuration . First line: Bold text : shorthand of the instance identifier. In case the full URI is not specified, it can usually be found by prepending urn:solid-server:default: to the shorthand identifier. (empty): this instance has no identifier and is defined in the same place as its parent. Second line: Regular text: The class of this instance. Italic text : The interface of this instance. Will be used if the actual class is not relevant for the explanation or can differ. Square grey box: the parameters of the linked instance. Arrow: links an instance to its parameters. Can also be used to indicate the order of parameters if relevant. For example, in the above, LdpHandler is a shorthand for the actual identifier urn:solid-server:default:LdpHandler and is an instance of ParsingHttpHandler . It has 4 parameters, one of which has no identifier but is an instance of AuthorizingHttpHandler . Features \u00b6 Below are the sections that go deeper into the features of the server and how those work. How Command Line Arguments are parsed and used How the server is initialized and started How HTTP requests are handled How the server handles a standard Solid request","title":"Overview"},{"location":"architecture/overview/#architecture-overview","text":"The initial architecture document the project was started from can be found here . Many things have been added since the original inception of the project, but the core ideas within that document are still valid. As can be seen from the architecture, an important idea is the modularity of all components. No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with an enormous flexibility: they can all be replaced by a different implementation, without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready solutions for all possible combinations.","title":"Architecture overview"},{"location":"architecture/overview/#architecture-diagrams","text":"Having a modular architecture makes it more difficult to give a complete architecture overview. We will limit ourselves to the more commonly used default configurations we provide, and in certain cases we might give examples of what differences there are based on what configurations are being imported. To do this we will make use of architecture diagrams. We will use an example below to explain the formatting used throughout the architecture documentation: flowchart TD LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> LdpHandlerArgs subgraph LdpHandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") Auth(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end Below is a summary of how to interpret such diagrams: Rounded red box: component instantiated in the Components.js configuration . First line: Bold text : shorthand of the instance identifier. In case the full URI is not specified, it can usually be found by prepending urn:solid-server:default: to the shorthand identifier. (empty): this instance has no identifier and is defined in the same place as its parent. Second line: Regular text: The class of this instance. Italic text : The interface of this instance. Will be used if the actual class is not relevant for the explanation or can differ. Square grey box: the parameters of the linked instance. Arrow: links an instance to its parameters. Can also be used to indicate the order of parameters if relevant. For example, in the above, LdpHandler is a shorthand for the actual identifier urn:solid-server:default:LdpHandler and is an instance of ParsingHttpHandler . It has 4 parameters, one of which has no identifier but is an instance of AuthorizingHttpHandler .","title":"Architecture diagrams"},{"location":"architecture/overview/#features","text":"Below are the sections that go deeper into the features of the server and how those work. How Command Line Arguments are parsed and used How the server is initialized and started How HTTP requests are handled How the server handles a standard Solid request","title":"Features"},{"location":"architecture/features/cli/","text":"Parsing Command line arguments \u00b6 When starting the server, the application actually uses Components.js twice to instantiate components. The first instantiation is used to parse the command line arguments. These then get converted into Components.js variables and are used to instantiate the actual server. Architecture \u00b6 flowchart TD CliResolver(\"CliResolver
            CliResolver\") CliResolver --> CliResolverArgs subgraph CliResolverArgs[\" \"] CliExtractor(\"CliExtractor
            YargsCliExtractor\") ShorthandResolver(\"ShorthandResolver
            CombinedShorthandResolver\") end ShorthandResolver --> ShorthandResolverArgs subgraph ShorthandResolverArgs[\" \"] BaseUrlExtractor(\"
            BaseUrlExtractor\") KeyExtractor(\"
            KeyExtractor\") AssetPathExtractor(\"
            AssetPathExtractor\") end The CliResolver ( urn:solid-server-app-setup:default:CliResolver ) is simply a way to combine both the CliExtractor ( urn:solid-server-app-setup:default:CliExtractor ) and ShorthandResolver ( urn:solid-server-app-setup:default:ShorthandResolver ) into a single object and has no other function. Which arguments are supported and which Components.js variables are generated can depend on the configuration that is being used. For example, for an HTTPS server additional arguments will be needed to specify the necessary key/cert files. CliResolver \u00b6 The CliResolver converts the incoming string of arguments into a key/value object. By default, a YargsCliExtractor is used, which makes use of the yargs library and is configured similarly. ShorthandResolver \u00b6 The ShorthandResolver uses the key/value object that was generated above to generate Components.js variable bindings. A CombinedShorthandResolver combines the results of multiple ShorthandExtractor by mapping their values to specific variables. For example, a BaseUrlExtractor will be used to extract the value for baseUrl , or port if no baseUrl value is provided, and use it to generate the value for the variable urn:solid-server:default:variable:baseUrl . These extractors are also where the default values for the server are defined. For example, BaseUrlExtractor will be instantiated with a default port of 3000 which will be used if no port is provided. The variables generated here will be used to initialize the server .","title":"Command line arguments"},{"location":"architecture/features/cli/#parsing-command-line-arguments","text":"When starting the server, the application actually uses Components.js twice to instantiate components. The first instantiation is used to parse the command line arguments. These then get converted into Components.js variables and are used to instantiate the actual server.","title":"Parsing Command line arguments"},{"location":"architecture/features/cli/#architecture","text":"flowchart TD CliResolver(\"CliResolver
            CliResolver\") CliResolver --> CliResolverArgs subgraph CliResolverArgs[\" \"] CliExtractor(\"CliExtractor
            YargsCliExtractor\") ShorthandResolver(\"ShorthandResolver
            CombinedShorthandResolver\") end ShorthandResolver --> ShorthandResolverArgs subgraph ShorthandResolverArgs[\" \"] BaseUrlExtractor(\"
            BaseUrlExtractor\") KeyExtractor(\"
            KeyExtractor\") AssetPathExtractor(\"
            AssetPathExtractor\") end The CliResolver ( urn:solid-server-app-setup:default:CliResolver ) is simply a way to combine both the CliExtractor ( urn:solid-server-app-setup:default:CliExtractor ) and ShorthandResolver ( urn:solid-server-app-setup:default:ShorthandResolver ) into a single object and has no other function. Which arguments are supported and which Components.js variables are generated can depend on the configuration that is being used. For example, for an HTTPS server additional arguments will be needed to specify the necessary key/cert files.","title":"Architecture"},{"location":"architecture/features/cli/#cliresolver","text":"The CliResolver converts the incoming string of arguments into a key/value object. By default, a YargsCliExtractor is used, which makes use of the yargs library and is configured similarly.","title":"CliResolver"},{"location":"architecture/features/cli/#shorthandresolver","text":"The ShorthandResolver uses the key/value object that was generated above to generate Components.js variable bindings. A CombinedShorthandResolver combines the results of multiple ShorthandExtractor by mapping their values to specific variables. For example, a BaseUrlExtractor will be used to extract the value for baseUrl , or port if no baseUrl value is provided, and use it to generate the value for the variable urn:solid-server:default:variable:baseUrl . These extractors are also where the default values for the server are defined. For example, BaseUrlExtractor will be instantiated with a default port of 3000 which will be used if no port is provided. The variables generated here will be used to initialize the server .","title":"ShorthandResolver"},{"location":"architecture/features/http-handler/","text":"Handling HTTP requests \u00b6 The direction of the arrows was changed slightly here to make the graph readable. flowchart LR HttpHandler(\"HttpHandler
            SequenceHandler\") HttpHandler --> HttpHandlerArgs subgraph HttpHandlerArgs[\" \"] direction LR Middleware(\"Middleware
            HttpHandler\") WaterfallHandler(\"
            WaterfallHandler\") end Middleware --> WaterfallHandler WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction TB StaticAssetHandler(\"StaticAssetHandler
            StaticAssetHandler\") SetupHandler(\"SetupHandler
            HttpHandler\") OidcHandler(\"OidcHandler
            HttpHandler\") AuthResourceHttpHandler(\"AuthResourceHttpHandler
            HttpHandler\") IdentityProviderHttpHandler(\"IdentityProviderHttpHandler
            HttpHandler\") LdpHandler(\"LdpHandler
            HttpHandler\") end StaticAssetHandler --> SetupHandler SetupHandler --> OidcHandler OidcHandler --> AuthResourceHttpHandler AuthResourceHttpHandler --> IdentityProviderHttpHandler IdentityProviderHttpHandler --> LdpHandler The HttpHandler is responsible for handling an incoming HTTP request. The request will always first go through the Middleware , where certain required headers will be added such as CORS headers. After that it will go through the list in the WaterfallHandler to find the first handler that understands the request, with the LdpHandler at the bottom being the catch-all default. StaticAssetHandler \u00b6 The urn:solid-server:default:StaticAssetHandler matches exact URLs to static assets which require no further logic. An example of this is the favicon, where the /favicon.ico URL is directed to the favicon file at /templates/images/favicon.ico . It can also map entire folders to a specific path, such as /.well-known/css/styles/ which contains all stylesheets. SetupHandler \u00b6 The urn:solid-server:default:SetupHandler is responsible for redirecting all requests to /setup until setup is finished, thereby ensuring that setup needs to be finished before anything else can be done on the server, and handling the actual setup request that is sent to /setup . Once setup is finished, this handler will reject all requests and thus no longer be relevant. If the server is configured to not have setup enabled, the corresponding identifier will point to a handler that always rejects all requests. OidcHandler \u00b6 The urn:solid-server:default:OidcHandler handles all requests related to the Solid-OIDC specification . The OIDC component is configured to work on the /.oidc/ subpath, so this handler catches all those requests and sends them to the internal OIDC library that is used. AuthResourceHttpHandler \u00b6 The urn:solid-server:default:AuthResourceHttpHandler is identical to the urn:solid-server:default:LdpHandler which will be discussed below, but only handles resources relevant for authorization. In practice this means that is your server is configured to use Web Access Control for authorization, this handler will catch all requests targeting .acl resources. The reason these already need to be handled here is so these can also be used to allow authorization on the following handler(s). More on this can be found in the identity provider documentation IdentityProviderHttpHandler \u00b6 The urn:solid-server:default:IdentityProviderHttpHandler handles everything related to our custom identity provider API, such as registering, logging in, returning the relevant HTML pages, etc. All these requests are identified by being on the /idp/ subpath. More information on the API can be found in the identity provider documentation LdpHandler \u00b6 Once a request reaches the urn:solid-server:default:LdpHandler , the server assumes this is a standard Solid request according to the Solid protocol. A detailed description of what happens then can be found here","title":"HTTP requests"},{"location":"architecture/features/http-handler/#handling-http-requests","text":"The direction of the arrows was changed slightly here to make the graph readable. flowchart LR HttpHandler(\"HttpHandler
            SequenceHandler\") HttpHandler --> HttpHandlerArgs subgraph HttpHandlerArgs[\" \"] direction LR Middleware(\"Middleware
            HttpHandler\") WaterfallHandler(\"
            WaterfallHandler\") end Middleware --> WaterfallHandler WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction TB StaticAssetHandler(\"StaticAssetHandler
            StaticAssetHandler\") SetupHandler(\"SetupHandler
            HttpHandler\") OidcHandler(\"OidcHandler
            HttpHandler\") AuthResourceHttpHandler(\"AuthResourceHttpHandler
            HttpHandler\") IdentityProviderHttpHandler(\"IdentityProviderHttpHandler
            HttpHandler\") LdpHandler(\"LdpHandler
            HttpHandler\") end StaticAssetHandler --> SetupHandler SetupHandler --> OidcHandler OidcHandler --> AuthResourceHttpHandler AuthResourceHttpHandler --> IdentityProviderHttpHandler IdentityProviderHttpHandler --> LdpHandler The HttpHandler is responsible for handling an incoming HTTP request. The request will always first go through the Middleware , where certain required headers will be added such as CORS headers. After that it will go through the list in the WaterfallHandler to find the first handler that understands the request, with the LdpHandler at the bottom being the catch-all default.","title":"Handling HTTP requests"},{"location":"architecture/features/http-handler/#staticassethandler","text":"The urn:solid-server:default:StaticAssetHandler matches exact URLs to static assets which require no further logic. An example of this is the favicon, where the /favicon.ico URL is directed to the favicon file at /templates/images/favicon.ico . It can also map entire folders to a specific path, such as /.well-known/css/styles/ which contains all stylesheets.","title":"StaticAssetHandler"},{"location":"architecture/features/http-handler/#setuphandler","text":"The urn:solid-server:default:SetupHandler is responsible for redirecting all requests to /setup until setup is finished, thereby ensuring that setup needs to be finished before anything else can be done on the server, and handling the actual setup request that is sent to /setup . Once setup is finished, this handler will reject all requests and thus no longer be relevant. If the server is configured to not have setup enabled, the corresponding identifier will point to a handler that always rejects all requests.","title":"SetupHandler"},{"location":"architecture/features/http-handler/#oidchandler","text":"The urn:solid-server:default:OidcHandler handles all requests related to the Solid-OIDC specification . The OIDC component is configured to work on the /.oidc/ subpath, so this handler catches all those requests and sends them to the internal OIDC library that is used.","title":"OidcHandler"},{"location":"architecture/features/http-handler/#authresourcehttphandler","text":"The urn:solid-server:default:AuthResourceHttpHandler is identical to the urn:solid-server:default:LdpHandler which will be discussed below, but only handles resources relevant for authorization. In practice this means that is your server is configured to use Web Access Control for authorization, this handler will catch all requests targeting .acl resources. The reason these already need to be handled here is so these can also be used to allow authorization on the following handler(s). More on this can be found in the identity provider documentation","title":"AuthResourceHttpHandler"},{"location":"architecture/features/http-handler/#identityproviderhttphandler","text":"The urn:solid-server:default:IdentityProviderHttpHandler handles everything related to our custom identity provider API, such as registering, logging in, returning the relevant HTML pages, etc. All these requests are identified by being on the /idp/ subpath. More information on the API can be found in the identity provider documentation","title":"IdentityProviderHttpHandler"},{"location":"architecture/features/http-handler/#ldphandler","text":"Once a request reaches the urn:solid-server:default:LdpHandler , the server assumes this is a standard Solid request according to the Solid protocol. 
A detailed description of what happens then can be found here","title":"LdpHandler"},{"location":"architecture/features/initialization/","text":"Server initialization \u00b6 When starting the server, multiple Initializers trigger to set up everything correctly, the last one of which starts listening to the specified port. Similarly, when stopping the server several Finalizers trigger to clean up where necessary, although the latter only happens when starting the server through code. App \u00b6 flowchart TD App(\"App
            App\") App --> AppArgs subgraph AppArgs[\" \"] Initializer(\"Initializer
            Initializer\") AppFinalizer(\"Finalizer
            Finalizer\") end App ( urn:solid-server:default:App ) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated. It's only function is to contain an Initializer and Finalizer which get called by calling start / stop respectively. Initializer \u00b6 flowchart TD Initializer(\"Initializer
            SequenceHandler\") Initializer --> InitializerArgs subgraph InitializerArgs[\" \"] direction LR LoggerInitializer(\"LoggerInitializer
            LoggerInitializer\") PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") end LoggerInitializer --> PrimaryInitializer PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger. Before this other classes will be unable to use logging. The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. Although if your server setup is single-threaded, which is the default, there is no relevant difference between those two. PrimaryInitializer \u00b6 flowchart TD PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") PrimaryInitializer --> PrimarySequenceInitializer(\"PrimarySequenceInitializer
            SequenceHandler\") PrimarySequenceInitializer --> PrimarySequenceInitializerArgs subgraph PrimarySequenceInitializerArgs[\" \"] direction LR CleanupInitializer(\"CleanupInitializer
            SequenceHandler\") PrimaryParallelInitializer(\"PrimaryParallelInitializer
            ParallelHandler\") WorkerManager(\"WorkerManager
            WorkerManager\") end CleanupInitializer --> PrimaryParallelInitializer PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here. The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files. The PrimaryParallelInitializer can be used to add any initializers to that have to happen in the primary process. This makes it easier for users to add initializers by being able to append to its handlers. The WorkerManager is responsible for setting up the worker threads, if any. WorkerInitializer \u00b6 flowchart TD WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") WorkerInitializer --> WorkerSequenceInitializer(\"WorkerSequenceInitializer
            SequenceHandler\") WorkerSequenceInitializer --> WorkerSequenceInitializerArgs subgraph WorkerSequenceInitializerArgs[\" \"] direction LR WorkerParallelInitializer(\"WorkerParallelInitializer
            ParallelHandler\") ServerInitializer(\"ServerInitializer
            ServerInitializer\") end WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer , the WorkerParallelInitializer can be used to add any custom initializers that need to run. ServerInitializer \u00b6 The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. This is an example of a component that differs based on some of the choices made during configuration. flowchart TD ServerInitializer(\"ServerInitializer
            ServerInitializer\") ServerInitializer --> WebSocketServerFactory(\"ServerFactory
            WebSocketServerFactory\") WebSocketServerFactory --> BaseHttpServerFactory(\"
            BaseHttpServerFactory\") BaseHttpServerFactory --> HttpHandler(\"HttpHandler
            HttpHandler\") ServerInitializer2(\"ServerInitializer
            ServerInitializer\") ServerInitializer2 ---> BaseHttpServerFactory2(\"ServerFactory
            BaseHttpServerFactory\") BaseHttpServerFactory2 --> HttpHandler2(\"HttpHandler
            HttpHandler\") Depending on if the configurations necessary for websockets are imported or not, the urn:solid-server:default:ServerFactory identifier will point to a different resource. There will always be a BaseHttpServerFactory that starts the HTTP(S) server, but there might also be a WebSocketServerFactory wrapped around it to handle websocket support. Although not indicated here, the parameters for initializing the BaseHttpServerFactory might also differ in case an HTTPS configuration is imported. The HttpHandler it takes as input is responsible for how HTTP requests get resolved .","title":"Server initialization"},{"location":"architecture/features/initialization/#server-initialization","text":"When starting the server, multiple Initializers trigger to set up everything correctly, the last one of which starts listening to the specified port. Similarly, when stopping the server several Finalizers trigger to clean up where necessary, although the latter only happens when starting the server through code.","title":"Server initialization"},{"location":"architecture/features/initialization/#app","text":"flowchart TD App(\"App
            App\") App --> AppArgs subgraph AppArgs[\" \"] Initializer(\"Initializer
            Initializer\") AppFinalizer(\"Finalizer
            Finalizer\") end App ( urn:solid-server:default:App ) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated. It's only function is to contain an Initializer and Finalizer which get called by calling start / stop respectively.","title":"App"},{"location":"architecture/features/initialization/#initializer","text":"flowchart TD Initializer(\"Initializer
            SequenceHandler\") Initializer --> InitializerArgs subgraph InitializerArgs[\" \"] direction LR LoggerInitializer(\"LoggerInitializer
            LoggerInitializer\") PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") end LoggerInitializer --> PrimaryInitializer PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger. Before this other classes will be unable to use logging. The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. Although if your server setup is single-threaded, which is the default, there is no relevant difference between those two.","title":"Initializer"},{"location":"architecture/features/initialization/#primaryinitializer","text":"flowchart TD PrimaryInitializer(\"PrimaryInitializer
            ProcessHandler\") PrimaryInitializer --> PrimarySequenceInitializer(\"PrimarySequenceInitializer
            SequenceHandler\") PrimarySequenceInitializer --> PrimarySequenceInitializerArgs subgraph PrimarySequenceInitializerArgs[\" \"] direction LR CleanupInitializer(\"CleanupInitializer
            SequenceHandler\") PrimaryParallelInitializer(\"PrimaryParallelInitializer
            ParallelHandler\") WorkerManager(\"WorkerManager
            WorkerManager\") end CleanupInitializer --> PrimaryParallelInitializer PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here. The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files. The PrimaryParallelInitializer can be used to add any initializers to that have to happen in the primary process. This makes it easier for users to add initializers by being able to append to its handlers. The WorkerManager is responsible for setting up the worker threads, if any.","title":"PrimaryInitializer"},{"location":"architecture/features/initialization/#workerinitializer","text":"flowchart TD WorkerInitializer(\"WorkerInitializer
            ProcessHandler\") WorkerInitializer --> WorkerSequenceInitializer(\"WorkerSequenceInitializer
            SequenceHandler\") WorkerSequenceInitializer --> WorkerSequenceInitializerArgs subgraph WorkerSequenceInitializerArgs[\" \"] direction LR WorkerParallelInitializer(\"WorkerParallelInitializer
            ParallelHandler\") ServerInitializer(\"ServerInitializer
            ServerInitializer\") end WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer , the WorkerParallelInitializer can be used to add any custom initializers that need to run.","title":"WorkerInitializer"},{"location":"architecture/features/initialization/#serverinitializer","text":"The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. This is an example of a component that differs based on some of the choices made during configuration. flowchart TD ServerInitializer(\"ServerInitializer
            ServerInitializer\") ServerInitializer --> WebSocketServerFactory(\"ServerFactory
            WebSocketServerFactory\") WebSocketServerFactory --> BaseHttpServerFactory(\"
            BaseHttpServerFactory\") BaseHttpServerFactory --> HttpHandler(\"HttpHandler
            HttpHandler\") ServerInitializer2(\"ServerInitializer
            ServerInitializer\") ServerInitializer2 ---> BaseHttpServerFactory2(\"ServerFactory
            BaseHttpServerFactory\") BaseHttpServerFactory2 --> HttpHandler2(\"HttpHandler
            HttpHandler\") Depending on if the configurations necessary for websockets are imported or not, the urn:solid-server:default:ServerFactory identifier will point to a different resource. There will always be a BaseHttpServerFactory that starts the HTTP(S) server, but there might also be a WebSocketServerFactory wrapped around it to handle websocket support. Although not indicated here, the parameters for initializing the BaseHttpServerFactory might also differ in case an HTTPS configuration is imported. The HttpHandler it takes as input is responsible for how HTTP requests get resolved .","title":"ServerInitializer"},{"location":"architecture/features/protocol/authorization/","text":"Authorization \u00b6 flowchart TD AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs subgraph AuthorizingHttpHandlerArgs[\" \"] CredentialsExtractor(\"CredentialsExtractor
            CredentialsExtractor\") ModesExtractor(\"ModesExtractor
            ModesExtractor\") PermissionReader(\"PermissionReader
            PermissionReader\") Authorizer(\"Authorizer
            PermissionBasedAuthorizer\") OperationHttpHandler(\"
            OperationHttpHandler\") end Authorization is usually handled by the AuthorizingHttpHandler , which receives a parsed HTTP request in the form of an Operation . It goes through the following steps: A CredentialsExtractor identifies the credentials of the agent making the call. A ModesExtractor finds which access modes are needed for which resources. A PermissionReader determines the permissions the agent has on the targeted resources. The above results are compared in an Authorizer . If the request is allowed, call the OperationHttpHandler , otherwise throw an error. Authentication \u00b6 There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The diagram below shows the default configuration if authentication is enabled. flowchart TD CredentialsExtractor(\"CredentialsExtractor
            UnionCredentialsExtractor\") CredentialsExtractor --> CredentialsExtractorArgs subgraph CredentialsExtractorArgs[\" \"] WaterfallHandler(\"
            WaterfallHandler\") PublicCredentialsExtractor(\"
            PublicCredentialsExtractor\") end WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction LR DPoPWebIdExtractor(\"
            DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"
            BearerWebIdExtractor\") end Both of the WebID extractors make use of the access-token-verifier library to parse incoming tokens based on the Solid-OIDC specification . Besides those there are always the public credentials, which everyone has. All these credentials then get combined into a single union object. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values. There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports. Modes extraction \u00b6 Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor determine which modes will be necessary and for which resources, based on the request contents. flowchart TD ModesExtractor(\"ModesExtractor
            IntermediateCreateExtractor\") ModesExtractor --> HttpModesExtractor(\"HttpModesExtractor
            WaterfallHandler\") HttpModesExtractor --> HttpModesExtractorArgs subgraph HttpModesExtractorArgs[\" \"] direction LR PatchModesExtractor(\"PatchModesExtractor
            ModesExtractor\") --> MethodModesExtractor(\"
            MethodModesExtractor\") end The IntermediateCreateExtractor is responsible if requests try to create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers. Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example. The only exception are PATCH requests, where the necessary modes depend on the body and the PATCH type. flowchart TD PatchModesExtractor(\"PatchModesExtractor
            WaterfallHandler\") --> PatchModesExtractorArgs subgraph PatchModesExtractorArgs[\" \"] N3PatchModesExtractor(\"
            N3PatchModesExtractor\") SparqlUpdateModesExtractor(\"
            SparqlUpdateModesExtractor\") end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires. Permission reading \u00b6 PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as optimization as we only need to know if we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In the default configuration the following readers are combined when WebACL is enabled as authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json , this diagram is just a class that always returns all permissions. flowchart TD PermissionReader(\"PermissionReader
            AuxiliaryReader\") PermissionReader --> UnionPermissionReader(\"
            UnionPermissionReader\") UnionPermissionReader --> UnionPermissionReaderArgs subgraph UnionPermissionReaderArgs[\" \"] PathBasedReader(\"PathBasedReader
            PathBasedReader\") OwnerPermissionReader(\"OwnerPermissionReader
            OwnerPermissionReader\") WrappedWebAclReader(\"WrappedWebAclReader
            ParentContainerReader\") end WrappedWebAclReader --> WebAclAuxiliaryReader(\"WebAclAuxiliaryReader
            WebAclAuxiliaryReader\") WebAclAuxiliaryReader --> WebAclReader(\"WebAclReader
            WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the requests targets the metadata of a resource. The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority. The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server. The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server . Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access. The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there. In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor . The WebAclAuxiliaryReader makes sure this conversion happens. Finally, the WebAclReader implements the efffective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found this indicates a configuration error and no permissions will be granted. Authorization \u00b6 All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on if the agent was logged in or not.","title":"Authorization"},{"location":"architecture/features/protocol/authorization/#authorization","text":"flowchart TD AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs subgraph AuthorizingHttpHandlerArgs[\" \"] CredentialsExtractor(\"CredentialsExtractor
            CredentialsExtractor\") ModesExtractor(\"ModesExtractor
            ModesExtractor\") PermissionReader(\"PermissionReader
            PermissionReader\") Authorizer(\"Authorizer
            PermissionBasedAuthorizer\") OperationHttpHandler(\"
            OperationHttpHandler\") end Authorization is usually handled by the AuthorizingHttpHandler , which receives a parsed HTTP request in the form of an Operation . It goes through the following steps: A CredentialsExtractor identifies the credentials of the agent making the call. A ModesExtractor finds which access modes are needed for which resources. A PermissionReader determines the permissions the agent has on the targeted resources. The above results are compared in an Authorizer . If the request is allowed, call the OperationHttpHandler , otherwise throw an error.","title":"Authorization"},{"location":"architecture/features/protocol/authorization/#authentication","text":"There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The diagram below shows the default configuration if authentication is enabled. flowchart TD CredentialsExtractor(\"CredentialsExtractor
            UnionCredentialsExtractor\") CredentialsExtractor --> CredentialsExtractorArgs subgraph CredentialsExtractorArgs[\" \"] WaterfallHandler(\"
            WaterfallHandler\") PublicCredentialsExtractor(\"
            PublicCredentialsExtractor\") end WaterfallHandler --> WaterfallHandlerArgs subgraph WaterfallHandlerArgs[\" \"] direction LR DPoPWebIdExtractor(\"
            DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"
            BearerWebIdExtractor\") end Both of the WebID extractors make use of the access-token-verifier library to parse incoming tokens based on the Solid-OIDC specification . Besides those there are always the public credentials, which everyone has. All these credentials then get combined into a single union object. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values. There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports.","title":"Authentication"},{"location":"architecture/features/protocol/authorization/#modes-extraction","text":"Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor determine which modes will be necessary and for which resources, based on the request contents. flowchart TD ModesExtractor(\"ModesExtractor
            IntermediateCreateExtractor\") ModesExtractor --> HttpModesExtractor(\"HttpModesExtractor
            WaterfallHandler\") HttpModesExtractor --> HttpModesExtractorArgs subgraph HttpModesExtractorArgs[\" \"] direction LR PatchModesExtractor(\"PatchModesExtractor
            ModesExtractor\") --> MethodModesExtractor(\"
            MethodModesExtractor\") end The IntermediateCreateExtractor is responsible if requests try to create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers. Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example. The only exception are PATCH requests, where the necessary modes depend on the body and the PATCH type. flowchart TD PatchModesExtractor(\"PatchModesExtractor
            WaterfallHandler\") --> PatchModesExtractorArgs subgraph PatchModesExtractorArgs[\" \"] N3PatchModesExtractor(\"
            N3PatchModesExtractor\") SparqlUpdateModesExtractor(\"
            SparqlUpdateModesExtractor\") end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires.","title":"Modes extraction"},{"location":"architecture/features/protocol/authorization/#permission-reading","text":"PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as optimization as we only need to know if we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In the default configuration the following readers are combined when WebACL is enabled as authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json , this diagram is just a class that always returns all permissions. flowchart TD PermissionReader(\"PermissionReader
            AuxiliaryReader\") PermissionReader --> UnionPermissionReader(\"
            UnionPermissionReader\") UnionPermissionReader --> UnionPermissionReaderArgs subgraph UnionPermissionReaderArgs[\" \"] PathBasedReader(\"PathBasedReader
            PathBasedReader\") OwnerPermissionReader(\"OwnerPermissionReader
            OwnerPermissionReader\") WrappedWebAclReader(\"WrappedWebAclReader
            ParentContainerReader\") end WrappedWebAclReader --> WebAclAuxiliaryReader(\"WebAclAuxiliaryReader
            WebAclAuxiliaryReader\") WebAclAuxiliaryReader --> WebAclReader(\"WebAclReader
            WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the requests targets the metadata of a resource. The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority. The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server. The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server . Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access. The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there. In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor . The WebAclAuxiliaryReader makes sure this conversion happens. Finally, the WebAclReader implements the efffective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found this indicates a configuration error and no permissions will be granted.","title":"Permission reading"},{"location":"architecture/features/protocol/authorization/#authorization_1","text":"All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on if the agent was logged in or not.","title":"Authorization"},{"location":"architecture/features/protocol/overview/","text":"Solid protocol \u00b6 The LdpHandler , named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification , e.g. a POST request to create a new resource. Below is a simplified view of how these handlers are linked. flowchart LR LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> OperationHandler(\"OperationHandler
            OperationHandler\") OperationHandler --> ResourceStore(\"ResourceStore
            ResourceStore\") A standard request would go through the following steps: The ParsingHttphandler parses the HTTP request into a manageable format, both body and metadata such as headers. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource. The OperationHandler determines which action is required based on the HTTP method. The ResourceStore does all the relevant data work. The ParsingHttphandler eventually receives the response data, or an error, and handles the output. Below are sections that go deeper into the specific steps. How input gets parsed and output gets returned How authentication and authorization work What the ResourceStore looks like","title":"Overview"},{"location":"architecture/features/protocol/overview/#solid-protocol","text":"The LdpHandler , named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification , e.g. a POST request to create a new resource. Below is a simplified view of how these handlers are linked. flowchart LR LdpHandler(\"LdpHandler
            ParsingHttphandler\") LdpHandler --> AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") AuthorizingHttpHandler --> OperationHandler(\"OperationHandler
            OperationHandler\") OperationHandler --> ResourceStore(\"ResourceStore
            ResourceStore\") A standard request would go through the following steps: The ParsingHttphandler parses the HTTP request into a manageable format, both body and metadata such as headers. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource. The OperationHandler determines which action is required based on the HTTP method. The ResourceStore does all the relevant data work. The ParsingHttphandler eventually receives the response data, or an error, and handles the output. Below are sections that go deeper into the specific steps. How input gets parsed and output gets returned How authentication and authorization work What the ResourceStore looks like","title":"Solid protocol"},{"location":"architecture/features/protocol/parsing/","text":"Parsing and responding to HTTP requests \u00b6 flowchart TD ParsingHttphandler(\"
            ParsingHttphandler\") ParsingHttphandler --> ParsingHttphandlerArgs subgraph ParsingHttphandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end A ParsingHttpHandler handles both the parsing of the input data, and the serializing of the output data. It follows these 3 steps: Use the RequestParser to convert the incoming data into an Operation . Send the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, or an Error in case something went wrong. In case of an error the ErrorHandler will convert the Error into a ResponseDescription . Use the ResponseWriter to output the ResponseDescription as an HTTP response. Parsing the request \u00b6 flowchart TD RequestParser(\"RequestParser
            BasicRequestParser\") --> RequestParserArgs subgraph RequestParserArgs[\" \"] TargetExtractor(\"TargetExtractor
            OriginalUrlExtractor\") PreferenceParser(\"PreferenceParser
            AcceptPreferenceParser\") MetadataParser(\"MetadataParser
            MetadataParser\") BodyParser(\"
            Bodyparser\") Conditions(\"
            BasicConditionsParser\") end OriginalUrlExtractor --> IdentifierStrategy(\"IdentifierStrategy
            IdentifierStrategy\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part. URL \u00b6 This is a single class, the OriginalUrlExtractor , but fulfills the very important role of making sure input URLs are handled consistently. The query parameters will always be completely removed from the URL. There is also an algorithm to make sure all URLs have a \"canonical\" version as for example both & and %26 can be interpreted in the same way. Specifically all special characters will be encoded into their percent encoding. The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on if the server uses subdomains or not. The resulting identifier will be stored in the target field of an Operation object. Preferences \u00b6 The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation. For example, when sending an Accept: text/turtle; q=0.9 header, this wil result in the preferences object { type: { 'text/turtle': 0.9 } } . Headers \u00b6 Several other headers can have relevant metadata, such as the Content-Type header, or the Link: ; rel=\"type\" header which is used to indicate to the server that a request intends to create a container. Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation . The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header. Body \u00b6 In case of most requests, the input data stream is used directly in the body field of the Operation , with a few minor checks to make sure the HTTP specification is being followed. In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well. Conditions \u00b6 The BasicConditionsParser parses everything related to conditions headers, such as if-none-match or if-modified-since , and stores the relevant information in the conditions field of the Operation . These will later be used to make sure the request should be aborted or not. Sending the response \u00b6 In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription , and if not it will throw an error. In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription . The request preferences will be used to make sure the serialization is one that is preferred. Either way we will have a ResponseDescription , which will be sent to the BasicResponseWriter to convert into output headers, data and a status code. To convert the metadata into headers, it uses a MetadataWriter , which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.","title":"Parsing"},{"location":"architecture/features/protocol/parsing/#parsing-and-responding-to-http-requests","text":"flowchart TD ParsingHttphandler(\"
            ParsingHttphandler\") ParsingHttphandler --> ParsingHttphandlerArgs subgraph ParsingHttphandlerArgs[\" \"] RequestParser(\"RequestParser
            BasicRequestParser\") AuthorizingHttpHandler(\"
            AuthorizingHttpHandler\") ErrorHandler(\"ErrorHandler
            ErrorHandler\") ResponseWriter(\"ResponseWriter
            BasicResponseWriter\") end A ParsingHttpHandler handles both the parsing of the input data, and the serializing of the output data. It follows these 3 steps: Use the RequestParser to convert the incoming data into an Operation . Send the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, or an Error in case something went wrong. In case of an error the ErrorHandler will convert the Error into a ResponseDescription . Use the ResponseWriter to output the ResponseDescription as an HTTP response.","title":"Parsing and responding to HTTP requests"},{"location":"architecture/features/protocol/parsing/#parsing-the-request","text":"flowchart TD RequestParser(\"RequestParser
            BasicRequestParser\") --> RequestParserArgs subgraph RequestParserArgs[\" \"] TargetExtractor(\"TargetExtractor
            OriginalUrlExtractor\") PreferenceParser(\"PreferenceParser
            AcceptPreferenceParser\") MetadataParser(\"MetadataParser
            MetadataParser\") BodyParser(\"
            Bodyparser\") Conditions(\"
            BasicConditionsParser\") end OriginalUrlExtractor --> IdentifierStrategy(\"IdentifierStrategy
            IdentifierStrategy\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.","title":"Parsing the request"},{"location":"architecture/features/protocol/parsing/#url","text":"This is a single class, the OriginalUrlExtractor , but fulfills the very important role of making sure input URLs are handled consistently. The query parameters will always be completely removed from the URL. There is also an algorithm to make sure all URLs have a \"canonical\" version as for example both & and %26 can be interpreted in the same way. Specifically all special characters will be encoded into their percent encoding. The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on if the server uses subdomains or not. The resulting identifier will be stored in the target field of an Operation object.","title":"URL"},{"location":"architecture/features/protocol/parsing/#preferences","text":"The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation. For example, when sending an Accept: text/turtle; q=0.9 header, this wil result in the preferences object { type: { 'text/turtle': 0.9 } } .","title":"Preferences"},{"location":"architecture/features/protocol/parsing/#headers","text":"Several other headers can have relevant metadata, such as the Content-Type header, or the Link: ; rel=\"type\" header which is used to indicate to the server that a request intends to create a container. Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation . The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header.","title":"Headers"},{"location":"architecture/features/protocol/parsing/#body","text":"In case of most requests, the input data stream is used directly in the body field of the Operation , with a few minor checks to make sure the HTTP specification is being followed. In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well.","title":"Body"},{"location":"architecture/features/protocol/parsing/#conditions","text":"The BasicConditionsParser parses everything related to conditions headers, such as if-none-match or if-modified-since , and stores the relevant information in the conditions field of the Operation . These will later be used to make sure the request should be aborted or not.","title":"Conditions"},{"location":"architecture/features/protocol/parsing/#sending-the-response","text":"In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription , and if not it will throw an error. In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription . The request preferences will be used to make sure the serialization is one that is preferred. Either way we will have a ResponseDescription , which will be sent to the BasicResponseWriter to convert into output headers, data and a status code. 
To convert the metadata into headers, it uses a MetadataWriter , which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.","title":"Sending the response"},{"location":"architecture/features/protocol/resource-store/","text":"Resource store \u00b6 The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend . MonitoringStore \u00b6 This store emits the events that are necessary to send notifications when resources change. There are 4 different events that can be emitted: this.emit('changed', identifier, activity) : is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStream terms. this.emit(AS.Create, identifier) : is emitted for every resource that was created by the call to the store. this.emit(AS.Update, identifier) : is emitted for every resource that was updated by the call to the store. this.emit(AS.Delete, identifier) : is emitted for every resource that was deleted by the call to the store. A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore , an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is. IndexRepresentationStore \u00b6 When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types. LockingResourceStore \u00b6 To prevent data corruption, the server locks resources when they are targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress. PatchingStore \u00b6 PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since they involve both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function, so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store. 
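That fallback can be summarized in a short sketch. This is not the actual CSS implementation: the interfaces below are simplified stand-ins for the real ones, and applyPatch is a hypothetical helper standing in for the server's patch handlers.

```typescript
// Simplified stand-ins for the real CSS interfaces.
interface ResourceIdentifier { path: string }
interface Representation { data: unknown }
type Patch = Representation;

interface ResourceStore {
  getRepresentation: (id: ResourceIdentifier) => Promise<Representation>;
  setRepresentation: (id: ResourceIdentifier, representation: Representation) => Promise<void>;
  // Optional here because not every backend implements PATCH support.
  modifyResource?: (id: ResourceIdentifier, patch: Patch) => Promise<void>;
}

// Hypothetical helper that applies a patch to a representation.
declare function applyPatch(representation: Representation, patch: Patch): Promise<Representation>;

class PatchingStoreSketch {
  public constructor(private readonly source: ResourceStore) {}

  public async modifyResource(id: ResourceIdentifier, patch: Patch): Promise<void> {
    if (this.source.modifyResource) {
      // The next store supports PATCH natively, so simply delegate.
      return this.source.modifyResource(id, patch);
    }
    // Fallback: GET the current data, apply the transformation, and PUT the result back.
    const current = await this.source.getRepresentation(id);
    const patched = await applyPatch(current, patch);
    await this.source.setRepresentation(id, patched);
  }
}
```

The real store reacts to the next store rejecting the PATCH rather than checking an optional method, but the read-modify-write structure is the same.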
RepresentationConvertingStore \u00b6 This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here. DataAccessorBasedStore \u00b6 Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a new storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.","title":"Resource Store"},{"location":"architecture/features/protocol/resource-store/#resource-store","text":"The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend .","title":"Resource store"},{"location":"architecture/features/protocol/resource-store/#monitoringstore","text":"This store emits the events that are necessary to send notifications when resources change. There are 4 different events that can be emitted: this.emit('changed', identifier, activity) : is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStream terms. this.emit(AS.Create, identifier) : is emitted for every resource that was created by the call to the store. this.emit(AS.Update, identifier) : is emitted for every resource that was updated by the call to the store. this.emit(AS.Delete, identifier) : is emitted for every resource that was deleted by the call to the store. A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore , an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is.","title":"MonitoringStore"},{"location":"architecture/features/protocol/resource-store/#indexrepresentationstore","text":"When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. 
All these values are the defaults and can be configured for other resources and media types.","title":"IndexRepresentationStore"},{"location":"architecture/features/protocol/resource-store/#lockingresourcestore","text":"To prevent data corruption, the server locks resources when they are targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress.","title":"LockingResourceStore"},{"location":"architecture/features/protocol/resource-store/#patchingstore","text":"PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since they involve both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function, so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store.","title":"PatchingStore"},{"location":"architecture/features/protocol/resource-store/#representationconvertingstore","text":"This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here.","title":"RepresentationConvertingStore"},{"location":"architecture/features/protocol/resource-store/#dataaccessorbasedstore","text":"Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a new storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.","title":"DataAccessorBasedStore"},{"location":"contributing/making-changes/","text":"Pull requests \u00b6 The community server is fully written in TypeScript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the number of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. 
Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.","title":"Pull requests"},{"location":"contributing/making-changes/#pull-requests","text":"The community server is fully written in TypeScript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the number of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.","title":"Pull requests"},{"location":"contributing/release/","text":"Releasing a new version \u00b6 This is only relevant if you are a developer with push access responsible for doing a new release. Steps to follow: Merge main into versions/x.0.0 . Verify if there are issues when upgrading an existing installation to the new version. Can the data still be accessed? Does authentication still work? Is there an issue upgrading any of the dependent repositories (see below for links)? None of the above has to be blocking per se, but should be noted in the release notes if relevant. Verify that the RELEASE_NOTES.md is correct. npm run release -- -r major Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0 . Updates the package.json , and generates the new entries in CHANGELOG.md . Committed with chore(release): Release version vx.0.0 of the npm package. Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md . The postrelease script will now prompt you to manually edit the CHANGELOG.md . All entries are added in separate sections of the new release according to their commit prefixes. Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed. Press any key in your terminal when your changes are ready. 
The postrelease script will amend the release commit, create an annotated tag and push changes to origin. Merge versions/x.0.0 into main and push. Do a GitHub release. npm publish Check if there is a next tag that needs to be replaced. Rename the versions/x.0.0 branch to the next version. Update .github/workflows/schedule.yml and .github/dependabot.yml to point at the new branch. Potentially upgrade dependent repositories: Recipes at https://github.com/CommunitySolidServer/recipes/ Tutorials at https://github.com/CommunitySolidServer/tutorials/ Changes when doing a pre-release of a major version: Version with npm run release -- -r major --prerelease alpha Do not merge versions/x.0.0 into main . Publish with npm publish --tag next . Do not update the branch or anything related.","title":"Releases"},{"location":"contributing/release/#releasing-a-new-version","text":"This is only relevant if you are a developer with push access responsible for doing a new release. Steps to follow: Merge main into versions/x.0.0 . Verify if there are issues when upgrading an existing installation to the new version. Can the data still be accessed? Does authentication still work? Is there an issue upgrading any of the dependent repositories (see below for links)? None of the above has to be blocking per se, but should be noted in the release notes if relevant. Verify that the RELEASE_NOTES.md is correct. npm run release -- -r major Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0 . Updates the package.json , and generates the new entries in CHANGELOG.md . Committed with chore(release): Release version vx.0.0 of the npm package. Optionally run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md . The postrelease script will now prompt you to manually edit the CHANGELOG.md . All entries are added in separate sections of the new release according to their commit prefixes. Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed. Press any key in your terminal when your changes are ready. The postrelease script will amend the release commit, create an annotated tag and push changes to origin. Merge versions/x.0.0 into main and push. Do a GitHub release. npm publish Check if there is a next tag that needs to be replaced. Rename the versions/x.0.0 branch to the next version. Update .github/workflows/schedule.yml and .github/dependabot.yml to point at the new branch. Potentially upgrade dependent repositories: Recipes at https://github.com/CommunitySolidServer/recipes/ Tutorials at https://github.com/CommunitySolidServer/tutorials/ Changes when doing a pre-release of a major version: Version with npm run release -- -r major --prerelease alpha Do not merge versions/x.0.0 into main . Publish with npm publish --tag next . Do not update the branch or anything related.","title":"Releasing a new version"},{"location":"usage/client-credentials/","text":"Automating authentication with Client Credentials \u00b6 One potential issue for scripts and other applications is that they require user interaction to log in and authenticate. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once an account has been created as described in the Identity Provider section , users can request a token that apps can use to authenticate without user input. 
All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code showing how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function. Generating a token \u00b6 The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json (); Requesting an Access token \u00b6 The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed for encryption. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token : accessToken } = await response . json (); Using the Access token to make an authenticated request \u00b6 Once you have an Access token, you can use it for authenticated requests until it expires. 
import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' ); Deleting a token \u00b6 You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with a JSON object containing your email and password as the body. The response will be a JSON list containing all your tokens. Deleting a token also requires doing a POST to the same URL, but adding a delete key to the JSON input object with the ID of the token you want to remove as its value.","title":"Client credentials"},{"location":"usage/client-credentials/#automating-authentication-with-client-credentials","text":"One potential issue for scripts and other applications is that they require user interaction to log in and authenticate. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once an account has been created as described in the Identity Provider section , users can request a token that apps can use to authenticate without user input. All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code showing how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function.","title":"Automating authentication with Client Credentials"},{"location":"usage/client-credentials/#generating-a-token","text":"The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json ();","title":"Generating a token"},{"location":"usage/client-credentials/#requesting-an-access-token","text":"The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. 
import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed for encryption. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need request a new Access token. const { access_token : accessToken } = await response . json ();","title":"Requesting an Access token"},{"location":"usage/client-credentials/#using-the-access-token-to-make-an-authenticated-request","text":"Once you have an Access token, you can use it for authenticated requests until it expires. import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' );","title":"Using the Access token to make an authenticated request"},{"location":"usage/client-credentials/#deleting-a-token","text":"You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with as body a JSON object containing your email and password. The response will be a JSON list containing all your tokens. Deleting a token requires also doing a POST to the same URL, but adding a delete key to the JSON input object with as value the ID of the token you want to remove.","title":"Deleting a token"},{"location":"usage/example-requests/","text":"Interacting with the server \u00b6 PUT : Creating resources for a given URL \u00b6 Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/myfile.ttl POST : Creating resources at a generated URL \u00b6 Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource. 
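The same POST can also be scripted. Below is a minimal sketch using node-fetch, assuming the server runs at http://localhost:3000/ ; like the other JavaScript examples in these docs it uses top-level await, so wrap it in an async function if your engine does not support that.

```typescript
import fetch from 'node-fetch';

// POST a plain text resource; the server chooses the resource URL.
const response = await fetch('http://localhost:3000/', {
  method: 'POST',
  headers: { 'content-type': 'text/plain' },
  body: 'abc',
});

// The generated URL is reported in the Location header.
console.log(response.headers.get('location'));
```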
GET : Retrieving resources \u00b6 Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl DELETE : Deleting resources \u00b6 curl -X DELETE http://localhost:3000/myfile.txt PATCH : Modifying resources \u00b6 Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: . _:rename a solid:InsertDeletePatch; solid:inserts { . }.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { }\" \\ http://localhost:3000/myfile.ttl HEAD : Retrieve resources headers \u00b6 curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt OPTIONS : Retrieve resources communication options \u00b6 curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"Example request"},{"location":"usage/example-requests/#interacting-with-the-server","text":"","title":"Interacting with the server"},{"location":"usage/example-requests/#put-creating-resources-for-a-given-url","text":"Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/myfile.ttl","title":"PUT: Creating resources for a given URL"},{"location":"usage/example-requests/#post-creating-resources-at-a-generated-url","text":"Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \" .\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource.","title":"POST: Creating resources at a generated URL"},{"location":"usage/example-requests/#get-retrieving-resources","text":"Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl","title":"GET: Retrieving resources"},{"location":"usage/example-requests/#delete-deleting-resources","text":"curl -X DELETE http://localhost:3000/myfile.txt","title":"DELETE: Deleting resources"},{"location":"usage/example-requests/#patch-modifying-resources","text":"Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: . _:rename a solid:InsertDeletePatch; solid:inserts { . 
}.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { }\" \\ http://localhost:3000/myfile.ttl","title":"PATCH: Modifying resources"},{"location":"usage/example-requests/#head-retrieve-resources-headers","text":"curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt","title":"HEAD: Retrieve resources headers"},{"location":"usage/example-requests/#options-retrieve-resources-communication-options","text":"curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"OPTIONS: Retrieve resources communication options"},{"location":"usage/identity-provider/","text":"Identity Provider \u00b6 Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ . Registering an account \u00b6 To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties 3 features together on the same page: Creating an account on the server. Creating or linking a WebID to your account. Creating a pod on the server. Account \u00b6 To create an account you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in future the plan is to also use this for account/pod management. WebID \u00b6 We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will also need to be created then, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose if you want to be able to use this server as your IDP for this WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself. Pod \u00b6 To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ . Logging in \u00b6 When using an authenticating client, you will be redirected to a login screen asking for your email and password. 
After that you will be redirected to a page showing some basic information about the client. There you need to consent to this client being allowed to identify itself using your WebID. As a result, the server will send a token back to the client that contains all the information needed to use your WebID. Forgot password \u00b6 If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case. JSON API \u00b6 All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ , but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored. IDP configuration \u00b6 The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several options that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options. access \u00b6 The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as its authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work there needs to be a .acl resource in /idp/ when using WebACL so resources can be accessed as usual when the server starts up. 
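As an illustration only: the file below is not shipped with the server, and the exact modes your setup needs may differ, but a WebACL resource at /idp/register/.acl that keeps the registration API readable by everyone could look roughly like this.

```turtle
# Hypothetical /idp/register/.acl granting public read access.
@prefix acl: <http://www.w3.org/ns/auth/acl#>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.

<#public>
    a acl:Authorization;
    acl:agentClass foaf:Agent;   # any agent, logged in or not
    acl:accessTo <./>;           # the container itself
    acl:default <./>;            # and everything inside it
    acl:mode acl:Read.
```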
Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone. email \u00b6 In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import. handler \u00b6 There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look. pod \u00b6 The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts. registration \u00b6 This setting allows you to enable/disable registration on the server. Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"Identity provider"},{"location":"usage/identity-provider/#identity-provider","text":"Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ .","title":"Identity Provider"},{"location":"usage/identity-provider/#registering-an-account","text":"To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties 3 features together on the same page: Creating an account on the server. Creating or linking a WebID to your account. Creating a pod on the server.","title":"Registering an account"},{"location":"usage/identity-provider/#account","text":"To create an account you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in the future the plan is to also use this for account/pod management.","title":"Account"},{"location":"usage/identity-provider/#webid","text":"We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will also need to be created then, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose whether you want to be able to use this server as your IDP for this WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. 
After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself.","title":"WebID"},{"location":"usage/identity-provider/#pod","text":"To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ .","title":"Pod"},{"location":"usage/identity-provider/#logging-in","text":"When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client. There you need to consent that this client is allowed to identify using your WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID.","title":"Logging in"},{"location":"usage/identity-provider/#forgot-password","text":"If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case.","title":"Forgot password"},{"location":"usage/identity-provider/#json-api","text":"All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . 
rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored.","title":"JSON API"},{"location":"usage/identity-provider/#idp-configuration","text":"The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options","title":"IDP configuration"},{"location":"usage/identity-provider/#access","text":"The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work there needs to be a .acl resource in /idp/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone.","title":"access"},{"location":"usage/identity-provider/#email","text":"In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import.","title":"email"},{"location":"usage/identity-provider/#handler","text":"There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look.","title":"handler"},{"location":"usage/identity-provider/#pod_1","text":"The pod options determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts.","title":"pod"},{"location":"usage/identity-provider/#registration","text":"This setting allows you to enable/disable registration on the server. Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"registration"},{"location":"usage/metadata/","text":"Editing metadata of resources \u00b6 What is a description resource \u00b6 Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource and therefore description resources can not be created or deleted directly. 
Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource. Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. The default CSS configurations use as a convention that http://example.org/resource has http://example.org/resource.meta as its description resource. How to edit the metadata of a resource \u00b6 Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow ). PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server. Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type ). Instead, the lifecycle of description resources is managed by the server. Protected metadata \u00b6 Some metadata is managed by the server and can not be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError ) when trying to change this protected metadata. Preserving metadata \u00b6 PUT requests on a resource will reset the description resource. There is however a way to keep the contents of the description resource prior to the PUT request: adding the HTTP Link header targeting the description resource with rel=\"preserve\" . When the resource URL is http://localhost:3000/foobar , preserving its description resource when updating its contents can be achieved as in the following example: curl -X PUT 'http://localhost:3000/foobar' \\ -H 'Content-Type: text/turtle' \\ -H 'Link: ;rel=\"preserve\"' \\ -d \" .\" Impact on creating containers \u00b6 When creating a container, the input body is ignored, and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. Example of a workflow for editing a description resource \u00b6 In this example, we add an inbox description to http://localhost:3000/foo/ . This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification . We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/ . Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource. curl --head 'http://localhost:3000/foo/' which will produce a response with at least these headers: HTTP/1.1 200 OK Link: ; rel = \"describedby\" Now that we have the URL of the description resource, we create a patch for adding the inbox to the description of the resource. curl -X PATCH 'http://localhost:3000/foo/.meta' \\ -H 'Content-Type: text/n3' \\ --data-raw '@prefix solid: . <> a solid:InsertDeletePatch; solid:inserts { . }.' After this update, we can verify that the inbox was added by performing a GET request to the description resource: curl 'http://localhost:3000/foo/.meta' The resulting body will look like this: @prefix dc: . @prefix ldp: . @prefix posix: . @prefix xsd: . 
a ldp : Container , ldp : BasicContainer , ldp : Resource ; dc : modified \"2022-06-09T08:17:07.000Z\" ^^ xsd : dateTime ; ldp : inbox ;. This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers. curl -v 'http://localhost:3000/foo/' HTTP/1.1 200 OK Link: ; rel = \"http://www.w3.org/ns/ldp#inbox\"","title":"Metadata"},{"location":"usage/metadata/#editing-metadata-of-resources","text":"","title":"Editing metadata of resources"},{"location":"usage/metadata/#what-is-a-description-resource","text":"Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource and therefore description resources can not be created or deleted directly. Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource. Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. The default CSS configurations use as a convention that http://example.org/resource has http://example.org/resource.meta as its description resource.","title":"What is a description resource"},{"location":"usage/metadata/#how-to-edit-the-metadata-of-a-resource","text":"Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow ). PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server. Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type ). Instead, the lifecycle of description resources is managed by the server.","title":"How to edit the metadata of a resource"},{"location":"usage/metadata/#protected-metadata","text":"Some metadata is managed by the server and can not be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError ) when trying to change this protected metadata.","title":"Protected metadata"},{"location":"usage/metadata/#preserving-metadata","text":"PUT requests on a resource will reset the description resource. There is however a way to keep the contents of description resource prior to the PUT request: adding the HTTP Link header targeting the description resource with rel=\"preserve\" . When the resource URL is http://localhost:3000/foobar , preserving its description resource when updating its contents can be achieved like in the following example: curl -X PUT 'http://localhost:3000/foobar' \\ -H 'Content-Type: text/turtle' \\ -H 'Link: ;rel=\"preserve\"' \\ -d \" .\"","title":"Preserving metadata"},{"location":"usage/metadata/#impact-on-creating-containers","text":"When creating a container the input body is ignored and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. 
This is done to clearly differentiate between a container's representation and its metadata.","title":"Impact on creating containers"},{"location":"usage/metadata/#example-of-a-workflow-for-editing-a-description-resource","text":"In this example, we add an inbox description to http://localhost:3000/foo/ . This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification . We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/ . Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource. curl --head 'http://localhost:3000/foo/' which will produce a response with at least these headers: HTTP/1.1 200 OK Link: ; rel = \"describedby\" Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource. curl -X PATCH 'http://localhost:3000/foo/.meta' \\ -H 'Content-Type: text/n3' \\ --data-raw '@prefix solid: . <> a solid:InsertDeletePatch; solid:inserts { . }.' After this update, we can verify that the inbox is added by performing a GET request to the description resource curl 'http://localhost:3000/foo/.meta' With as result for the body @prefix dc: . @prefix ldp: . @prefix posix: . @prefix xsd: . a ldp : Container , ldp : BasicContainer , ldp : Resource ; dc : modified \"2022-06-09T08:17:07.000Z\" ^^ xsd : dateTime ; ldp : inbox ;. This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers. curl -v 'http://localhost:3000/foo/' HTTP/1.1 200 OK Link: ; rel = \"http://www.w3.org/ns/ldp#inbox\"","title":"Example of a workflow for editing a description resource"},{"location":"usage/seeding-pods/","text":"How to seed Accounts and Pods \u00b6 If you need to seed accounts and pods, the --seededPodConfigJson command line option can be used with as value the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least a podName , email , and password field. For example: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" } ] You may optionally specify other parameters as described in the Identity Provider documentation . For example, to set up a pod without registering the generated WebID with the Identity Provider: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" , \"webId\" : \"https://id.inrupt.com/example\" , \"register\" : false } ] This feature cannot be used to register pods with pre-existing WebIDs, which requires an interactive validation step.","title":"Seeding pods"},{"location":"usage/seeding-pods/#how-to-seed-accounts-and-pods","text":"If you need to seed accounts and pods, the --seededPodConfigJson command line option can be used with as value the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least a podName , email , and password field. For example: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" } ] You may optionally specify other parameters as described in the Identity Provider documentation . 
For example, to set up a pod without registering the generated WebID with the Identity Provider: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" , \"webId\" : \"https://id.inrupt.com/example\" , \"register\" : false } ] This feature cannot be used to register pods with pre-existing WebIDs, which requires an interactive validation step.","title":"How to seed Accounts and Pods"}]} \ No newline at end of file diff --git a/5.x/sitemap.xml b/5.x/sitemap.xml index 6146159f5..b0a26ab6f 100644 --- a/5.x/sitemap.xml +++ b/5.x/sitemap.xml @@ -2,92 +2,92 @@ https://communitysolidserver.github.io/CommunitySolidServer/5.x/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/core/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/dependency-injection/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/overview/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/cli/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/http-handler/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/initialization/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/protocol/authorization/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/protocol/overview/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/protocol/parsing/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/architecture/features/protocol/resource-store/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/contributing/making-changes/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/contributing/release/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/usage/client-credentials/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/usage/example-requests/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/usage/identity-provider/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/usage/metadata/ - 2022-08-22 + 2022-08-25 daily https://communitysolidserver.github.io/CommunitySolidServer/5.x/usage/seeding-pods/ - 2022-08-22 + 2022-08-25 daily \ No newline at end of file diff --git a/5.x/sitemap.xml.gz b/5.x/sitemap.xml.gz index b0079aeb31375a3d49be066b53a09449bf4e4038..6310e50918bae9e68badcf8d7fe704ae83b954da 100644 GIT binary patch literal 426 zcmV;b0agAViwFoXMh9a8|8r?{Wo=<_E_iKh0Ns~eZsQ;j$KQL3$nThJq)P3|aULr5 z0_`4P3R9ad2A3J~v8V5lbfa8gS28>pGmQS8pZOT*cE`!D_L#`Uux);{>t=<)MC-z^ zZGOEztA}RS-*q{8LSBh;plvfU&p)+391blx#tCL>V;VG>Q5vMbje6ZaHvQd7N)foc zolEDV>5B@9jEU2fgX0rL?<`@GaN5D~nBQ6#_0!+(m(u-m?$-D1QFpqi+=>G66m7p* zZ#HWEpf>m7@@>0uGb4ln-xGfJ7MwqI`o?mJR$s&Y7lA`Eqx>PmIiZNiqYe~{GX^Wu z=cHWt5TnegO3%%V16LTPk%4fhvJY=n-haLzD>y zEZG_ZE0a)8StSwB(uwd0)9g{nS2GeXp2wXBhEGa1iM=$^(t##qu)wf%4nizpWD- z1=1W~0u!4SgToB_k<)kBO;>w?L^3=WGmQS8pZOT*AC9x%>@ksxVb^?Xx6KBFiPnX2 
z*F67zRQJupaM$PL33(&Vfp*Qry!_Pqa5%K&7-v|hjcL?mCTWoVFzRjlqZ#ftQi{Ok z?Ob{v%}`WGWK5jr92}o0dS?lfgwu|Wr~J~osDJ(Mek$Eh=WhMIJ?dT;m0M9Do}(SQ zZP%&oz3Mt~`LNx%nF+#(uL<7<3(lW
diff --git a/5.x/usage/client-credentials/index.html b/5.x/usage/client-credentials/index.html
--- a/5.x/usage/client-credentials/index.html
+++ b/5.x/usage/client-credentials/index.html
            Below is example code of how to make use of these tokens. -It makes use of several utility functions from the +It makes use of several utility functions from the Solid Authentication Client. Note that the code below uses top-level await, which not all JavaScript engines support, so this should all be contained in an async function.

            @@ -1036,7 +1036,7 @@ This Access Token is only valid for a certain amount of time, after which a new }); // This is the Access token that will be used to do an authenticated request to the server. -// The JSON also contains an "expires_in" field in seconds, +// The JSON also contains an "expires_in" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token: accessToken } = await response.json();
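            To put the token to use, it can be wrapped into an authenticated fetch together with the DPoP key. The following is a minimal illustrative sketch, not taken verbatim from the docs: it assumes accessToken, dpopKey, and an expiresIn value were obtained from the steps above, that a global fetch is available (Node 18+ or a polyfill), and that requestAccessToken is a hypothetical helper that repeats the token request.

            import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core';

            // Wrap fetch so every request carries the Authorization and DPoP headers.
            const authFetch = await buildAuthenticatedFetch(fetch, accessToken, { dpopKey });

            // Any request made through authFetch is now authenticated.
            const resource = await authFetch('http://localhost:3000/private/resource.ttl');
            console.log(resource.status);

            // expires_in is in seconds: refresh the Access Token shortly before it runs out.
            // requestAccessToken is a hypothetical helper wrapping the token request above.
            setTimeout(() => requestAccessToken(), (expiresIn - 60) * 1000);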
    diff --git a/5.x/usage/example-requests/index.html b/5.x/usage/example-requests/index.html
    index f04564d38..a5a6080c1 100644
    --- a/5.x/usage/example-requests/index.html
    +++ b/5.x/usage/example-requests/index.html
    @@ -426,6 +426,8 @@
    +
    +
    +

    Create a turtle file:

    curl -X PUT -H "Content-Type: text/turtle" \
       -d "<ex:s> <ex:p> <ex:o>." \
       http://localhost:3000/myfile.ttl
    -

    -

    POST: Creating resources at a generated URL

    -

    Create a plain text file: +

    +

    POST: Creating resources at a generated URL

    +

    Create a plain text file:

    curl -X POST -H "Content-Type: text/plain" \
       -d "abc" \
       http://localhost:3000/
    -

    -

    Create a turtle file: +

    +

    Create a turtle file:

    curl -X POST -H "Content-Type: text/turtle" \
       -d "<ex:s> <ex:p> <ex:o>." \
       http://localhost:3000/
    -

    +

    The response's Location header will contain the URL of the created resource.
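
    For scripted clients, the same information can be read straight from the response object. A small sketch using the Fetch API (the URL and body are placeholders):

    // POST a new resource and read back the server-generated URL.
    const response = await fetch('http://localhost:3000/', {
      method: 'POST',
      headers: { 'Content-Type': 'text/plain' },
      body: 'abc',
    });
    console.log(response.status); // a 2xx status on success
    console.log(response.headers.get('Location')); // e.g. http://localhost:3000/<generated-name>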

    -

    GET: Retrieving resources

    -

    Retrieve a plain text file: +

    GET: Retrieving resources

    +

    Retrieve a plain text file:

    curl -H "Accept: text/plain" \
       http://localhost:3000/myfile.txt
    -

    -

    Retrieve a turtle file: + +

    Retrieve a turtle file:

    curl -H "Accept: text/turtle" \
       http://localhost:3000/myfile.ttl
    -

    -

    Retrieve a turtle file in a different serialization: + +

    Retrieve a turtle file in a different serialization:

    curl -H "Accept: application/ld+json" \
       http://localhost:3000/myfile.ttl
    -

    -

    DELETE: Deleting resources

    + +

    DELETE: Deleting resources

    curl -X DELETE http://localhost:3000/myfile.txt
     
    -

    PATCH: Modifying resources

    +

    PATCH: Modifying resources

    Modify a resource using N3 Patch:

    curl -X PATCH -H "Content-Type: text/n3" \
       --data-raw "@prefix solid: <http://www.w3.org/ns/solid/terms#>. _:rename a solid:InsertDeletePatch; solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }." \
    @@ -1088,11 +1066,11 @@
       -d "INSERT DATA { <ex:s2> <ex:p2> <ex:o2> }" \
       http://localhost:3000/myfile.ttl
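
    The same patches can also be applied programmatically. Below is a fetch-based sketch mirroring the N3 Patch example above (an illustration, not taken from the docs):

    // Apply an N3 Patch from JavaScript instead of curl.
    const patch = `@prefix solid: <http://www.w3.org/ns/solid/terms#>.
    _:rename a solid:InsertDeletePatch;
      solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }.`;

    const response = await fetch('http://localhost:3000/myfile.ttl', {
      method: 'PATCH',
      headers: { 'Content-Type': 'text/n3' },
      body: patch,
    });
    console.log(response.status); // a 2xx status indicates the patch was applied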
     
    -

    HEAD: Retrieve resources headers

    +

    HEAD: Retrieve resources headers

    curl -I -H "Accept: text/plain" \
       http://localhost:3000/myfile.txt
     
    -

    OPTIONS: Retrieve resources communication options

    +

    OPTIONS: Retrieve resources communication options

    curl -X OPTIONS -i http://localhost:3000/myfile.txt
     
    diff --git a/5.x/usage/identity-provider/index.html b/5.x/usage/identity-provider/index.html
    index 98800d02d..a6760f24b 100644
    --- a/5.x/usage/identity-provider/index.html
    +++ b/5.x/usage/identity-provider/index.html
    @@ -1127,7 +1127,7 @@

    Besides implementing the Solid protocol, the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible.

    -

    It is recommended to use the latest version +

    It is recommended to use the latest version of the Solid authentication client to interact with the server.

    The links here assume the server is hosted at http://localhost:3000/.

    @@ -1151,7 +1151,7 @@ but in future the plan is to also use this for account/pod management.

    You can either let the server create a WebID for you in a new pod, in which case that pod will also be created, or you can link an existing WebID you already have on an external server.

    -

    In case you try to link your own WebID, you can choose if you want to be able +

    If you link your own WebID, you can choose whether you want to be able to use this server as your IDP for that WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself.

    @@ -1169,7 +1169,7 @@ For example, if you choose the name test, your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me.

    The generated name also depends on the configuration you chose for your server. -If you are using the subdomain feature, +If you are using the subdomain feature, as is done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/.

    Logging in

    @@ -1190,7 +1190,7 @@ By default, the server uses the templates found in /templates/identity/ema but different templates can be used through configuration.

    These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ -with a JSON accept header, the following JSON is returned: +with a JSON accept header, the following JSON is returned:

    {
       "required": {
         "email": "string",
    @@ -1216,9 +1216,9 @@ with a JSON accept header, the following JSON is returned:
       "apiVersion": "0.3"
     }
     
    -The required and optional fields indicate which input fields are expected by the API. +

    The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. -To register a user, you can do a POST request with a JSON body containing the correct fields: +To register a user, you can do a POST request with a JSON body containing the correct fields:

    {
       "email": "test@example.com",
       "password": "secret",
    @@ -1230,7 +1230,7 @@ To register a user, you can do a POST request with a JSON body containing the correct fields:
       "podName": "test"
     }
     
    -Two fields here that are not covered on the HTML page above are rootPod and template. +

    Two fields here that are not covered on the HTML page above are rootPod and template. rootPod tells the server to put the pod in the root of the server instead of a location based on the podName. By default, the server will reject requests where this is true, except during setup. template is only used by servers running the config/dynamic.json configuration, @@ -1240,7 +1240,7 @@ so this value can usually be ignored.
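
    Putting this together, registration can be scripted as a single JSON POST. A hedged sketch (only the fields visible in the example above are included; fields elided there by the hunk boundary may also be required by your configuration):

    // Register a user through the JSON API described above.
    const response = await fetch('http://localhost:3000/idp/register/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json', Accept: 'application/json' },
      body: JSON.stringify({
        email: 'test@example.com',
        password: 'secret',
        podName: 'test',
      }),
    });
    console.log(response.status, await response.json());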

    The above descriptions cover the server's behaviour with most default configurations, but just like any other feature, this behaviour can be changed through the imports in your configuration file.

    -

    All available options can be found in +

    All available options can be found in the config/identity/ folder. Below, we go a bit deeper into the available options.

    access

    diff --git a/5.x/usage/metadata/index.html b/5.x/usage/metadata/index.html
    index 4845b448c..d96070abd 100644
    --- a/5.x/usage/metadata/index.html
    +++ b/5.x/usage/metadata/index.html
    @@ -1029,7 +1029,7 @@ The default CSS configurations use the convention that http://example.org/resource has http://example.org/resource.meta as its description resource.

    How to edit the metadata of a resource

    The metadata of a resource is edited by modifying its description resource directly. -This can only be done using PATCH requests +This can only be done using PATCH requests (see example workflow).

    PUT requests on description resources are not allowed, because they would replace the entire resource state, @@ -1041,14 +1041,15 @@ Instead, the lifecycle of description resources is managed by the server.

    Some metadata is managed by the server and cannot be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError) when trying to change this protected metadata.
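
    As an illustration, here is a sketch of a PATCH that tries to insert a server-managed triple and should therefore be rejected (the resource URL is a placeholder; dc:modified is the server-managed modification date shown in the example later on this page):

    // Attempt to overwrite the server-managed modification date.
    const response = await fetch('http://localhost:3000/resource.meta', {
      method: 'PATCH',
      headers: { 'Content-Type': 'text/n3' },
      body: `@prefix solid: <http://www.w3.org/ns/solid/terms#>.
    @prefix dc: <http://purl.org/dc/terms/>.
    @prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
    _:patch a solid:InsertDeletePatch;
      solid:inserts { <http://localhost:3000/resource> dc:modified "2000-01-01T00:00:00.000Z"^^xsd:dateTime. }.`,
    });
    console.log(response.status); // expected: 409 (ConflictHttpError)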

    Preserving metadata

    -

    PUT requests on a resource will reset the description resource. +

    PUT requests on a resource will reset the description resource. There is, however, a way to keep the contents of the description resource prior to the PUT request: add the HTTP Link header targeting the description resource with rel="preserve".

    -

    When the resource URL is http://localhost:3000/foobar, preserving its description resource when updating its contents can be achieved like in the following example:

    +

    When the resource URL is http://localhost:3000/foobar, preserving its description resource when updating its contents +can be achieved as in the following example:

    curl -X PUT 'http://localhost:3000/foobar' \
     -H 'Content-Type: text/turtle' \
     -H 'Link: <http://localhost:3000/foobar.meta>;rel="preserve"' \
    --d "<ex:s> <ex:p> <ex:o>." 
    +-d "<ex:s> <ex:p> <ex:o>."
     

    Impact on creating containers

    When creating a container, the input body is ignored @@ -1066,10 +1067,10 @@ we first send a HEAD request to the resource to obtain the URL of its description resource.

    curl --head 'http://localhost:3000/foo/'
     

    which will produce a response with at least these headers:

    -

    HTTP/1.1 200 OK
    +
    HTTP/1.1 200 OK
     Link: <http://localhost:3000/foo/.meta>; rel="describedby"
     
    -Now that we have the URL of the description resource, +

    Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource.

    curl -X PATCH 'http://localhost:3000/foo/.meta' \
     -H 'Content-Type: text/n3' \
    @@ -1078,9 +1079,9 @@ we create a patch for adding the inbox in the description of the resource.

    solid:inserts { <http://localhost:3000/foo/> <http://www.w3.org/ns/ldp#inbox> <http://localhost:3000/inbox/>. }.'

    After this update, we can verify that the inbox has been added by performing a GET request to the description resource:

    -

    curl 'http://localhost:3000/foo/.meta'
    +
    curl 'http://localhost:3000/foo/.meta'
     
    -With as result for the body +

    With the following body as a result:

    @prefix dc: <http://purl.org/dc/terms/>.
     @prefix ldp: <http://www.w3.org/ns/ldp#>.
     @prefix posix: <http://www.w3.org/ns/posix/stat#>.
    @@ -1089,7 +1090,7 @@ With as result for the body
     <http://localhost:3000/foo/> a ldp:Container, ldp:BasicContainer, ldp:Resource;
         dc:modified "2022-06-09T08:17:07.000Z"^^xsd:dateTime;
         ldp:inbox <http://localhost:3000/inbox/>;.
    -

    +

    This can also be verified by sending a GET request to the subject resource itself: the inbox location is also listed in its Link headers.

    curl -v 'http://localhost:3000/foo/'
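
    The response will again contain a Link header, this time with rel http://www.w3.org/ns/ldp#inbox. Parsing it programmatically could look like the rough sketch below (the regex is a simplification, not a full RFC 8288 Link parser):

    // Discover the inbox of a resource from its Link headers.
    const response = await fetch('http://localhost:3000/foo/');
    const link = response.headers.get('Link') ?? '';
    const match = link.match(/<([^>]*)>;\s*rel="http:\/\/www\.w3\.org\/ns\/ldp#inbox"/);
    console.log(match?.[1]); // e.g. http://localhost:3000/inbox/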
    diff --git a/5.x/usage/seeding-pods/index.html b/5.x/usage/seeding-pods/index.html
    index 985ea8d5a..3bb5aa628 100644
    --- a/5.x/usage/seeding-pods/index.html
    +++ b/5.x/usage/seeding-pods/index.html
    @@ -886,12 +886,12 @@
     
     
     

    How to seed Accounts and Pods

    -

    If you need to seed accounts and pods, +

    If you need to seed accounts and pods, you can use the --seededPodConfigJson command line option, whose value must be the path to a JSON file containing configurations for every required pod. -The file needs to contain an array of JSON objects, -with each object containing at least a podName, email, and password field.

    -

    For example: +The file needs to contain an array of JSON objects, +with each object containing at least a podName, email, and password field.

    +

    For example:

    [
       {
         "podName": "example",
    @@ -899,10 +899,10 @@ with each object containing at least a podName, email,
         "password": "abc123"
       }
     ]
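
    Given such a file, the server is then started with the option pointing at it. A hypothetical Node sketch that writes the file and launches the server (the npx invocation follows the usual way of starting the CSS):

    import { writeFileSync } from 'node:fs';
    import { spawn } from 'node:child_process';

    // Write the seed configuration and start the server with it.
    const pods = [{ podName: 'example', email: 'hello@example.com', password: 'abc123' }];
    writeFileSync('seeded-pods.json', JSON.stringify(pods, null, 2));
    spawn('npx', ['@solid/community-server', '--seededPodConfigJson', 'seeded-pods.json'], {
      stdio: 'inherit',
    });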
    -

    -

    You may optionally specify other parameters +

    +

    You may optionally specify other parameters as described in the Identity Provider documentation.

    -

    For example, to set up a pod without registering the generated WebID with the Identity Provider: +

    For example, to set up a pod without registering the generated WebID with the Identity Provider:

    [
       {
         "podName": "example",
    @@ -912,7 +912,7 @@ as described in the Identity Provider documentation
         "register": false
       }
     ]
    -

    +

    This feature cannot be used to register pods with pre-existing WebIDs, since that requires an interactive validation step.